Mitigating concerns for COTS application database migrations.

I spend a lot of my working time and effort moving on-premises databases to the cloud. When I'm not doing that, I'm most often planning how to do it more efficiently. While I think almost everyone today agrees on the benefits of going to the cloud, there are a couple of sentences I keep hearing over and over when we're planning to move databases there.

Cloud? Nnnnnnnooo!

“We haven’t tested it with version X, and can’t guarantee that it works!”

“We only support SQL Server databases running in Virtual Machines”

That, good reader, is the typical sound of a commercial-off-the-shelf (COTS) application making its way, kicking and screaming, towards the public cloud. Considering how often I hear these two things being said, it's easy to end up thinking that public clouds are full of burning wrecks of old applications. However, my own experience from having migrated plenty of SQL Server workloads is that, nearly 100% of the time, I don't have problems with versions or PaaS services (well, except that one time).

In this post, I'll write about a single feature that can be used to ensure that your database will be just fine, or even better off, and then say something about Managed Instances.

What are COTS applications anyway?

Commercial-off-the-shelf (COTS) applications are software that is packaged and sold as ready to use and, very frequently, offers limited customization options. As such software is typically sold in large numbers, it commonly comes with a very strict set of requirements to run. From the vendor's perspective, these requirements make a lot of sense, as they mean the vendor doesn't need to test the application against every possible combination of hardware and software on the market.

However, from the consumer's perspective, this comes with some issues. Very often, support for new versions of operating systems and database engines is only released with a new version of the application, forcing you to wait before you can upgrade your servers. And rather frequently, you also need to pay the vendor to get the upgrade, and to get their blessing to upgrade your own IT infrastructure. I don't know about you, but having a 3rd-party software vendor dictate when you can upgrade your IT infrastructure doesn't sound too great. Even more so when cybercriminals increasingly target outdated and poorly protected systems, lagging behind with upgrades sounds less than optimal.

So, what can we do to overcome the version discussion? I’ve had some success in educating software vendors about the backward compatibility feature in SQL Server.

How does SQL Server handle backward compatibility?

By using compatibility levels. This feature has been in SQL Server forever, and as you can probably guess, it's there to provide backward compatibility with older versions of SQL Server. And it goes way, way back. Since SQL Server 2008/2008 R2, Microsoft has kept backward compatibility with all the older versions, meaning that you can run SQL Server 2019 (and future versions) with a database set to SQL Server 2008 compatibility.

Now, there are a couple of things to keep in mind when it comes to compatibility levels.

  • Compatibility level is set at the database level, and databases in the same instance can use different compatibility levels.
  • It impacts only T-SQL and query processing behavior, meaning the same query on similar hardware should get a similar query plan on old and new versions of SQL Server.
  • Compatibility level isn't upgraded automatically; it needs to be changed manually.
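Because the setting is per-database, it's easy to check where you stand before a migration. A minimal sketch in T-SQL (the catalog view `sys.databases` exposes the current level for every database on the instance):

```sql
-- List each database and its current compatibility level
-- (100 = SQL Server 2008, 110 = 2012, 120 = 2014, 130 = 2016,
--  140 = 2017, 150 = 2019)
SELECT name, compatibility_level
FROM sys.databases
ORDER BY name;
```

Running this on the source instance before a migration gives you a quick inventory of which databases are already pinned to an older level.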

There is one scenario that compatibility levels will not help with, and that is when a feature has been removed from SQL Server entirely. On a positive note, I haven't seen Microsoft actually remove any features since 2008. They've certainly deprecated many, and warn against using them, but they haven't been removed. Over the last 3 years, I've only seen this happen once, with a database coming from SQL Server 2005 (yeah, don't ask) that couldn't be migrated directly to anything "newer" than 2008 R2 without changing some stored procedures.

Compatibility level impact on performance

While compatibility level originally wasn't such a big deal from a performance standpoint, in more recent versions it actually has quite a bit more impact. Compatibility level 120 is where we first got the new Cardinality Estimator, and lately, a good number of performance optimizations and query processing features have made their way into SQL Server. To understand the behavior, check the table below.

Comparison chart for compatibility level features in SQL Server.
Source: Microsoft Docs

With SQL Server 2017 and 2019, we also got some nice features under the Intelligent Query Processing umbrella, such as Adaptive Joins, Scalar UDF Inlining, Batch Mode on Rowstore, and Memory Grant Feedback.
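A handy middle ground when testing: recent builds let you keep the database on the new compatibility level but run an individual query under an older optimizer behavior via a query hint. A sketch, assuming a hypothetical `dbo.Orders` table (the `USE HINT` syntax is real; the table and filter are illustration only):

```sql
-- Run one query with the SQL Server 2012 (level 110) optimizer behavior,
-- without touching the database-wide compatibility level.
-- dbo.Orders and CustomerID are hypothetical names for this example.
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.CustomerID = 42
OPTION (USE HINT ('QUERY_OPTIMIZER_COMPATIBILITY_LEVEL_110'));
```

This is useful for pinning down whether a regression after an upgrade is an optimizer-behavior issue or something else entirely.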

Decoupling supported versions from running versions

What compatibility levels in SQL Server provide is a great way to decouple the versions you're running from the versions the application supports. They give you the freedom to move to the latest and greatest, ensuring the security and reliability of your systems, while still providing the expected behavior to applications. You can upgrade the OS and database engine when you need to, and once the application supports a newer version, it's a single line of T-SQL to change the compatibility level. It's so simple that most software vendors don't have (good) counter-arguments for this approach.
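That single line looks like this; `YourAppDb` is a placeholder for your own database name:

```sql
-- Pin the database to SQL Server 2008 behavior right after migration...
ALTER DATABASE YourAppDb SET COMPATIBILITY_LEVEL = 100;

-- ...and once the vendor certifies the newer version, raise it
-- (150 = SQL Server 2019 behavior)
ALTER DATABASE YourAppDb SET COMPATIBILITY_LEVEL = 150;
```

No application downtime, no reinstall, no waiting for the vendor's release cycle to upgrade your infrastructure.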

What about the PaaS database offering?

If I can't move a database by replicating a VM, going with Managed Instances is usually my go-to solution. Why? Because it takes away a lot of the pains that come with managing SQL Servers. No need to design and deploy high availability, install the OS or SQL Server, manage backups, patching, etc. You just use it, and (almost) everything else is Microsoft's problem.

So, when someone tells you that they only support SQL Server running in VMs, that's almost always not the case. Microsoft has made a good effort to provide close to 100% compatibility with on-premises SQL Server in Managed Instances, and if you check the list of differences between SQL Server on a VM and Managed Instance, you'll note that it's relatively short. And many of the features on that list, well, they're not something you typically see used.

One of the main reasons to run Managed Instances is that they're evergreen. The discussion then easily turns to the one we covered in the previous section: compatibility levels. They work exactly as they do with SQL Server running in a VM.

The most common reason for me not to use Managed Instance isn't the lack of feature support, but the resource limitations of Managed Instances. Microsoft has been raising these limits over time, and you can already use them to host some serious workloads, but in some cases, you will just need the flexibility of a VM anyway.

Wrapping it up

Thanks for reading! If you ever need to have these same conversations with software vendors, hopefully you can use some of these points to convince them to move with the times.
