I have previously written quite a few posts about how much I like the Platform-as-a-Service databases for SQL Server (and for databases in general), and I do like them quite a bit. But would I recommend them for all use cases and workloads? Heck no! At the moment there are some features and limitations in Azure SQL PaaS databases that just wouldn't work all that well with some of the SQL Server workloads I have seen.
Also, when we look at Azure, there are some really cool features available for VMs that you can start using today, which make the good old VM an interesting option.
What are the limitations of PaaS Databases?
There are a couple that really limit the use cases for SQL Server PaaS in Azure, at least from my point of view. These are related to the resources you get: CPU, storage, and memory. More precisely, the problem I see with PaaS is how these resources are tied to each other. To simplify it a bit: to get more of anything, you first need to get more vCPU cores, and the cores you get tend to be slooooow.
Let’s start with the closest option to what we have for VMs, the Managed Instance.
Managed Instance CPU and Memory limitations
Managed Instances currently come with two hardware generations, Gen4 and Gen5. Gen4 is, last I checked, being deprecated, so we don't really have too many viable options there at the moment. With Managed Instance, the amount of memory you get is directly tied to the number of vCores you have.
| | Gen4 | Gen5 |
|---|---|---|
| Processor type | Intel E5-2673 v3 (Haswell) 2.4GHz | Intel E5-2673 v4 (Broadwell) 2.3GHz and Intel SP-8160 (Skylake) |
| Number of vCores | 8, 16, 24 | 4, 8, 16, 24, 32, 40, 64, 80 |
| Memory per vCore | 7GB | 5.1GB |
And you’re not getting a lot of it. In fact, I don’t think I’ve ever seen this kind of memory configuration in any VM hosting a SQL Server instance; it would actually be quite nice to know where these numbers come from. I do know that Microsoft collects quite a lot of data from SQL Server deployments both in Azure and on-premises, so these values are likely based on actual numbers they’re seeing out in the wild.
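To make the ratios above concrete, here’s a small sketch of the Gen5 memory math, using the 5.1GB-per-vCore figure from the table:

```python
# Gen5 Managed Instance: memory scales linearly with vCores (5.1GB per vCore).
GEN5_MEMORY_PER_VCORE_GB = 5.1
GEN5_VCORE_OPTIONS = [4, 8, 16, 24, 32, 40, 64, 80]

def gen5_memory_gb(vcores: int) -> float:
    """Total instance memory for a given Gen5 Managed Instance vCore count."""
    if vcores not in GEN5_VCORE_OPTIONS:
        raise ValueError(f"No Gen5 Managed Instance size with {vcores} vCores")
    return vcores * GEN5_MEMORY_PER_VCORE_GB

for v in GEN5_VCORE_OPTIONS:
    print(f"{v:>2} vCores -> {gen5_memory_gb(v):5.1f} GB memory")
```

Even the smallest size where memory stops growing, 80 vCores, tops out at just 408GB, which is the point of the complaint: you can’t buy memory without buying (slow) cores.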
The other notable piece of information here is the clock speed. As of today you can find processors on the market that are nearing 4GHz in base clock speed, and closer to 5GHz in turbo mode. Compared to these, the options in Managed Instances are about as speedy as your average grandmother on a cold, snowy morning.
These same hardware and capacity limitations apply to both the General Purpose and Business Critical tiers; the real difference is that with Business Critical you also get those resources for offloaded read-only workloads. So if you have an application that can offload its read-only work, you’re in a slightly better situation.
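Read-only offloading is opt-in per connection: adding `ApplicationIntent=ReadOnly` to the connection string routes the session to a read-only replica. A minimal sketch of building such a connection string (the server and database names are placeholders, and the helper is hypothetical):

```python
# Business Critical read scale-out: sessions that declare themselves read-only
# via ApplicationIntent=ReadOnly get routed to a readable secondary replica.
def build_connection_string(server: str, database: str, read_only: bool = False) -> str:
    """Assemble a SQL Server connection string, optionally targeting a read replica."""
    parts = [
        f"Server=tcp:{server},1433",
        f"Database={database}",
        "Encrypt=yes",
    ]
    if read_only:
        parts.append("ApplicationIntent=ReadOnly")  # route to read-only replica
    return ";".join(parts)

print(build_connection_string("myserver.database.windows.net", "mydb", read_only=True))
```

The application still has to be written so that its read-only work actually uses the read-only connection; the platform won’t split the workload for you.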
What about the Managed Instance storage then?
There are limitations there as well, again nicely tied to the number of vCores, but there’s also some serious throttling to consider. These aren’t too bad though; you can get a decent amount of storage with relatively few vCPU cores.
| Tier | Tempdb size | Max storage size |
|---|---|---|
| General Purpose (Gen5) | 24GB per vCore | 2TB for 4 vCores<br>8TB for other sizes |
| Business Critical (Gen5) | Up to instance storage size | 1TB for 4, 8, 16 vCores<br>2TB for 24 vCores<br>4TB for 32, 40, 64, 80 vCores |
Well, they’re still pretty bad for the Business Critical tier, but for General Purpose you’re OK from a size perspective. But what about the other performance characteristics?
| Tier | Data throughput | Log throughput | Latency |
|---|---|---|---|
| General Purpose (Gen5) | 100-250MB/s per file | 3MB/s per vCore, up to a maximum of 22MB/s | 5-10ms |
| Business Critical | Not limited | 4MB/s per vCore, up to a maximum of 48MB/s | 1-2ms |
These are a bit more interesting; especially the transaction log throughput limitation can be quite surprising. Also, if you’re dealing with the General Purpose tier, you definitely need to pay attention to database file layout, the more files the better! Except it’s not just the number of database files; their size matters too…
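The log-throughput caps in the table amount to a hard ceiling per tier. A quick sketch of that math, using only the numbers above:

```python
# Managed Instance transaction log throughput caps (figures from the table above).
def mi_log_throughput_mbps(tier: str, vcores: int) -> int:
    """Transaction log write throughput cap in MB/s for a Managed Instance."""
    if tier == "GeneralPurpose":
        return min(3 * vcores, 22)  # 3MB/s per vCore, capped at 22MB/s
    if tier == "BusinessCritical":
        return min(4 * vcores, 48)  # 4MB/s per vCore, capped at 48MB/s
    raise ValueError(f"Unknown tier: {tier}")

# From 8 vCores in General Purpose (and 12 in Business Critical) onwards,
# adding cores buys you no extra log throughput at all.
for v in (4, 8, 16, 32):
    print(f"{v:>2} vCores: GP {mi_log_throughput_mbps('GeneralPurpose', v):>2}MB/s, "
          f"BC {mi_log_throughput_mbps('BusinessCritical', v):>2}MB/s")
```

If your on-premises workload routinely writes transaction log faster than 22MB/s, the General Purpose tier will throttle it no matter how many vCores you buy.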
In the Managed Instance General Purpose tier, each data file gets more IOPS and throughput based on how large it is.
| File size | Up to 128GB | Up to 256GB | Up to 512GB | Up to 1TB | Up to 2TB | Up to 4TB | Up to 8TB |
|---|---|---|---|---|---|---|---|
| IOPS per file | 500 | 1100 | 2300 | 5000 | 7500 | 7500 | 12500 |
| Throughput per file | 100MB/s | 125MB/s | 150MB/s | 200MB/s | 250MB/s | 250MB/s | 480MB/s |
Is this looking complicated enough? If you’re running General Purpose instances, it can easily become quite a task to figure out the optimal database file layout to get the best storage performance.
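As a sketch of what that layout math looks like, here are the per-file limits from the table above applied to a hypothetical file layout:

```python
# GP Managed Instance: each data file's IOPS/throughput depends on its size.
# Size brackets and limits taken from the table above.
FILE_PERF_TIERS = [  # (max file size in GB, IOPS per file, throughput MB/s per file)
    (128, 500, 100),
    (256, 1100, 125),
    (512, 2300, 150),
    (1024, 5000, 200),
    (2048, 7500, 250),
    (4096, 7500, 250),
    (8192, 12500, 480),
]

def file_perf(size_gb: float):
    """IOPS and throughput limit for a single data file of the given size."""
    for max_gb, iops, mbps in FILE_PERF_TIERS:
        if size_gb <= max_gb:
            return iops, mbps
    raise ValueError("Files larger than 8TB are not supported")

def layout_perf(file_sizes_gb):
    """Aggregate IOPS and throughput across all data files in a layout."""
    perfs = [file_perf(s) for s in file_sizes_gb]
    return sum(i for i, _ in perfs), sum(m for _, m in perfs)

# One 1TB file vs. eight 128GB files holding the same 1TB of data:
print(layout_perf([1024]))     # single file: more IOPS per file
print(layout_perf([128] * 8))  # many small files: more aggregate throughput
```

Note how the trade-off cuts both ways: the single 1TB file gets 5000 IOPS but only 200MB/s, while eight 128GB files get 800MB/s of aggregate throughput but only 4000 IOPS. Which layout wins depends entirely on the workload.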
How does the SQL Database compare to this?
Azure SQL Database comes with two purchasing models, DTU-based or vCore-based. For the sake of comparison, we’ll now look at just the vCore model for single databases. Azure SQL Database currently has four tiers: Serverless, General Purpose, Business Critical, and Hyperscale. General Purpose and Business Critical are similar to the Managed Instance options, whereas Hyperscale and Serverless are something else entirely, so I won’t include them in this comparison.
Also similar to Managed Instance, SQL Database comes with Gen4 and Gen5 hardware, with the Gen4 hardware being deprecated and not available in all regions. For this reason, we’ll look at the Gen5 options only. As there’s quite a large number of different compute sizes, I won’t create a full table for them. If you’re interested, you can go here and check the official Microsoft documentation on these.
There are 14 different sizes in the vCore purchasing model, giving you anything between 2 and 80 vCores. To summarize the resources for the Gen5 SQL Database:
| Tier | vCores | Memory | Max log rate (MB/s) | Max IOPS |
|---|---|---|---|---|
| General Purpose | 2 vCores per size until the last 4 sizes, where progression is much steeper; maximum 80 vCores | 10GB of memory per size until the last 4 sizes, where progression is much steeper; caps at 415.2GB | 7.5MB/s, 15MB/s and 22.5MB/s for the first three sizes, then caps at 30MB/s for everything else | 640 per size until the last 4 sizes, where it progresses much faster; caps at 25600 IOPS |
| Business Critical | Same as General Purpose | Same as General Purpose | 24MB/s, 48MB/s, 72MB/s for the first three sizes, then caps at 96MB/s for everything else | 8000 per size until the last 4 sizes, where the progression becomes much faster; caps at 204800 |
Between Business Critical and General Purpose there’s also a considerable difference in latencies. General Purpose has 5-7ms for writes and 5-10ms for reads, while Business Critical provides 1-2ms latency for both reads and writes. The CPUs for Gen4 and Gen5 are the same horribly slow ones you get with Managed Instances.
It should also be noted that on top of these numbers there’s a lot of resource governance taking place, to quote Microsoft, to provide a balanced Database-as-a-Service experience. You can read more about the resource governance here; the illustration below, from that page, gives quite a good idea of what that really means.
Again, the memory-to-vCore ratio is something awful; however, there’s a good chance of this changing soon, as there are currently memory- and compute-optimized versions in preview for Azure SQL Database. With the memory-optimized one you get 29GB of memory per vCore, with a maximum of 128 vCores and nearly 4TB of memory. The M-series also has a slightly faster CPU model, running at 2.5GHz, which makes it quite an attractive option.
So how much does memory cost in Azure SQL Database? It costs a lot. Let’s say you’d like to have a database with 100GB of memory: in General Purpose that requires 20 vCores and would cost 3,600USD a month. The same in Business Critical would end up costing you 9,900USD a month.
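Here’s the back-of-the-envelope math behind those figures; the per-vCore monthly prices below are simply derived from the 100GB example above (3,600USD / 20 vCores and 9,900USD / 20 vCores), not taken from the price list, so treat them as illustrative:

```python
import math

# Gen5 memory ratio and rough per-vCore monthly prices (USD), derived from
# the 100GB example in the text, i.e. illustrative assumptions, not a price list.
MEMORY_PER_VCORE_GB = 5.1
PRICE_PER_VCORE_MONTH = {"GeneralPurpose": 180.0, "BusinessCritical": 495.0}

def monthly_cost_for_memory(tier: str, memory_gb: float):
    """vCores needed to reach a memory target, and the resulting monthly cost."""
    vcores = math.ceil(memory_gb / MEMORY_PER_VCORE_GB)
    return vcores, vcores * PRICE_PER_VCORE_MONTH[tier]

print(monthly_cost_for_memory("GeneralPurpose", 100))   # 20 vCores, 3600.0 USD
print(monthly_cost_for_memory("BusinessCritical", 100)) # 20 vCores, 9900.0 USD
```

In reality you can’t pick an arbitrary vCore count, only the discrete compute sizes mentioned earlier, so the actual bill would be for whichever size is the next step up from the computed vCore count.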
I did take a look at the M-series, which is only available for Business Critical and with 128 vCores, in the Azure pricing calculator. That really blew my mind.
How do the VMs compare against PaaS databases?
This is a good question, and in my opinion they compare just fine. Microsoft has put in quite an effort to make running VMs a reasonable option. Today you can get some rather nice features for any SQL Server VM, not just the ones created from Azure Marketplace images, that make management a bit easier.
With this I am, of course, referring to the SQL VM Resource Provider, which brings some, as I like to call them, PaaS-lite features to your VMs. Features available through the SQL VM Resource Provider are:
- Automated Patching
- Automated Backup
- Licensing and Edition Flexibility
- Automated AlwaysOn AG configuration
There is also one more interesting preview feature out there that really bridges the gap between PaaS and IaaS: Advanced Data Security for SQL Servers on Azure VMs. This brings the Vulnerability Assessment and Advanced Threat Protection capabilities that were previously only available for PaaS databases to SQL Server VMs. Pretty sweet, huh?
And then you also have the flexibility of plenty of VM families and sizes to pick from, allowing you to avoid the compromises you have to make with PaaS. Also, there are no practical limits on the number of databases you can host on an instance, and even storage goes up to 256TB.
Platform-as-a-Service databases are great; if I ever do any new development work, they would be the first option for me (or maybe I’d use something cloud native, like CosmosDB). However, if you’re taking existing workloads from on-premises to the cloud, then you really need to pay attention.
To me it feels like Microsoft has moved away from pushing the PaaS options as the silver bullet for all problems, and has really created an all-around strong portfolio of database services around SQL Server, including the VMs. This is an approach I really like, as it gives us so much more freedom to modernize our databases in Azure.
Thanks for reading!