Let’s continue with the storage performance topic a bit further. One thing that can be confusing is storage performance with SQL Server Managed Instances. The confusion comes from having two separate performance tiers, General Purpose and Business Critical, each with a different method of managing IO performance.
In General Purpose, the way to get better performance is to use bigger files. In Business Critical, better performance comes at a cost: adding more vCores to your Managed Instance. There are also a couple of other details to keep in mind when figuring out what exactly you need for your workloads.
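To make the “bigger files means better performance” idea concrete, here’s a minimal sketch of how a General Purpose data file’s size might map to the performance of the underlying premium storage. The tier names and IOPS/throughput figures are assumptions based on publicly documented Azure premium disk sizes, not an official Managed Instance API:

```python
# Illustrative mapping from a General Purpose data file's size to an
# assumed premium storage performance tier. The figures below are
# assumptions drawn from public Azure premium disk specs.
def gp_file_performance(file_size_gb: float) -> dict:
    # (max file size in GB, tier name, IOPS, throughput in MB/s)
    tiers = [
        (128,  "P10", 500,  100),
        (512,  "P20", 2300, 150),
        (1024, "P30", 5000, 200),
        (2048, "P40", 7500, 250),
        (4096, "P50", 7500, 250),
    ]
    for max_gb, name, iops, mb_s in tiers:
        if file_size_gb <= max_gb:
            return {"tier": name, "iops": iops, "throughput_mb_s": mb_s}
    raise ValueError("file size exceeds this illustrative tier table")

# A 100 GB file sits in the smallest tier; growing it to 1 TB buys
# roughly 10x the IOPS without adding a single vCore.
print(gp_file_performance(100))
print(gp_file_performance(1000))
```

The point of the sketch is the shape of the mechanism: in General Purpose you size your way to performance, so pre-growing a file can be a legitimate tuning step.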
Tiers and Performance
Besides the different mechanisms for managing performance, the General Purpose and Business Critical tiers for Managed Instances use two different storage solutions, each with its own pros and cons. General Purpose is exactly what it claims to be: a good all-around solution for many workloads. The storage used by the General Purpose tier is Azure blob storage, with a typical latency of 5 to 10 milliseconds. The Business Critical tier, on the other hand, uses locally attached SSD storage with a typical latency of 1 to 2 milliseconds. That’s quite a huge difference already when it comes to storage performance.
Another difference between the General Purpose and Business Critical tiers comes from resource governance. General Purpose has a maximum throughput for data files (capped at 250 MB/s per file), whereas Business Critical doesn’t have the same limit. Both tiers do have limits on log throughput: General Purpose is capped at 65 MB/s per database and 120 MB/s per instance, while Business Critical is capped at 96 MB/s.
Again, in the case of General Purpose you control log throughput by adjusting the file size. In the Business Critical tier, the controlling mechanism for log throughput is the number of, you probably guessed it already, vCores. The log throughput limit for the Business Critical tier also feels quite strict: 4 MB/s per vCore. This can be a nasty, and expensive, surprise if you’re churning a lot of data.
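A quick back-of-the-envelope calculation shows why the per-vCore log cap can surprise you. This sketch only uses the figures from the text above (4 MB/s per vCore up to 96 MB/s for Business Critical, 65 MB/s per database under a 120 MB/s instance cap for General Purpose):

```python
# Rough log throughput limits, using the caps described in the text.
def bc_log_throughput_mb_s(vcores: int) -> int:
    # Business Critical: 4 MB/s per vCore, capped at 96 MB/s.
    return min(4 * vcores, 96)

def gp_log_throughput_mb_s(databases_writing: int) -> float:
    # General Purpose: 65 MB/s per database, but the 120 MB/s
    # instance cap is shared across concurrently writing databases.
    return min(65 * databases_writing, 120) / databases_writing

def hours_to_write_log(log_gb: float, throughput_mb_s: float) -> float:
    return (log_gb * 1024) / throughput_mb_s / 3600

# An 8 vCore Business Critical instance gets only 32 MB/s of log
# throughput, so churning through 500 GB of log takes over 4 hours
# at the absolute best case.
print(bc_log_throughput_mb_s(8))                     # 32
print(bc_log_throughput_mb_s(32))                    # hits the 96 MB/s cap
print(round(hours_to_write_log(500, 32), 1))
```

If a bulk load is log-bound on Business Critical, the only knob is more vCores, which is exactly the expensive surprise mentioned above.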
A quick summary of these limitations:

| Tier | Data file throughput | Log throughput |
| --- | --- | --- |
| General Purpose | Capped at 250 MB/s per file | 65 MB/s per database, 120 MB/s per instance |
| Business Critical | Not capped | 4 MB/s per vCore, up to 96 MB/s |
To wrap it up
While these limitations might seem strange at first, it’s good to think about why they’re there in the first place. Microsoft, or any other cloud vendor for that matter, doesn’t limit your resource consumption out of spite but for quite the opposite reason: to make sure that the PaaS database services they offer provide an expected level of performance and reliability for everyone. Compared to on-premises, where we’re free to punish our storage system as much as we want without worrying about the few million other databases running on the same platform, in the cloud we need to be good neighbors.
It should also be noted that things improve over time. Less than a year ago, Microsoft doubled the log throughput for the Business Critical tier, going from 48 MB/s to 96 MB/s. This won’t be the last time these numbers change, either. I’d expect that as adoption of Managed Instances grows and Microsoft learns more about how the product behaves through the telemetry they collect, we’ll see further improvements and adjustments to the resource limits.