For the last blog post of the year 2016 I chose something that has been bothering me a bit as of late. Over the past couple of months I have come across a number of cases where, after migrating databases to a new server, the end users report increasingly bad performance. What has been common to these situations is that the new server has more and faster CPU cores, more memory, and faster disks, so it should offer better performance, not worse.
Now the first thing I usually do when troubleshooting performance issues is to check what kind of hardware we have and how the SQL Server has been configured. This is a really simple step to start with: run msinfo32 to get the hardware details and then query sys.configurations to see how the SQL Server is configured. In these cases, all of the SQL Server configurations had been left at their default settings. The result: systems running at dangerously low levels of available memory, which leads to extensive paging of memory to disk, and some really wild-looking query parallelization issues.
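To see the current instance settings, a quick query against sys.configurations is all it takes:

```sql
-- List all instance-level settings; value is the configured value and
-- value_in_use is the currently active one (they differ until RECONFIGURE
-- has been run or, for some settings, until a restart).
SELECT name, value, value_in_use, description
FROM sys.configurations
ORDER BY name;
```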
The minimum configuration
Obviously there is no single configuration that would work for every system out there. There are however some configurations that should always be done, no matter what sort of workloads you are running or how small/large your databases are. The following is what I would consider to be the minimum configuration for any SQL Server instance out there. As a quick note, with SQL Server 2016 some of these settings are configured automatically for you during the setup (thank you Microsoft!). However with older versions, you still need to do these manually.
Windows configuration
There are two configurations in Windows that can make a huge difference in performance if configured correctly:
- Windows Power Options, which default to the Balanced power plan. Check my post from 2013 for more information on why this can hurt your performance. It is also very easy to fix, so just do it.
- The Perform Volume Maintenance Tasks policy. This right should always be granted to the SQL Server service account. It allows data files to be initialized instantly, speeding up many file operations. Read more about this in the MSDN article here.
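For the power plan, one quick way to switch to High performance from an elevated command prompt is powercfg; the GUID below is the built-in High performance scheme:

```
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
powercfg /getactivescheme
```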
SQL Server configuration
- Cost threshold for parallelism. This is the estimated cost above which, on servers that have multiple CPUs, queries become eligible for parallel execution. The default is 5, which tends to be quite a low value. I would set this to a minimum of 40.
- Max degree of parallelism. This is used for configuring the number of CPUs that can be used to run a parallel query. The default setting is 0, which means that queries running in parallel can use all the CPUs you have available. On servers that have a high number of CPUs, this usually has a negative effect on performance. A good rule of thumb for setting this value is: the number of physical cores in a single socket, but no more than 8.
- Max server memory. This option is used to set the maximum amount of memory your SQL Server can use. Starting with SQL Server 2012 this limit also includes multi-page and CLR allocations. A good maximum value is whatever leaves your OS enough memory to run all the other processes you have there. Usually 4GB is quite enough for most systems, but if you are running things that require a lot of memory (like Reporting Services, which you should never run on the database server!) you might need to leave more.
- Min server memory. This option is often overlooked, but important. It’s the amount of memory SQL Server will try to keep available under any circumstance. Determining a good value for this is a bit more complicated; I usually go with what I think is the absolute minimum memory required by the instance.
- Remote admin connections. This option enables remote use of the Dedicated Admin Connection (DAC). If for some reason your server becomes unable to accept regular client connections (for example after running out of system resources), it is very likely that you can still connect through the DAC, as it uses a separate resource pool. Also, if you lose all your administrator passwords, the only way to reset them is through the DAC.
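All five of these can be set with sp_configure. The values below are just an example sketch for a hypothetical server with 32 GB of memory and one 8-core socket; adjust them to your own hardware:

```sql
EXEC sp_configure 'show advanced options', 1;  -- needed to see the advanced settings
RECONFIGURE;

EXEC sp_configure 'cost threshold for parallelism', 40;
EXEC sp_configure 'max degree of parallelism', 8;    -- physical cores per socket, max 8
EXEC sp_configure 'max server memory (MB)', 28672;   -- leave ~4 GB of the 32 GB for the OS
EXEC sp_configure 'min server memory (MB)', 8192;    -- assumed minimum for this instance
EXEC sp_configure 'remote admin connections', 1;     -- allow DAC from remote machines
RECONFIGURE;
```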
Other SQL Server configurations
We are almost done with the minimum configuration; there are, however, a few more things that should be done. First, there is a small number of trace flags I would recommend enabling on all instances.
- 1117. This trace flag tells SQL Server to grow all the files in a filegroup equally. This is quite important, especially when you configure multiple TEMPDB data files. It is not required for SQL Server 2016, where the same behavior is on by default as a database setting.
- 1118. This trace flag is used for allocating full extents only, instead of mixed extents. Again, this is especially good for your TEMPDB. And as with the previous trace flag, it’s enabled by default in SQL Server 2016 as a database setting.
- 3226. This trace flag suppresses the successful backup messages in the SQL Server error log. Having only actual errors show up in the error log tends to make troubleshooting quite a bit easier.
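The permanent way to enable these is to add -T1117;-T1118;-T3226 as startup parameters in SQL Server Configuration Manager, but they can also be turned on immediately (until the next restart) with DBCC TRACEON:

```sql
-- The -1 argument enables the trace flags globally, not just for this session.
DBCC TRACEON (1117, 1118, 3226, -1);
DBCC TRACESTATUS (-1);  -- verify which global trace flags are active
```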
After setting up the trace flags through the SQL Server Configuration Manager, you need to restart the instance. There is also one more thing to do to complete our minimum SQL Server configuration: setting up TEMPDB. With SQL Server 2016 this can be done during the setup in the Database Engine Configuration page (if you do the setup manually). It will even suggest a proper number of data files for you, so all you have to do is set a proper size for them.
- TEMPDB configuration. By default SQL Server creates one data file and one log file, and they are both very, very tiny. My recommendation: add as many data files as you have cores in a single processor, up to a total of 8. Set the size and autogrowth to be equal for all of the data files. As for the initial size of the whole TEMPDB, I would take the size of the largest table and add 50% on top of that to determine a starting size.
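As a sketch, for a machine with four cores per socket this could look like the following; the file path and sizes are assumptions, so adjust them to your environment:

```sql
-- Resize the existing data file and add three more, all equal in size and growth.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf', SIZE = 2048MB, FILEGROWTH = 512MB);
-- Repeat for tempdev3 and tempdev4.
```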
And there you have it, my recommendation for the minimum SQL Server configuration. If you are not sure how to configure your database servers, do at least these and you are off to a good start.