SharePoint Best Practices – SQL Server Configuration

This article discusses how to configure SQL Server for SharePoint and how to fine-tune SQL Server so that the two work well together. For optimal performance, you should validate certain configurations against community best practices.

For each best practice, we provide a link to the SPDockit SharePoint best practices library website, where you can find all the SharePoint best practices discussed here, along with explanations and instructions for adjusting the relevant values and configurations.

Tuning SQL Server for SharePoint

For consistent settings and best performance, configure SQL Server as follows:

Max degree of parallelism

Maximum degree of parallelism (MAXDOP) is a SQL Server instance-level setting that defines the number of processors used to execute a query in a parallel plan.

By default, max degree of parallelism is 0, which lets SQL Server use all available processors for parallel execution. For SharePoint, the only supported value on the SQL Server instance is 1, so that a single SQL Server process serves each request. Leaving parallel execution enabled causes SharePoint to underperform.

Refer to this SPDockit best practice article to learn how to disable parallel execution on the instance via SQL Server Management Studio. For additional reading, see Setting Server Configuration Options and the article about the max degree of parallelism option.
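As a sketch, the same change can also be applied with T-SQL rather than through Management Studio; run this on the SQL Server instance that hosts SharePoint:

```sql
-- 'max degree of parallelism' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;

-- Limit each query to a single processor, as SharePoint requires
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE WITH OVERRIDE;
```

The change takes effect immediately and does not require a service restart.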

Disk allocation size

This is a SQL Server-related setting. SharePoint Server uses SQL Server to store configuration and user data. Due to internal SQL Server operations and the performance of the underlying I/O subsystem, it is best to set the block size to 64 KB. Also, the Partition Offset / Block Size calculation should result in an integer value. This is usually the best practice for storage; however, be sure to check your storage documentation to verify whether this rule applies in your case.

Refer to the Disk Partition Alignment Best Practices for SQL Server article to learn more about its performance impact.


TempDB

These are the best practices for configuring the tempDB to perform as fast as possible.

TempDB Size – Make sure the database disk space is allocated properly. The size of tempDB should be at least 10 percent of the largest database. You will encounter slow response times if tempDB needs to allocate additional space, so the best practice is to set that size in advance rather than wait for autogrowth to happen.

Recovery Model TempDB – The recovery model for the tempDB should be set to SIMPLE. There is no need to back up the tempDB, since it is recreated automatically every time SQL Server restarts.

TempDB Files – SQL Server uses tempDB heavily, and the best practice is to keep its files on separate disks dedicated to storing tempDB files. For best performance, the number of tempDB data files should equal the number of CPU cores, and each file should be set to the same size. Note that the database data and transaction log files should also be kept separate.

TempDB Response Time – Check whether tempDB's response time exceeds 20 ms. Higher read or write latency directly translates into performance issues, so if the response time is higher for either reads or writes, you have a problem to troubleshoot. The cause might be the disk subsystem, the network, etc.
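The latency check and the pre-sizing described above can both be sketched in T-SQL. This is illustrative only: the `tempdev2` file name, the `T:\TempDB\` path, and the sizes are placeholders to adapt to your core count and largest database.

```sql
-- Average read/write latency (ms) per tempDB file, from cumulative I/O stats;
-- values above ~20 ms warrant troubleshooting
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS file_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;

-- Pre-size the primary data file and add a second, equally sized file
-- (repeat ADD FILE until the file count matches the CPU core count)
ALTER DATABASE tempdb
MODIFY FILE (NAME = 'tempdev', SIZE = 4096MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = 'tempdev2',
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);
```

Because latency figures are cumulative since the last restart, sample them twice and compare if you need a current picture.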

Refer to the TempDB collection of articles offering additional information for each of the aforementioned best practices.



Model Database

The model database is a SQL Server system database that serves as a template when creating new databases. The issue with the modelDB is that it needs to be configured properly; otherwise, every newly created database inherits its faulty settings. When this happens, and it tends to happen more often than it should, you need to fix each database's settings manually on the SQL Server itself, which quickly becomes a problem.

These are things to look out for when it comes to modelDB.

ModelDB Files Autogrowth – The autogrowth for the modelDB should be in megabytes, not percentages. It should also be set to a value higher than the default (the default value is 1MB) in accordance with your environment needs.

ModelDB Files Initial Size – The initial size should be higher than the default value. The default initial size of the model database data and log file is 8 MB; however, you should set the value in accordance with your environment needs, expected amount of data, available disk space, expected number of content databases, etc. For example, if your SharePoint farm will host a small amount of data, you could adjust the initial size to 500 MB or 1 GB.

ModelDB Recovery Model – The recovery model for the model database should be set to FULL.
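A minimal T-SQL sketch of the three adjustments above might look as follows. `modeldev` and `modellog` are the default logical file names of the model database; the sizes and growth increments are illustrative values to tune for your environment.

```sql
-- Raise the initial size and switch autogrowth from the 1 MB default
-- to fixed megabyte increments
ALTER DATABASE model
MODIFY FILE (NAME = 'modeldev', SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE model
MODIFY FILE (NAME = 'modellog', SIZE = 512MB, FILEGROWTH = 128MB);

-- Use the FULL recovery model so that new databases inherit it
ALTER DATABASE model SET RECOVERY FULL;
```

Databases created after this point inherit these settings; existing databases are unaffected and must be adjusted individually.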

Refer to the ModelDB collection of articles explaining how to adjust certain settings in the SQL Server Management Studio. For additional information regarding the model database itself and other recovery models, read the Recovery Model Overview article by Microsoft.

Additional SharePoint best practices related to SQL Server

Content Database Autogrowth and database capacity – The autogrowth property should be set in megabytes, and best practices call for a value higher than the default. You can start with 500 MB and then observe how it affects your environment. Important factors to consider when scaling the environment and setting limits are your defined Service Level Agreements (SLAs), especially the Recovery Point Objective (RPO) and Recovery Time Objective (RTO). What you should aim for is the best system performance; however, keep in mind that you shouldn't allow content databases to grow beyond 200 GB. Content databases as large as 4 TB require a disk subsystem performance of 0.25 IOPS per GB. Content databases with no explicit size limit are possible, but you have to keep them under strict requirements.
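For example, assuming a content database and logical file both named WSS_Content (hypothetical names; check yours with sp_helpfile), the fixed 500 MB growth increment could be set like this:

```sql
-- Replace the default autogrowth with a fixed 500 MB increment
ALTER DATABASE [WSS_Content]
MODIFY FILE (NAME = 'WSS_Content', FILEGROWTH = 500MB);
```

Repeat the statement for the log file so both files grow in fixed increments.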

Refer to this Microsoft article on Storage and SQL Server capacity planning and configuration (SharePoint Server 2016) for more details.

Database Files – The basic items to differentiate are the data files, the so-called .mdf files, and the log files, the ones with the .ldf extension. By default, both file types are stored on the same drive, which is not a best practice. You should separate the data and log files by placing them on different physical drives. To change the location of existing data and log files, run the ALTER DATABASE command with the MODIFY FILE option.
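The ALTER DATABASE ... MODIFY FILE sequence mentioned above looks roughly like this; the database name, logical file name, and target path are placeholders for your own values.

```sql
-- Register the new location for the log file (takes effect when the
-- database next comes online)
ALTER DATABASE [WSS_Content]
MODIFY FILE (NAME = 'WSS_Content_log',
             FILENAME = 'L:\SQLLogs\WSS_Content_log.ldf');

-- Take the database offline, then physically move the .ldf file
-- to L:\SQLLogs\ before bringing it back
ALTER DATABASE [WSS_Content] SET OFFLINE;

-- (copy/move the file on disk here)

ALTER DATABASE [WSS_Content] SET ONLINE;
```

If you skip the physical file move between the OFFLINE and ONLINE steps, the database will fail to start, so plan the maintenance window accordingly.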

SQL Server Memory – There are several memory-related server configuration options, but the two most important are min server memory and max server memory. The default setting for min server memory is 0, and the default setting for max server memory is 2147483647 MB; with these defaults, SQL Server changes its memory consumption dynamically based on available system resources. It is good practice to set both values explicitly, based on the size and activity of your SQL Server. These options establish the lower and upper limits of the memory the Database Engine acquires as its workload increases, which means the server won't take more memory than the max server memory value you specify. Leave at least 1 GB of RAM for the operating system, and if SQL Server Integration Services is running, plan for more than 1 GB accordingly.
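As an illustration, both options can be set with sp_configure. The values below are placeholders sized for a hypothetical machine with 16 GB of RAM, leaving roughly 4 GB for the operating system and other services:

```sql
-- Memory options are advanced options, so expose them first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Lower and upper bounds for the Database Engine's memory, in MB
EXEC sp_configure 'min server memory (MB)', 2048;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```

Both settings take effect after RECONFIGURE without a restart, though SQL Server releases memory down to a new, lower maximum only gradually.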

Refer to the Optimizing Server Performance Using Memory Configuration Options article to learn more about tweaking SQL Server memory options.

SQL Server Alias – Using an alias has become a SharePoint best practice since it is the easiest, and for now the only, way to switch the database instance later. Be sure to configure the SQL Server alias before deploying SharePoint: create a configurable SQL Server name and you are good to go. Without an alias, there is no way to point the farm to another server, which becomes a serious problem if you need to change the SQL Server instance, if it goes down, or if you want to upgrade SQL Server from an older version.
