SQL performance recommendations


When the amount of data in a table exceeds the available RAM, consider the following:

Large amounts of data in a single table can also slow maintenance, because many SQL indexes may be needed to maintain search and sort performance. Inserts, updates, and deletes all become slower when a table has many indexes. You must balance the number of indexes against database maintenance performance, and apply indexes judiciously.
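One way to apply indexes judiciously is to review how existing indexes are actually used. As a sketch, the following query against the sys.dm_db_index_usage_stats DMV compares reads (seeks, scans, and lookups) to writes for each index in the current database; an index with many writes but few reads is a candidate for removal. Note that these counters reset when the instance restarts, so interpret them over a representative workload period.

SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates AS writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY writes DESC;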

Disks

SQL data should live on the fastest available storage. Place the tempdb data and log files on SSDs, and provision at least 400 MB/s of concurrent throughput on each of the Data, Log, and full-text (FT) catalog volumes. This means the server should be able to sustain a cumulative write throughput of 1,600 MB/s across the tempdb, Data, Log, and FT catalog volumes. High sequential write speed (cache optimized) may be desirable for backups, but it generally doesn't impact Relativity performance.

For comparison, in its Fast Track Data Warehouse reference architecture, Microsoft recommends 200–400 MB/s of throughput per processor core for large data sets stored in data warehouses.
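To check whether a volume is meeting these expectations, you can sample per-file I/O statistics from SQL Server itself. The following sketch uses the sys.dm_io_virtual_file_stats DMV; the counters are cumulative since instance start, so compare two samples taken some minutes apart to derive throughput and stall rates.

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_bytes_read,
       vfs.num_of_bytes_written,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;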

Optimize for ad hoc workloads option

The optimize for ad hoc workloads option improves the efficiency of the plan cache for workloads that contain many single-use ad hoc batches.

We recommend enabling the optimize for ad hoc workloads option on each SQL instance. Execute the following statement to enable this option:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE
GO

EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE
GO
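To confirm the change took effect, you can query sys.configurations; after the RECONFIGURE above, value_in_use should be 1.

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'optimize for ad hoc workloads';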

Max degree of parallelism option

We recommend the max degree of parallelism (MaxDOP) settings described in the following guides:

https://help.relativity.com/9.3/Content/System_Guides/Environment_Optimization_Guide/Environment_optimization_guide.htm

https://help.relativity.com/9.3/Content/System_Guides/Environment_Optimization_Guide/Configuring_SQL_Server.htm#Optimize
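The linked guides describe the recommended MaxDOP value for each environment. As an illustration only (the value 4 below is a placeholder, not a recommendation), MaxDOP is an advanced option and is set the same way as optimize for ad hoc workloads:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE
GO

-- Placeholder value; use the value recommended for your environment
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE
GO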