CPU - SQL Server
WITH DB_CPU_Stats AS
(
    SELECT DatabaseID,
           DB_NAME(DatabaseID) AS [DatabaseName],
           SUM(total_worker_time) AS [CPU_Time_Ms]
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY (SELECT CONVERT(int, value) AS [DatabaseID]
                 FROM sys.dm_exec_plan_attributes(qs.plan_handle)
                 WHERE attribute = N'dbid') AS F_DB
    GROUP BY DatabaseID
)
SELECT ROW_NUMBER() OVER (ORDER BY [CPU_Time_Ms] DESC) AS [row_num],
       DatabaseName,
       [CPU_Time_Ms],
       CAST([CPU_Time_Ms] * 1.0 / SUM([CPU_Time_Ms]) OVER () * 100.0 AS DECIMAL(5, 2)) AS [CPUPercent]
FROM DB_CPU_Stats
WHERE DatabaseID > 4        -- exclude system databases
  AND DatabaseID <> 32767   -- exclude ResourceDB
ORDER BY row_num
OPTION (RECOMPILE);
After running this query with my customer, we saw about five queries whose total CPU was noticeably higher than the rest.
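To drill down from databases to individual statements, a query along the following lines can be used (the TOP 5 cutoff is an arbitrary choice for illustration; sys.dm_exec_query_stats, sys.dm_exec_sql_text, and sys.dm_exec_query_plan are the standard DMVs/DMFs for this):

```sql
-- Sketch: top CPU-consuming statements, with their text and a
-- clickable XML plan (opens as a graphical plan in SSMS).
SELECT TOP 5
       qs.total_worker_time / 1000 AS [CPU_Time_Ms],
       qs.execution_count,
       qs.total_worker_time / qs.execution_count / 1000 AS [Avg_CPU_Ms],
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS [Statement_Text],
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;
```

Note that these DMVs only reflect what is currently in the plan cache, so numbers reset when plans are evicted or the instance restarts.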
Reviewing Execution Plans:
Once we identified the highest CPU consumers, we started reviewing their execution plans by clicking the plan link in our results. There are a number of items to look for in any execution plan that could indicate where performance can be improved:
High cost operations
Index scans
Multiple executions of scans/seeks
Bookmark Lookups
Operations with very high rowcount
Sort operations
Implicit conversions
Estimated rows/executions that do not match actual rows/executions (which could indicate out-of-date statistics)
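Some of these red flags can be hunted in the plan cache directly rather than one plan at a time. As a hedged sketch, the query below searches cached plans for implicit conversions (CONVERT_IMPLICIT in the showplan XML); the TOP 10 cutoff is an assumption, and on a large cache this query can itself be expensive, so run it with care:

```sql
-- Sketch: find cached plans containing implicit conversions,
-- one of the red flags listed above.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP 10
       cp.usecounts,
       st.text AS [Query_Text],
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE qp.query_plan.exist('//ScalarOperator[contains(@ScalarString, "CONVERT_IMPLICIT")]') = 1
ORDER BY cp.usecounts DESC;
```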
In this case, we found the execution plans for each query didn't have obvious red flags, such as expensive scans. So we reviewed the index seeks with higher costs and checked which columns were used in the WHERE and JOIN clauses. Using sp_help against the tables, we looked at the existing indexes and found that they supported these queries relatively well.
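The index review described above can be sketched as follows; dbo.Orders is a hypothetical table name standing in for the customer's tables:

```sql
-- sp_help returns the table's columns, constraints, and indexes.
-- 'dbo.Orders' is a hypothetical placeholder table.
EXEC sp_help 'dbo.Orders';

-- A more targeted view of each index's key and included columns,
-- useful for matching indexes against WHERE/JOIN columns:
SELECT i.name              AS [Index_Name],
       i.type_desc         AS [Index_Type],
       c.name              AS [Column_Name],
       ic.key_ordinal,
       ic.is_included_column
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.Orders')
ORDER BY i.name, ic.key_ordinal;
```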