Database Performance Monitoring and Analysis
Compare a statspack report (above left) to an Average Active Sessions graph with a breakdown by top SQL, Event and Session (in DB Optimizer). The statspack report, viewed at this size, is impossible to read. Sure, if I had the report itself, I could say some things pretty fast because I've seen 100s if not 1000s of statspack-type reports (remember utlbstat/estat? Plus I had my own statspack before statspack - it's out there on the internet somewhere, I'm sure), but imagine explaining this to a new user. Now look at the graph on the right. I can say what's happening faster, and understand what's happening in a way that is not even possible with statspack, plus I can explain it in minutes to a brand new user.

I can see 4 spike areas (big blue, no pun intended, on the left, then a dark brown/red in the middle, then a green one, then a red one on the far right) - the big blue is an I/O bottleneck and the big green is a CPU bottleneck. The other two smaller areas are some other waits. I can see that all the I/O came from basically one query, because I can see in the TOP SQL list, to the bottom left of the graph, that the first query in the list has the biggest blue bar of all the queries. In the next table to the right of TOP SQL is TOP EVENTS. In TOP EVENTS I can see that most of the time is spent on CPU, though a good bit is spent on I/O (as seen in the middle bottom - green CPU is the biggest event, followed by blue I/O). All the other waits are a relatively minor percentage (except at the spikes). I can see the load is spread evenly over the users (bottom right). Also super handy is the Max CPU line. The Max CPU line is the number of CPUs on the machine. Whenever load goes above this line there is a bottleneck.
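The Max CPU rule above can be sketched as a toy calculation. Here is a minimal Python illustration (the sample intervals and session counts are made-up numbers, and this is my own simplification, not DB Optimizer's actual implementation): average the sampled active-session counts per interval and flag any interval where Average Active Sessions rises above the number of CPUs.

```python
# Sketch: flag load spikes where Average Active Sessions (AAS)
# exceeds the Max CPU line. All sample data below is hypothetical.

CPU_COUNT = 4  # the "Max CPU" line: number of CPUs on the machine

# Each entry: sampled counts of active sessions within one interval
intervals = {
    "12:00": [2, 3, 2, 1],
    "12:01": [6, 7, 5, 6],   # spike: load above the Max CPU line
    "12:02": [3, 4, 4, 3],
}

def average_active_sessions(samples):
    """AAS for one interval = mean of the sampled active-session counts."""
    return sum(samples) / len(samples)

for ts, samples in intervals.items():
    aas = average_active_sessions(samples)
    bottleneck = aas > CPU_COUNT   # load above Max CPU -> bottleneck
    print(f"{ts}: AAS={aas:.2f} bottleneck={bottleneck}")
```

The point of the sketch is only the comparison in the last lines: the graph makes a bottleneck visible the moment the load line crosses the Max CPU line.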
For more info on the history and ideas of Average Active Sessions and the load chart, see:
SQL Tuning and Execution Paths
What I'm working on right now is making explain plans easier. Compare the textual explain plan to the graphic one. The graphic is so much easier. At this font size the text version is impossible to read. Even if it was big enough, the text would still take longer and more brain CPU cycles to understand. The graphic on the right tells me the most important information for optimally executing the query. The query will be optimally executed by starting on ORDERS, since it is the only table with a filter on it, represented by both the little green F (we will increase its size in the next version of DB Optimizer to make it more obvious) and the highlighted yellow "0.72". The value "0.72" is the filter ratio, or the fraction of rows returned after applying the query filters on this table. The "0.72" is the only filter ratio in the diagram; if there were other filter ratios, we would pick the table with the most selective one as the place to start our query, but in this case there is only one filter. From that table we always join down into master tables (master tables have the crossed end of the join line - the crow's feet represent the detail table). Joining into a master table keeps the "running row set" the same size. The "running row set" is the internal number of rows carried from one join to the next as we join into each table in the execution plan. Imagine having to carry boxes to 3 people in the office, where depending on the order you visited the 3 people you either got to drop off boxes or you had to pick up boxes. The easiest, most effortless path would be the one where you got rid of the most boxes first and picked up the fewest extra boxes along the way. The number of boxes you have to carry is equivalent to the running row set.
For example, in the execution paths being compared above, the one on the left does a join between ORDER_ITEMS and CUSTOMERS, but there is no join condition between them, so instead of getting rid of boxes we multiply them - we do a Cartesian join. Thus this plan is not worth pursuing. The second plan is not worth pursuing because it joins from a master into a detail, again increasing the running row count prematurely - not as bad as a Cartesian join, but still worse than joining down into the master.
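The box-carrying analogy can be made concrete with a toy calculation. Here is a sketch in Python (the table sizes, filter ratios, and detail fan-out factors are hypothetical illustration values, not numbers taken from the diagrams): track the running row set as each join is applied, so different join orders can be compared.

```python
# Sketch: compare "running row set" sizes for different join orders.
# All table sizes, filter ratios, and fan-out factors are hypothetical.

TABLES = {
    "ORDERS":      {"rows": 1000, "filter_ratio": 0.72},  # the only filtered table
    "ORDER_ITEMS": {"rows": 5000, "filter_ratio": 1.0},
    "CUSTOMERS":   {"rows": 200,  "filter_ratio": 1.0},
}

def start_rows(table):
    """Rows left after applying the table's own filters (rows * filter ratio)."""
    t = TABLES[table]
    return t["rows"] * t["filter_ratio"]

def join(running_rows, kind, fanout=1.0, other=None):
    """Apply one join to the running row set.
    kind: 'master' keeps the row count (at most one matching master row),
    'detail' multiplies by the average fan-out into the detail table,
    'cartesian' multiplies by the other table's size (no join condition)."""
    if kind == "master":
        return running_rows
    if kind == "detail":
        return running_rows * fanout
    if kind == "cartesian":
        return running_rows * start_rows(other)
    raise ValueError(kind)

# Good plan: start on the filtered table, join down into a master.
rows = start_rows("ORDERS")      # 1000 * 0.72 = 720 rows to "carry"
rows = join(rows, "master")      # ORDERS -> CUSTOMERS: still 720

# Bad plan: Cartesian join between ORDER_ITEMS and CUSTOMERS.
bad = start_rows("ORDER_ITEMS")                  # 5000 rows
bad = join(bad, "cartesian", other="CUSTOMERS")  # 5000 * 200 = 1,000,000 rows
```

With these made-up numbers, the good order never carries more than 720 rows, while the Cartesian order balloons to a million - the same conclusion the diagram shows at a glance.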
We can extend the graphic with other relevant info, such as "io" for index-only access, "i" for index access, and no value for a full table scan.
The red highlighted numbers are the two-table join sizes including filters, i.e. the running row count over the indicated two-table join.
The green highlighted number is the table size:
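One way to think about how the red numbers relate to the green ones is the standard row-count estimate: table size times filter ratio for each table, times the join selectivity. This Python one-liner is my own illustration of that formula (the values and the selectivity are hypothetical, not necessarily how DB Optimizer computes its numbers):

```python
# Sketch: estimate a two-table join row count (a "red number") from the
# table sizes (the "green numbers") and filter ratios.
# The formula and all values here are hypothetical illustrations.

def two_table_join_rows(rows_a, filter_a, rows_b, filter_b, join_selectivity):
    """Estimated running row count over a two-table join, filters included."""
    return rows_a * filter_a * rows_b * filter_b * join_selectivity

# e.g. ORDERS (1000 rows, 0.72 filter ratio) joined to its master
# CUSTOMERS (200 rows, no filter) on a key with selectivity 1/200,
# so each ORDERS row finds exactly one customer:
est = two_table_join_rows(1000, 0.72, 200, 1.0, 1 / 200)
print(est)  # 720.0 - joining into the master keeps the row count
```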
Not only does the diagram give me the information I want to see in order to determine the optimal path, but I can also compare this path with any other path side by side and clearly and quickly see the differences.
For more info on graphic SQL see:
Graphics by themselves, without a proper approach, lack much value.
For example, here is a graphical representation of the execution paths for the same query shown above, but the graphics add little value to the problem of understanding the execution plans or the differences between the two plans.
For the example of database performance monitoring, here are a couple of examples that show a lot of information about database operation but fail to clearly answer the main questions about the database - how is it performing? What is the load? Where is the load coming from? And are there any bottlenecks: