Comparing OBIEE Usage Tracking with NQSQUERY.LOG

Post date: 14-Apr-2010 06:27:20

Usage Tracking isn’t something that’s installed by default with OBIEE; you have to run a script to create a tracking table called S_NQ_ACCT in your database, then register the table in the repository and in the NQSConfig.INI file. You can log statistics to a file instead, but direct insertion into a table is the recommended approach, as it allows better analysis and lets you batch inserts up to reduce the load on the server. Something that I think is new in 10.1.3.4 is a subdirectory under $ORACLEBI/server/Sample called “usagetracking”; this contains a number of extra table creation scripts, a repository RPD file that you can import, and some web catalog files that give you instant reports over your usage tracking data. I believe this content first shipped with the BI Applications, and it now ships with OBIEE from this release onwards.

I don’t know if it was just me, but it actually took me quite a lot of time to get Usage Tracking up and running. First off, there are some extra tables to create, including a view (S_NQ_LOGIN_GROUP) that is required by the various reports but only seems to ship with a SQL Server script. In fact, the various repository files and scripts look like they were originally developed for SQL Server, the database I often see some of the original nQuire product managers demoing OBIEE with; I guess historically this is the database many of the demos were developed on, and that’s reflected in the Usage Tracking samples that ship with 10.1.3.4. Once you’ve got the tables created, you need to add a couple of entries to the NQSConfig.INI file to turn on usage tracking, tell the BI Server to write the log entries to the database, and specify which database table to insert into.

The key entries are:

[ USAGE_TRACKING ]
ENABLE = YES;
DIRECT_INSERT = YES;
PHYSICAL_TABLE_NAME = <name of physical S_NQ_ACCT table, as located in the physical model>
CONNECTION_POOL = <name of connection pool for log writes, should be separate to other connection pools>
BUFFER_SIZE = 10 MB;
BUFFER_TIME_LIMIT_SECONDS = 5;

The repository also took a while to import properly; it makes use of old-style time series functionality, and I had to recreate all the derived measures (change since last year, percentage change since last year and so on) using the new “Time Dimension”-based time series functionality that comes with 10.1.3.4. In the end though I managed to get it all up and running, with a new physical model, business model and subject area within my repository, so that I could start recording and analyzing usage tracking information.
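For reference, a derived measure such as “change since last year” ends up using the new AGO function in the Expression Builder; the business model, logical table and time dimension level names below are placeholders from my own repository rather than anything that ships in the sample RPD, so treat this as a sketch:

"Total Time Sec" - Ago("Usage Tracking"."Fact - Usage Tracking"."Total Time Sec", "Usage Tracking"."Time Dim"."Year", 1)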

Once you get the repository up and running, and the web catalog imported, you can take a look at the pre-defined reports. Here’s one that lists out the top 10 longest running queries.

You can also of course create your own reports, and add them to a Usage Tracking dashboard, which you can use to monitor the health of your system, highlight problem reports, see if response time is deteriorating over time and so on.
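You don’t have to go through Answers for this sort of thing, either; a minimal SQL sketch against the tracking table itself (table and column names as per the S_NQ_ACCT script later in this post) that trends response times by day would be:

SELECT START_DT,
       COUNT(*)            AS QUERIES_RUN,
       AVG(TOTAL_TIME_SEC) AS AVG_RESPONSE_SEC,
       MAX(TOTAL_TIME_SEC) AS MAX_RESPONSE_SEC
FROM   S_NQ_ACCT
GROUP BY START_DT
ORDER BY START_DT;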

So how does this compare to the log file analysis that I blogged about the other day? Well, putting my Oracle database hat on, analyzing queries via the log file is more like performing an SQL trace, whereas the usage tracking feature is more like using Statspack, or an automated session logging tool such as AWR or ASH. The NQSQUERY.LOG file gives you more information than usage tracking, including the actual physical SQL or MDX sent to the underlying database, details of how federated queries are broken down into individual database calls, plus the logical execution plan for a query together with individual row counts at the various stages of the query. Like SQL tracing, though, you need to have turned logging (or tracing) on before you can analyze a query, and as Oracle’s recommendation is to turn logging on only for diagnostic purposes, it’s likely that it won’t be on when a user calls you up and tells you that the system is slow. Usage Tracking, in contrast, like Statspack or AWR/ASH, is likely to be on at all times, and whilst it won’t tell you the details of how a query executed, you can use it to generate general trend-type reports in order to spot when a problem is developing, based on deteriorating performance. The query log appeals to me as I like to look at the internals, but for most people the extra information available beyond logging level 3 (the logical execution plan and the intermediate row counts) won’t be relevant; they’re mostly interested in whether query performance has spiked recently.
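Incidentally, if you do need full query-log detail for just one request, you can raise the log level for that request alone by setting the LOGLEVEL request variable from the Advanced tab in Answers, rather than changing the user’s default logging level. The subject area and column names in this logical SQL are placeholders, so adjust to suit:

SET VARIABLE LOGLEVEL = 7;
SELECT "Measures"."Total Time Sec" FROM "Usage Tracking"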

So, from a diagnostics and problem-highlighting point of view, my recommendation would be to have Usage Tracking turned on and logging query performance as a background task, potentially using an iBot to send an alert if key queries start to come back with slower-than-average response times. Then, if you’re looking to investigate a particular query, or perhaps if you’re interested in how the BI Server handles a particularly complex federated or other query, the query log is a useful equivalent of the Oracle database SQL trace file and gives you the ability to see how the BI Server goes about satisfying a user query.
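The conditional request behind such an iBot could be based on something as simple as the following; this is a hypothetical sketch in the SQL Server dialect used by the sample scripts, returning a row (and so firing the alert) only when today’s average response time is more than 50% above the long-run average:

SELECT AVG(TOTAL_TIME_SEC) AS TODAYS_AVG_SEC
FROM   S_NQ_ACCT
WHERE  START_DT >= DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)
HAVING AVG(TOTAL_TIME_SEC) > 1.5 * (SELECT AVG(TOTAL_TIME_SEC) FROM S_NQ_ACCT);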

1. First we run the script:

C:\OracleBI\server\Schema\SAACCT.MSSQL.sql

DROP TABLE S_NQ_ACCT;

create table S_NQ_ACCT (
  USER_NAME          varchar(128),
  REPOSITORY_NAME    varchar(128),
  SUBJECT_AREA_NAME  varchar(128),
  NODE_ID            varchar(15),
  START_TS           datetime,
  START_DT           datetime,
  START_HOUR_MIN     char(5),
  END_TS             datetime,
  END_DT             datetime,
  END_HOUR_MIN       char(5),
  QUERY_TEXT         varchar(1024),
  SUCCESS_FLG        NUMERIC(10,0),
  ROW_COUNT          NUMERIC(10,0),
  TOTAL_TIME_SEC     NUMERIC(10,0),
  COMPILE_TIME_SEC   NUMERIC(10,0),
  NUM_DB_QUERY       NUMERIC(10,0),
  CUM_DB_TIME_SEC    NUMERIC(10,0),
  CUM_NUM_DB_ROW     NUMERIC(10,0),
  CACHE_IND_FLG      char(1) default 'N' not null,
  QUERY_SRC_CD       varchar(30) default '' not null,
  SAW_SRC_PATH       varchar(250) default '' not null,
  SAW_DASHBOARD      varchar(150) default '' not null,
  SAW_DASHBOARD_PG   varchar(150) default '' not null,
  PRESENTATION_NAME  varchar(128) default '' not null,
  ERROR_TEXT         varchar(250) default '' not null,
  RUNAS_USER_NAME    varchar(128) default '' not null,
  NUM_CACHE_INSERTED NUMERIC(10,0) default null,
  NUM_CACHE_HITS     NUMERIC(10,0) default null
);

CREATE CLUSTERED INDEX S_NQ_ACCT_M1 ON S_NQ_ACCT (START_DT, START_HOUR_MIN, USER_NAME);

CREATE INDEX S_NQ_ACCT_M2 ON S_NQ_ACCT (START_HOUR_MIN, USER_NAME);

create index S_NQ_ACCT_M3 on S_NQ_ACCT (USER_NAME);

2. Next, run the scripts located in C:\OracleBI\server\Sample\usagetracking\SQL_Server_Time to create the tables S_ETL_DAY and S_ETL_TIME_DAY:

-- Target DBMS : SQL_Server
-- TABLE: S_ETL_DAY

DROP TABLE S_ETL_DAY;

CREATE TABLE S_ETL_DAY (
  DAY_DT         datetime null,
  CAL_MONTH      int null,
  DAY_OF_MONTH   int null,
  DAY_NAME       char(12) null,
  DAY_OF_YEAR    int null,
  DAY_OF_WEEK    int null,
  CAL_WEEK       int null,
  MONTH_NAME     char(15) null,
  CAL_YEAR       int null,
  PER_NAME_MONTH varchar(50) null,
  PER_NAME_QTR   varchar(20) null,
  FSCL_YEAR      int null,
  FSCL_QTR       char(5) null,
  DAY_AGO_DT     datetime null,
  WEEK_AGO_DT    datetime null,
  MONTH_AGO_DT   datetime null,
  QUARTER_AGO_DT datetime null,
  YEAR_AGO_DT    datetime null,
  ROW_WID        int null
);

-- Target DBMS : MS SQL Server
-- TABLE: S_ETL_TIME_DAY

DROP TABLE S_ETL_TIME_DAY;

CREATE TABLE S_ETL_TIME_DAY (
  TIME_SLICE datetime NULL,
  HOURS      int NULL,
  MINUTES    int NULL,
  HAGO       datetime NULL,
  MIN_SEQ    smallint NULL,
  HOUR_MIN   varchar(5) NULL
);

create view S_NQ_LOGIN_GROUP as
select distinct USER_NAME as LOGIN, USER_NAME as RESP
from S_NQ_ACCT;
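These time tables exist so the usage reports can do time-series analysis over the tracking data. As a flavour of how they hang together, a hand-written join (column names taken from the scripts above, and the join condition is my assumption rather than anything from the shipped RPD) might look like this:

SELECT d.CAL_YEAR, d.CAL_MONTH,
       COUNT(*)              AS QUERIES_RUN,
       AVG(a.TOTAL_TIME_SEC) AS AVG_RESPONSE_SEC
FROM   S_NQ_ACCT a
JOIN   S_ETL_DAY d ON a.START_DT = d.DAY_DT
GROUP BY d.CAL_YEAR, d.CAL_MONTH
ORDER BY d.CAL_YEAR, d.CAL_MONTH;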

3. Populate data into the tables S_ETL_DAY and S_ETL_TIME_DAY using the scripts:

C:\OracleBI\server\Sample\usagetracking\SQL_Server_Time\SQLServer_create_nQ_Clock.sql

C:\OracleBI\server\Sample\usagetracking\SQL_Server_Time\SQLServer_create_nQ_Calendar.sql
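As a quick sanity check once both scripts have completed, confirm the tables were populated; S_ETL_DAY should hold one row per calendar day, and S_ETL_TIME_DAY should (if I remember the clock script correctly) hold one row per minute of the day:

SELECT (SELECT COUNT(*) FROM S_ETL_DAY)      AS DAY_ROWS,
       (SELECT COUNT(*) FROM S_ETL_TIME_DAY) AS TIME_ROWS;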

4. Change the configuration file C:\OracleBI\server\Config\NQSConfig.INI, updating the [ USAGE_TRACKING ] section:

[ USAGE_TRACKING ]

ENABLE = YES;

//==============================================================================
// Parameters used for writing data to a flat file (i.e. DIRECT_INSERT = NO).
//
STORAGE_DIRECTORY = "";
CHECKPOINT_INTERVAL_MINUTES = 5;
FILE_ROLLOVER_INTERVAL_MINUTES = 30;
CODE_PAGE = "ANSI"; // ANSI, UTF8, 1252, etc.
//
//==============================================================================

DIRECT_INSERT = YES;

//==============================================================================
// Parameters used for inserting data into a table (i.e. DIRECT_INSERT = YES).
//
PHYSICAL_TABLE_NAME = "OBI Usage Tracking"."Catalog"."dbo"."S_NQ_ACCT" ; // Or ""."".""  ;
CONNECTION_POOL = "OBI Usage Tracking"."Usage Tracking Writer Connection Pool" ;
BUFFER_SIZE = 10 MB ;
BUFFER_TIME_LIMIT_SECONDS = 5 ;
NUM_INSERT_THREADS = 5 ;
MAX_INSERTS_PER_TRANSACTION = 1 ;

5. Merge the existing UsageTracking RPD into your RPD

6. Merge the existing UsageTracking web catalog

7. Complete the remaining steps as mentioned in this article

For any newbies, like me:

1. Import rather than merge

—————————

I’d suggest you use the Import function rather than Merge (which I couldn’t for the life of me get to work). Ignore the warning about Import being deprecated when you fire it up; it works just fine.

You need to

a) Copy in the usage repository from the ..\server\sample\usagetracking folder to ..\server\Repository (since it doesn’t use a Windows common dialog control, but only lists the repositories in this folder, or is it those in the same folder as your open repository… whatever…)

b) Open up your repository offline (I seem to recall that the import option is disabled whatever you do if you open it online, so don’t get stuck down that path).

c) Select Tools|Options, then the General tab, and check "Allow import from the repository"

d) Select File|Import|From Repository

e) Select UsageTracking.rpd; the password is blank.

f) Select the Catalogs drop-down

g) Select the "Usage Tracking" catalog and press "Add with Children"

h) Press Next, then Finish

2. Set up the connections

————————-

I know it’s obvious, but I thought I might as well mention it while I’m about it, especially if, like me, you were lazy (you saw all those extra projects you were suddenly faced with, and figured there was a whole load of stuff the developers hadn’t bothered to take out) and thought you only needed to set up the "Usage Tracking Writer Connection Pool" referenced in the config file. Well, that connection pool is just for the writes; the reads use the plain old "Connection Pool". So set them both up, specifying your Data Source Name, User Name and Password.

Then, all that’s left to do is press Save and you’re off: log in, run some reports, and then check your usage stats in Answers. Marvellous, heartening stuff.
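And if nothing shows up straight away, it’s worth checking at the database level that rows are actually arriving; something like the following works on SQL Server, matching the sample scripts (remember the BI Server buffers its inserts, so allow BUFFER_TIME_LIMIT_SECONDS to pass after running a report):

SELECT TOP 10 USER_NAME, START_TS, TOTAL_TIME_SEC, SAW_SRC_PATH
FROM   S_NQ_ACCT
ORDER BY START_TS DESC;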