DB2 PROCEDURE Performance Tuning

I just finished an assignment where I had to tune the performance of DB2 procedures that were performing very badly.  In one environment a run took close to 10 minutes, while in another it took close to 7 hours to complete (same set of data, and the slower environment actually had the better server configuration).

Below are a few checks that I did (and that you would probably do as well) to improve the performance of the DB2 procedure:

  • Checked CPU utilization, disk I/O, network I/O and memory utilization on the server.  All of them were NORMAL.
  • Checked the disk space on the server.  There was plenty of space.
  • Ran statistics on all the tables involved.  No improvement in performance.
  • Compared the access paths of the SQLs used in the procedure in both environments.  The server with the better configuration had the better access path (fewer 'timerons'), yet the queries still performed very badly on it.
  • Created indexes on a few columns, based on what the queries in the procedure were using (a sketch of the commands follows this list).  The access paths improved further, and the execution times of the individual queries also improved.  However, the DB2 procedure was still as slow as it was before the indexes were applied.
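
For reference, the index creation and statistics refresh were along these lines.  This is only a minimal sketch; the schema, table and column names below are hypothetical stand-ins for the actual ones used in the assignment:

    -- Index on the columns used by the queries' WHERE / JOIN clauses
    -- (MYSCHEMA.ORDERS and its columns are placeholder names)
    CREATE INDEX MYSCHEMA.IX_ORDERS_CUST_STATUS
        ON MYSCHEMA.ORDERS (CUSTOMER_ID, ORDER_STATUS);

    -- Refresh the statistics so that the optimizer is aware of the new index
    RUNSTATS ON TABLE MYSCHEMA.ORDERS WITH DISTRIBUTION AND DETAILED INDEXES ALL;
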
After researching for a long time, I found out that the cause was the STATIC SQLs I had used in my procedure.

When an SQL statement is executed, DB2 parses it, compiles it and determines the access path using various information, including the table statistics.  For STATIC SQLs in an SQL stored procedure, this happens when the stored procedure is compiled (created), and the access path information is stored in the database.  The stored procedure then uses those fixed access paths every time it runs.

For DYNAMIC SQLs, on the other hand, the access path is calculated every time the SQLs actually run.

Whether an SQL is STATIC or DYNAMIC depends on how it is written in the stored procedure.  Here is more information on the Differences between STATIC and DYNAMIC SQL.
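
To illustrate the difference, here is a minimal sketch of a DB2 SQL procedure containing one statement of each kind.  The procedure, table and column names are hypothetical and only there for illustration:

    CREATE PROCEDURE MYSCHEMA.COUNT_AND_CLEAN (IN p_status VARCHAR(10))
    LANGUAGE SQL
    BEGIN
        DECLARE v_count INTEGER;
        DECLARE v_stmt  VARCHAR(200);

        -- STATIC SQL: coded directly in the body; its access path is decided
        -- when the procedure is created and is stored with the procedure's package
        SELECT COUNT(*) INTO v_count
          FROM MYSCHEMA.ORDERS
         WHERE ORDER_STATUS = p_status;

        -- DYNAMIC SQL: built as a string and prepared at run time, so the
        -- access path is worked out on each run using the current statistics
        SET v_stmt = 'DELETE FROM MYSCHEMA.ORDER_STAGING WHERE ORDER_STATUS = ?';
        PREPARE s1 FROM v_stmt;
        EXECUTE s1 USING p_status;
    END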

I had a few STATIC SQLs in my procedure and a few DYNAMIC ones, and the bad performance was due to those STATIC SQLs still using their old access paths (determined before the indexes were created) even though the queries themselves had been tuned to perform better.

There are quite a number of ways to compile the procedures again so that they pick the best access paths based on the latest information available:
  • Drop the procedures, run statistics on the tables and re-create the procedures
  • Run the statistics on the tables and re-bind all the packages of the database using 'db2rbind' (a sketch of these commands follows this list).  Check out the link on DB2 Packages: Concepts, examples and common problems, and search for 'db2rbind'.
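
As a rough sketch, the second option boils down to something like the commands below, run from the DB2 command line (the database, schema and table names are placeholders):

    db2 connect to MYDB
    db2 "RUNSTATS ON TABLE MYSCHEMA.ORDERS WITH DISTRIBUTION AND DETAILED INDEXES ALL"
    db2 terminate
    db2rbind MYDB -l rbind.log all

The db2rbind utility re-binds all the packages of the database, so the static SQL inside every procedure gets fresh access paths based on the newly collected statistics.
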
Hence, it is very important to tune the queries that are going to be used in the procedure first (by applying the necessary performance-aiding constructs such as indexes), and only then create the procedure with them.

There might be many more reasons why a procedure does not perform well; however, I thought of penning down my experience and the trick that solved the problem for me.

QE-DEF-0068 Unable to connect to at least one database during a multi-database attach to 1 database(s) in: testdatasourceconnection

While creating a data source connection in Cognos, there are multiple errors that we might be faced with.  Below are a few that I encountered:

QE-DEF-0285 Logon failure.
QE-DEF-0323 The DSN(ODBC)/serviceName is invalid. Either the DSN is missing or the host is inaccessible.
QE-DEF-0068 Unable to connect to at least one database during a multi-database attach to 1 database(s) in: testdatasourceconnection 

UDA-SQL-0031 unable to access the 'testdatasourceconnection' database
UDA-SQL-0532 Data Source is not accessible:<name>


I was trying to connect to a DB2 database from Cognos, and the two resided on different servers.  The DB2 client was installed on the Cognos server so that it could connect to the DB2 server, and an ODBC DSN was created on the Cognos server.
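
For context, the remote DB2 database is typically made visible to the DB2 client on the Cognos server by cataloguing it, roughly along the lines below.  The node name, database name, host and port are made-up values, not the ones from my environment:

    db2 catalog tcpip node DB2NODE remote db2host.mycompany.com server 50000
    db2 catalog database SALESDB as SALESDB at node DB2NODE
    db2 terminate

The DB2 ODBC DSN on the Cognos server then typically points to the catalogued database alias.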

There are multiple posts on the net that give various solutions for the above errors; however, none of them fixed my issue.  Later, I found that the DB2 ODBC driver installed on the Cognos server was a 32-bit driver, while the operating system it was installed on was 64-bit Windows Server 2008 R2.

In such a case, we SHOULD NOT use START --> PROGRAMS --> ADMINISTRATIVE TOOLS --> DATA SOURCES to create the ODBC DSN, as it points to c:\windows\system32\odbcad32.exe (the 64-bit ODBC administrator).

To manage a data source that connects to a 32-bit driver under 64-bit platform, use c:\windows\sysWOW64\odbcad32.exe. To manage a data source that connects to a 64-bit driver, use c:\windows\system32\odbcad32.exe. 

If you use the 64-bit odbcad32.exe to configure or remove a DSN that connects to a 32-bit driver, you will receive the following error message:
The specified DSN contains an architecture mismatch between the Driver and Application

Publish Xcelsius SWF files to Infoview

All of us who are familiar with Xcelsius and Infoview know that the Xcelsius dashboards can be deployed onto Infoview as SWF files. 

If you have the .XLF file (the Xcelsius source file), the process is pretty straightforward:
  1. Open the .XLF file in Xcelsius
  2. Go to File --> Export --> SAP BusinessObjects Platform
  3. Enter the login information for the BO environment, choose the folder where the dashboard needs to be deployed and then publish it.  
However, if you have only the .SWF (Shockwave Flash) file of the dashboard without the .XLF file, you can still add that file to the Infoview platform:
  1. Log in to Infoview
  2. Click on "Add a Local Document"
  3. Select your .SWF file using "Browse".  Select File Type as 'Other'.
  4. For MIME TYPE, type application/x-shockwave-flash
  5. Expand the General properties.  Append .SWF at the end of the title
  6. Save it.  The file type will be shown as Agnostic, but that really does not matter.  The file can still be opened from Infoview.

Converting rows into a single cell (comma separated list) in WebI


This post describes how to concatenate data from multiple rows into a single cell as a comma-separated list.


I had read a post on this in forumtopics long back, but could not find it when I needed it now.  So I experimented for a while based on whatever I remembered and worked out the approach once again.  I am jotting it down here before I forget it, and hopefully someone finds it useful when searching for a similar requirement!

Let us take an example from the eFashion universe.  I am picking the [Lines] and [Category] objects from the Product class for my example:


In the result set, I am restricting the rows (for simplicity) for LINES = Dresses, Jackets and Leather.

For the above data, what needs to be achieved would look something like below:

A method to achieve this at report level in WebI is explained below:
  • Create a variable [VAR Max Category] = Max([Category]) In ([Lines]) to find the maximum category with respect to each Line
  • Create a variable [VAR Concat Category] = [Category] +", "+ Previous(Self) to concatenate the category value with its previous row's value
The records would look like this:

  • Create a third variable [VAR Max Concat Category] = [VAR Concat Category] Where ([Category]=[VAR Max Category]).  This variable returns only those rows of [VAR Concat Category] where the category value is the maximum for that particular Line
On replacing the [VAR Concat Category] column with [VAR Max Concat Category] in the above screenshot, we get:

I have displayed [Category] and [VAR Max Category] only for illustration purposes; they are not actually required to be present / displayed in the report.  On removing those two columns:

Now, the only thing pending is to display, against each Line, only the categories that belong to it.  The formula below does the trick:
  • Create a variable [VAR Category] =If(IsNull(Previous([VAR Max Concat Category]));Substr([VAR Max Concat Category];1;Length([VAR Max Concat Category])-2);Substr([VAR Max Concat Category];1;Pos([VAR Max Concat Category];Previous([VAR Max Concat Category]))-3))
Basically, the formula checks if the previous value of VAR Max Concat Category is NULL.

If TRUE, then it displays [VAR Max Concat Category] as it is, minus the trailing comma and space (which is what the Substr with Length(...)-2 removes).
In the above screenshot, this scenario occurs for LINE = Dresses.  There is no value for VAR Max Concat Category prior to LINE = Dresses and hence it is NULL.

If FALSE, then the previous row's value of VAR Max Concat Category is stripped off from the present row's value of VAR Max Concat Category.
This scenario occurs for LINE = Jackets and LINE = Leather (and any rows that occur below them).  It is done with the help of the Pos() function (to find the position where the previous value of VAR Max Concat Category starts) and the Substr() function.

This is the final output that we intended to achieve at the beginning of this post:


Default Sort in BO Explorer

BO Explorer, as we know, by default sorts the data based on values, highest to lowest.  This might be the preferred behaviour in a few scenarios; however, there might be requirements not to have the data sorted by values by default, but rather in alphabetical order (or chronological order in the case of date-related dimensions).

This default sort setting can be changed in the Manage Infospace screen.  Navigate to the Objects tab --> Facets.  Each dimension (or facet, as it is called in BO Explorer) will have a sort option defined, with an option to change it.  On clicking the drop-down, various sort options are displayed (see the image below for an example).


By default, the "Largest to Smallest" option is selected, meaning the facet is sorted based on the measure value, largest to smallest.  This can be changed to any other type, based on the requirement.