DB2 OLAP Server V8.1
Using Advanced Functions
Corinne Baragoin
Luciana Dongo Alves
Jakob Burkard
Ulrich Guldborg
Jo Ramos
Paola Rodriguez
ibm.com/redbooks
International Technical Support Organization
November 2002
SG24-6599-00
Note: Before using this information and the product it supports, read the information in
“Notices” on page xxi.
This edition applies to IBM DB2 OLAP Server Version 8, Release 1 and to IBM DB2 OLAP
Analyzer Analysis Server Version 8 Release 1.
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
The team that wrote this redbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvi
Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxvii
Part 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 6. Parallel calculation, data load, and export . . . . . . . . . . . . . . . 223
6.1 Performing parallel calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
6.1.1 Understanding parallel calculation . . . . . . . . . . . . . . . . . . . . . . . . . 224
6.1.2 Parallel calculation architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
6.1.3 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
6.1.4 Enabling parallel calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
6.1.5 Identifying concurrent tasks for parallel calculation. . . . . . . . . . . . . 234
6.1.6 Running parallel calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
6.1.7 Parallel calculation performance . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
6.1.8 Monitoring using the log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
6.1.9 Estimating the size of a database calculation . . . . . . . . . . . . . . . . . 248
6.2 Performing parallel load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.2.1 Understanding the parallel load . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.2.2 Enabling parallel load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
6.2.3 Running parallel data load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
6.2.4 Monitoring using the log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
6.2.5 Parallel load performance: Defining the right parameters . . . . . . . . 261
6.2.6 Tools to monitor parallel calculation and load . . . . . . . . . . . . . . . . . 264
6.3 Performing parallel export . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
6.3.1 Running parallel export . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
6.3.2 Monitoring using the log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
6.3.3 Parallel export performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
6.3.4 Exporting files larger than 2 GB . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Appendix C. Enterprise Services sample programs . . . . . . . . . . . . . . . . 489
C.1 Copy application/database using CopyOLAP . . . . . . . . . . . . . . . . . . . . . 490
C.2 Sample runsamples.cmd script. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
6-1 Serial calculation and parallel calculation . . . . . . . . . . . . . . . . . . . . . . 225
6-2 Parallel calculation steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
6-3 Sample.Basic outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
6-4 Calculation process using Application Manager . . . . . . . . . . . . . . . . . 238
6-5 Integration Server data load. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
6-6 Verifying the database size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
6-7 Data load process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
6-8 Parallel load process in a single CPU . . . . . . . . . . . . . . . . . . . . . . . . . 253
6-9 Data load using Application Manager . . . . . . . . . . . . . . . . . . . . . . . . . 257
6-10 Data load using ESSCMD command interface . . . . . . . . . . . . . . . . . . 258
6-11 Integration Server data load. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
6-12 VMSTAT example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
6-13 Hit ratio information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
6-14 Task Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
6-15 MMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
6-16 Parallel export . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
6-17 MaxL export syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
7-1 Administration Services architecture . . . . . . . . . . . . . . . . . . . . . . . . . . 280
7-2 Administration Services: enterprise scenario . . . . . . . . . . . . . . . . . . . . 283
7-3 Administration Services: mixed scenario . . . . . . . . . . . . . . . . . . . . . . . 284
7-4 Administration Services: stand alone scenario . . . . . . . . . . . . . . . . . . 285
7-5 Using Administration Services to create an OLAP Server user . . . . . . . 287
7-6 Using Administration Services to create an OLAP Server group . . . . . 289
7-7 Using Administration Services to delete and rename users or groups 290
7-8 Editing group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
7-9 Adding a user to a group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
7-10 Granting permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
7-11 Migrating users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
7-12 Migrating users to OLAP Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
7-13 User migrated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
7-14 Copy user. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
7-15 Propagating password in a cluster environment. . . . . . . . . . . . . . . . . . 299
7-16 Propagating the password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
7-17 Propagating password dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
7-18 Creating a new user using LDAP external authentication.. . . . . . . . . . 303
7-19 DB2 OLAP Server properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
7-20 Administration Services - comparing DB2 OLAP Server properties . . 308
7-21 Managing sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
7-22 Logging off users — example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
7-23 Logging off users result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
7-24 Administration Services managing locks . . . . . . . . . . . . . . . . . . . . . . . 311
7-25 Creating application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
7-26 Creating, renaming or deleting a database. . . . . . . . . . . . . . . . . . . . . . 316
7-70 User ibmuser created. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
7-71 Creating users in the navigation panel. . . . . . . . . . . . . . . . . . . . . . . . . 376
7-72 Create user window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
7-73 New user in the navigational panel . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
7-74 Administration Services single sign-on . . . . . . . . . . . . . . . . . . . . . . . . 379
7-75 Associating OLAP servers to users . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
7-76 OLAP Servers window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
7-77 Adding a OLAP Server to a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
7-78 Enterprise View for user ibmuser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
7-79 Editing user properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
7-80 Administration Services Enterprise View . . . . . . . . . . . . . . . . . . . . . . . 385
7-81 New custom view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
8-1 DB2 OLAP Server caches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
8-2 Direct I/O versus buffered I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
8-3 Migrating data: first option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
8-4 Enter servers information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
8-5 Select specific application to migrate . . . . . . . . . . . . . . . . . . . . . . . . . . 399
8-6 List of available applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
8-7 Run the migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
8-8 Migration results display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
8-9 Connect.log example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
8-10 Results.log example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
8-11 Data.txt example: user section. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
8-12 Data.txt example: application section . . . . . . . . . . . . . . . . . . . . . . . . . 404
8-13 Secmgr.log example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
8-14 Migrating data: second option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
8-15 Enter server information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
8-16 Select specific application to migrate . . . . . . . . . . . . . . . . . . . . . . . . . . 407
8-17 List of available applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
8-18 Run the migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
8-19 Migration results display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
8-20 Migrating data: third option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
8-21 Enter destination server information . . . . . . . . . . . . . . . . . . . . . . . . . . 410
8-22 Select specific application to migrate . . . . . . . . . . . . . . . . . . . . . . . . . . 410
8-23 Run the migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
8-24 Migration results display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
8-25 Custom Defined Function Manager main screen. . . . . . . . . . . . . . . . . 430
8-26 Custom Defined Function Editor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
8-27 Filling in the Custom Defined Function Editor . . . . . . . . . . . . . . . . . . . 432
8-28 Displaying the CDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
8-29 Formula editor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
8-30 Function templates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
8-31 Formula created in the outline 437
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrates programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to IBM for the purposes of
developing, using, marketing, or distributing application programs conforming to IBM's application
programming interfaces.
The following terms are trademarks of International Business Machines Corporation and Lotus Development
Corporation in the United States, other countries, or both:
ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United
States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both.
C-bus is a trademark of Corollary, Inc. in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure
Electronic Transaction LLC.
IBM DB2 OLAP Server V8.1 (called DB2 OLAP Server throughout this IBM Redbook), based on Hyperion Essbase Server V6.5, incorporates significant improvements and adds an enterprise dimension: multiple OLAP servers (IBM DB2 OLAP Server or Hyperion Essbase Server) can be used together to control and manage your data.
This book positions the new advanced analytics, enterprise and administration
functions, and other features, so you can understand and evaluate their
applicability in your own enterprise environment. It provides information and
examples to help you to get started prioritizing and implementing the new
advanced functions.
The team: Paola, Ulrich, Jakob, Corinne, Jo, Luciana (from left to right)
We would like to especially thank the following people for their specific
contributions to this project, for testing and writing:
Julia L Hirlinger
Lori Norton
Gary Robinson
Marty Yarnall
IBM Silicon Valley Lab
Cintia Y Ogura
IBM DB2 Advanced Technical Support
Thanks to the following people for their help to this project by providing their
technical support and/or reviewing this redbook:
Melissa Biggs
Aaron Briscoe
Larry Higbee
Gary Mason
Gregor Meyer
John Poelman
Allan Wei
Chris Yao
IBM Silicon Valley Laboratory
Vaishnavi Anjur
Hematha Banerjee
Michael Gnann
Bob Jacobson
Al Marciante
Vince Medina
Sujata Shah
Eric Smadja
Richard Sawa
Hyperion Solutions
Doreen Fogle
IBM WW DB2 OLAP Server Technical Sales Support
Ian Allen
IBM EMEA Technical Sales Business Intelligence Solutions
Jaap Verhees
Business Partner from Ordina, Netherlands
Your efforts will help increase product acceptance and customer satisfaction. As
a bonus, you'll develop a network of contacts in IBM development labs, and
increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Part 1. Introduction
In this part of the book, we take you through an overview of the DB2 OLAP
Server V8.1 new advanced functions and describe how to combine the different
components.
Hybrid Analysis can retrieve data from a relational database as if the data physically resided in cube storage. It constructs multiple SQL queries dynamically and automatically, giving the DB2 OLAP Server multidimensional database full flexibility in accessing the relational database.
If you have applications that analyze very large volumes of data, only a smaller physical cube needs to be loaded and precalculated in DB2 OLAP Server, while the lower, more detailed levels of data can be accessed dynamically at execution time. This avoids long load and calculation times for the multidimensional database, and allows you to exploit the mass data scalability that relational databases offer.
One of the most important issues is to design the database and to define how much aggregation is done dynamically in the database.
OLAP Miner
OLAP databases can often hold so much data that it is impossible to search all areas of the cube for useful information. OLAP Miner helps with this data exploration problem by using a data mining algorithm, developed at IBM, to search cubes for extraordinary values, or deviations. A deviation is defined as a value in the cube that differs significantly from its expected value.
Important: OLAP Miner is an IBM DB2 OLAP Server feature that is supplied
free. In DB2 OLAP Server version 8.1, the OLAP Miner feature is packaged as
a separate installation.
Enterprise Services
Enterprise Services gives you the capability to group identical DB2 OLAP Server Version 8.1 databases and use them as a single resource. These databases can reside on multiple machines, and Enterprise Services serves them to user applications as one logical unit.
A new priced feature, the High Concurrency Option, is required for cube clustering, which in turn is required for the workload balancing and failover capabilities.
Prior to DB2 OLAP Server Version 8.1, the OLAP kernel operated in single-threaded mode and performed load and calculation serially, even though the engine was highly parallel for queries.
In a serial calculation (see Figure 1-2), all steps run on a single thread. Each task is completed before the next is started.
[Figure 1-2: Serial calculation — the calculate process uses only one thread; the operating system executes task1 while task2 through task5 wait; CPU 1 is processing and CPU 2 is idle]
If you enable parallel calculation (see Figure 1-3), the OLAP engine analyzes the work for a calculation pass and breaks it down into tasks that can run independently of each other. DB2 OLAP Server passes the tasks that are ready to be executed to the operating system, using up to four threads simultaneously.
[Figure 1-3: Parallel calculation with CALCPARALLEL 2 (or SET CALCPARALLEL 2) — the calculate process uses two threads; the operating system executes task1 and task2 while task3 through task6 wait; both CPU 1 and CPU 2 are processing]
If DB2 OLAP Server determines that parallel calculation does not improve performance, or if the formulas have complex interdependencies that prevent them from being broken into independent subtasks, it executes the calculation serially, even if parallel calculation is enabled.
You can enable the parallel calculation at the server level, application level, or
database level.
Table 1-2 shows the new parameters and commands used to administer parallel calculation.
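As a sketch of how these settings fit together (the application and database names below are placeholders, and the exact scoping syntax should be verified against Table 1-2 and the configuration documentation), parallel calculation can be enabled in the essbase.cfg file either server-wide or for a single database:

   CALCPARALLEL 2
   CALCPARALLEL Sample Basic 4

For one specific calculation, the equivalent setting can be placed at the top of a calculation script:

   SET CALCPARALLEL 2;   /* use two threads for this calculation only */
   CALC ALL;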
For better performance, DB2 OLAP Server divides load processing into two stages: preparation and write (see Figure 1-4). In the preparation stage, DB2 OLAP Server organizes the data source in memory. In the write stage, it writes blocks from memory to disk.
You can define how many threads run for the preparation stage and how many threads run for the write stage. For example, in Figure 1-4 we are using three threads for the preparation stage and two threads for the write stage.
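A minimal essbase.cfg sketch for this example (the application and database names are placeholders; verify the exact scoping syntax against the configuration documentation) allows multiple threads per stage and then assigns three preparation threads and two write threads:

   DLSINGLETHREADPERSTAGE Sample Basic FALSE
   DLTHREADSPREPARE Sample Basic 3
   DLTHREADSWRITE Sample Basic 2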
[Figure 1-4: Parallel data load — the input stage reads a portion of the data, three threads organize it in the preparation stage, and two threads in the write stage write blocks from the data cache (memory) to disk]
Now, using DB2 OLAP Server Version 8.1, you can decrease the export process time by using the new parallel export. The parallel export is executed on multiple threads; each thread retrieves data and writes it to a corresponding export file, allowing I/O parallelism by writing to two or more I/O devices simultaneously. DB2 OLAP Server splits the output into multiple files, up to eight files. In Figure 1-5, the database e-bank is exported using two threads, and two output files are generated.
[Figure 1-5: Parallel export — the export writes to two output files]
To execute the export in parallel, you can use the new PAREXPORT command
(ESSCMD) or you can use the MaxL export command.
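For example, a MaxL statement along the following lines (the application and database names are placeholders; the exact grammar is shown in Figure 6-17) writes the export to two files and therefore uses two threads:

   export database Sample.Basic all data to data_file 'exp1.txt', 'exp2.txt';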
Now you can manage OLAP Server objects quickly, in a visual, familiar setting. You can perform multiple tasks simultaneously, run processes in the background, perform cross-server operations, and manage active user sessions and requests from anywhere in the enterprise.
In contrast, when using Application Manager, you cannot run processes in the background. While you are running processes such as a calculation or a load in Application Manager, you cannot perform other operations. However, when using Administration Services, you can save time by running common processes in the background while you continue administering your OLAP servers.
[Figure: Administration Services architecture — HTTP and TCP/IP connections, HTTP listener, and data store]
It allows you to migrate applications, databases, filters, users, and groups from one server to another without any user interaction.
The currently available hardware platforms are AIX and Windows, and the supported DB2 OLAP Server versions are 1.1, 7.1, and 8.1. Figure 1-7 shows some migration scenarios.
You can use one of the following options to migrate your data:
Migrate security data directly from one OLAP Server to another OLAP Server
on the same platform.
Copy security data into a data file that can be re-used to create security data
on any of the supported platforms.
Migrate security data onto an OLAP Server using a data file
To run the Security Migration tool, the installation program must be downloaded
from the DB2 OLAP Server Web site:
http://www.ibm.com/software/data/db2/db2olap
Now you can access these functions and operators using the EssListCalc
API function.
For this book, we also had the opportunity to work with two real-life case studies,
which cover a regional portion of E-banking and Online Investment in a financial
institution.
Even though this is the basic version of DB2 OLAP Server, you have the
functions for building, maintaining, and using OLAP applications. The
functionalities that are part of this configuration include:
Application Manager for building and deploying your OLAP applications, and for managing users, security, and overall OLAP database administration tasks.
OLAP Miner for mining your multidimensional applications for deviations in
data. This is a great way to explore the story behind your data.
Spreadsheet add-in for browsing, analyzing, drilling, and slicing your data. This feature also comes with write-back functionality, so you can update your OLAP application from the client. This is helpful, for example, if you are doing budgeting and forecasting.
To access your OLAP application by means other than the spreadsheet add-in, you can use a wide variety of products that support the Essbase API, for example, Analyzer, Brio Enterprise, and Business Objects.
With Application Manager, you build the outline for the multidimensional OLAP
application. You can use relational databases as well as flat files for sources, and
by using the Data Preparation Editor in Application Manager, you can edit the
format of the data that will be loaded from the data source to the OLAP cube
through DB2 OLAP Server. After the load and calculation of data in the OLAP
cube, you can access the data using the spreadsheet add-in and mine the data
using OLAP Miner.
[Figure: DB2 OLAP Server 8.1 configuration — the RDBMS source and the Integration Server metadata catalog are accessed through ODBC by Integration Server and by the Hybrid Analysis relational access in DB2 OLAP Server 8.1; OLAP command interface and API clients connect over TCP/IP to the OLAP cube]
Once the OLAP model and metaoutline are built, loading and calculation of the data take place and the OLAP application is created. This task is done through DB2 OLAP Server, which also handles end-user application requests for data whenever the data is ready for use.
Enterprise Services uses and delivers the Essbase Java API, which is not included with the other APIs.
The tool updates the essbase.sec file on the destination server and generates log files with information on the servers and data chosen for migration, as well as a data file containing the migrated security data. The data file can be copied onto a destination server and used to migrate data (it can also be used as a backup of the security data). The tool also migrates the data files associated with the applications and databases, including outlines and reports.
Analyzer has been completely re-architected for this new version and offers new
major changes and enhancements:
The user interface has been reworked, with new options such as these:
– Seamless access to Hybrid Analysis applications.
– Relational integration that provides an improved user interface for defining
direct access to relational databases, in addition to OLAP Servers.
– Before this version, Analyzer could only access the multidimensional
cubes created by Application Manager or Integration Server.
– Users are now able to access data regardless of where it is located, and
merge relational and multidimensional data together on a single report.
The user interface has been re-architected to leverage the power and
performance of open systems platforms, including Windows NT/2000/XP, AIX,
and Sun Solaris.
The Hyperion Analyzer Java-based Web Client (100% Java) has been
enhanced to contain major features from the Hyperion Analyzer 5.0.2
Windows client and Design Tools.
[Figure: DB2 OLAP Server 8.1 configuration with all components — in addition to the RDBMS source, Integration Server, Hybrid Analysis relational access, metadata catalog, and flat files, it shows Administration Services with its client, the OLAP command interface, the API, and Enterprise Services with JAPI, CORBA, EJB, and HTTP access for Enterprise Services clients and domain storage, connected over TCP/IP and ODBC]
The E-banking application uses the following 9 dimensions (see the outline in
Figure 2-4):
1. Measures
This dimension contains transactions.
2. Time
This dimension contains, for this case study, only fourth quarter 2001 and first
quarter 2002. Hierarchy is Year - Quarter - Month. There are 193 members in
the dimension.
3. Customer
The customer segment is divided into Private and Corporate customers.
Private customers are divided into 3 sub-segments: segment 1, 2, and 3.
Corporate customers are divided into 5 sub-segments: segment 4, 5, 6, 7,
and 8. All segments are divided into existing known sub-segments. The
element “Others” is a grouping covering employees, internal customers, and
non-segmented customers. The segmentation is based on a set of rules
about the customer’s engagement with the bank. This dimension consists of
32 members.
4. Transaction type
Transaction types are divided into Logon, Payment, Account request, and
Trades. Payment is divided into different kinds of payments, such as money
transfers. There are 12 members in the dimension.
The cube is updated on the first working day of each month with data from the previous month. Each monthly load is approximately 10 MB, which is also the expected monthly growth rate of the application.
The data sources actually available to satisfy these business needs are the data
warehouse data and data from the Web log, detailing the application use.
Keeping the point above in mind, the business requirements for analysis are
related to either marketing or maintaining the analysis factors. As described later
in detail, functionalities can be removed or modified in ways that affect the
analysis environment.
The business needs can be separated into three groups and organized with
specified prioritized questions to answer:
2. Segmentation/marketing
These are some business questions concerning this group:
– How can you segment the users based on their use of the application? Is there common behavior within certain groups of customers? You can use this information to define groups of customers toward which you can direct your marketing efforts.
– How can you analyze the groups into which customers are split (by age,
geography, sex, and so on) in order to decide whether to direct your
marketing efforts towards existing customers who are not using the
specific functionality, or towards completely new potential customers?
– Is the use of the part of the application without logon increasing or
decreasing, as compared to the part where you have to log on? This will
help you to decide which information has to be in the public area of the
application, in order to attract more users to become actual customers of
the application.
3. Application development
These are some business questions concerning this group:
– By grouping the functions in the online investment application and
analyzing them, how can you define new functionalities or extend existing
ones? This is where the interest is greatest or the benefit is largest for the
bank or the customer (for example, if the function to create your own stock
lists is very popular, then you might want to consider extending it to 6 or 8
lists).
– How can you get a tailor-made application and market the application
towards specific groups of customers with specific needs? This is a matter
of customizing the application to fit the different segmentations of
customers (different segments need a different level of service/quality).
– How can you know if certain services in the application are used or not?
Especially, you could use this information to decide on the following:
• If some functions in the application need to receive more attention.
• If parts of the application should be removed from the application.
– How can you know whether there is an economic benefit in further developing specific services or functions in the application?
The Integration Server model (see Figure 2-5) designed to cover the business requirements contains 1 fact table and 11 dimension tables. The fact table holds the amount of transactions for each combination of members in the dimensions. The fact table data is loaded, pre-accumulated, into DB2 UDB weekly.
Since the Online Investment application on the Internet has a public part where the customer does not need to log in, customer-based data such as gender, age, and so on is only known when the customer has logged in to the application. To cover this, the dimensions related to customer information are built with a member called “Unknown”.
Based on the definitions of the dimension tables to suit the business needs, the
metaoutline appears as shown in Figure 2-6.
[Figure: test setup — W2k clients running the Enterprise Services Console, Administration Services Console, and API; the Fermium (W2k), Sicily (AIX), and Corinne (W2k) servers running DB2 OLAP Server, Integration Server, Administration Services, DB2 OLAP Miner, and Hybrid Analysis (clustering, failover, and parallel calc)]
Figure 2-7 Cases and test setup for E-banking and Online Investment
The Administration Services Console connects to the Fermium server where the
Administration Services Server is running, and holds the administration link
between Cayman (AIX), Sicily (AIX) and Corinne (W2k) servers.
Cayman is a 2-way AIX server and is used to test OLAP Miner and the E-banking
application.
Sicily is a 4-way AIX server and is used to test parallel load and calculation. This is also where OLAP Miner performance is tested, on both the E-banking application and the Online Investment application.
To the end user, the cube employing Hybrid Analysis still looks like a single,
seamless multidimensional application. The Hybrid Analysis part of the cube is
computed dynamically by SQL queries against the relational source database.
This means that Hybrid Analysis accesses member names and cell values in the
relational data source through dynamically generated SQL, instead of being
pre-loaded and pre-calculated in OLAP Server.
The hybrid analysis portion of the cube can be thought of as a logical (or virtual)
partition of the cube residing in a relational database. The member names and
cell values of this portion of the cube are computable at query time by querying
the source relational database using information stored in the Integration Server
metadata catalog.
Hybrid Analysis is similar to any drill-through ability, in the sense that it provides a
way of getting more detailed data from the relational source. However, it is more
convenient, because you don't have to create and maintain several Drill-Through
Reports.
In the following sections, we take you through the steps required for setting up
the Hybrid Analysis environment. We cover these topics:
When to use Hybrid Analysis, and the architecture for setting up the Hybrid
Analysis architecture
The challenges associated with using Hybrid Analysis
Implementing the Hybrid Analysis application using Integration Server
How Hybrid Analysis queries the relational part of the OLAP application, and how to use the reversing capability
Setting variables for the configuration file
How to do tracing on the Hybrid Analysis environment
Using Hybrid Analysis theoretically gives you room to build cubes with thousands or even millions of members. This means, for example, that if you need to add customers to your OLAP application as single members, this is now possible through Hybrid Analysis.
If you have several cubes located on the same physical machine, Hybrid Analysis gives you a major advantage: the need for multidimensional storage, and therefore the disk space required, is significantly reduced (see Table 3-2 on page 55). These advantages result from the following characteristics:
Data from the relational input source used in the OLAP application does not
need to be loaded, and therefore is not calculated.
Calculation time dominates over load time, so reducing calculation time significantly shortens the total application build process.
Data stays in the relational source and is accessed through Hybrid Analysis.
DB2 OLAP Server requires the entire outline for the multidimensional part to
be loaded into memory. When DB2 OLAP Server loads an outline into
memory, each member requires about 500 bytes of space. An outline with
one million members will require 500 MB of memory. You can use Hybrid
Analysis to make your outlines smaller and require less memory when they
are loaded.
Because of the extreme reduction in calculation time and the smaller storage requirement, Hybrid Analysis allows more frequent updates and makes it possible to run more OLAP applications on the same physical server.
In summary, using Hybrid Analysis offers many benefits. You can use Hybrid
Analysis and place part of the OLAP application in relational storage whenever:
Outlines have a large number of members.
Outlines require large amounts of detailed data.
The portion of the hierarchy stored relationally is infrequently accessed.
It is necessary to reduce disk storage.
It is also necessary to significantly reduce memory usage for member load.
Shorter batch windows are required for updating the OLAP application with a
higher frequency.
The other relational source, shown in Figure 3-2, contains the catalog, which holds the metadata that describes the outline of the hybrid cubes. This catalog is used by Hybrid Analysis during the execution of queries involving the relational sources.
[Figure 3-2: Hybrid Analysis environment — query and reporting tools connect over TCP/IP; Hybrid Analysis accesses both the source RDBMS and the catalog RDBMS through ODBC]
With Integration Server, you build the Hybrid Analysis application, using the
Model and Metaoutline functions of the Integration Server. The Hybrid Analysis
internal function reads the information in the catalog about the Model and
Metaoutline and uses this metadata to formulate the queries that are run against
the source database.
Both the RDBMS for the OLAP catalog and the RDBMS for the data source can reside on other platforms if this is more feasible in a day-to-day operational environment.
To query the relational part of the hybrid cube, the OLAP server uses the model
and metaoutline information in the Integration Server metadata catalog to obtain:
Information on which levels are in the Hybrid Analysis part of the application,
from the lowest level and upwards
The OLAP server requests data from the Hybrid Analysis function only when it cannot get the data from the regular part of the cube.
From the application developer’s point of view, the challenges described in 3.5.2,
“Hybrid Analysis member transformation” on page 52 require consideration when
building hybrid cubes:
The multidimensional part of a Hybrid Analysis application will be the same as one built without the relational part. Because of that, Integration Server ignores filters on Hybrid Analysis levels during data load and only sums up the values of these levels. To keep the data consistent, Hybrid Analysis also ignores filters and assumes additive consolidation on relational levels.
Recursive hierarchies are not supported for Hybrid Analysis, and no manual
changes should be made to member names using Application Manager. Level 0
data should always be in the regular portion of the cube. The level number is
undefined for Hybrid Analysis members. Also, you can’t use Hybrid Analysis in
the time dimension if there are Time Balance members such as inventory, and
you should not use SQL Override in metaoutlines with Hybrid Analysis.
In the following sections, we explain some of the limitations associated with using
Hybrid Analysis. This is important because some of the limitations may impact
the design of the OLAP Application.
Note: When setting up the ODBC driver on both OLAP Server and Integration Server, use the same ODBC level on both. If OLAP Server and Integration Server run on different physical servers, the ODBC names for the metadata catalog and the source database must be the same on both.
The metaoutline is used by Integration server and DB2 OLAP Server at query
time, and is based on the structure of the specified OLAP Model. The
metaoutline is the template containing the structure and rules for creating the
physical outline of the application.
Figure 3-4 Enabling Hybrid Analysis from this point in your application
The level you just enabled for Hybrid Analysis is now shown with an icon
illustrating half a cube, and half a relational database, as seen in Figure 3-5.
Figure 3-5 only shows one level being in the Hybrid Analysis relational part of
the application. Members from more than one dimension can reside in the
relational Hybrid Analysis part, while members from the account dimension
cannot be placed in the Hybrid Analysis part of the application.
When using Hybrid Analysis, the OLAP server outline needs to be consistent
with the metaoutline from which it was built, and the relational data source
needs to be consistent with the model/metaoutline. If the regular
multidimensional part of the application does not change, you can use
Update Hybrid Analysis Data... instead of loading the members again. Go to
Outline --> Update Hybrid Analysis Data... as shown in Figure 3-6.
Figure 3-6 Selecting the Update Hybrid Analysis Data... option from the menu
The OLAP application and database dialog box appears. Select the Application Name and Database Name you want to synchronize (see Figure 3-7), then choose the Member Load Filter, which in this case is the default. For Command Scripts, we have shown a recalculation of our Brokers Commission as an example. See more on calculation scripts in IBM DB2 OLAP Server 8.1: Database Administrator’s Guide, SC18-7001-00.
Table 3-1 shows which transformations are supported by hybrid cubes, and which are not.
Even though the user interface supports the grandparent name prefix with separator, this functionality does not currently work. As a workaround, the unsupported transformations can be implemented through a view: create a view on your source database, use SQL to implement the prefix or suffix changes, and then use this view as a data source in Integration Server when building the model.
If the supported transformations are used, the following limitations apply to the
transformation rules:
1. Database values cannot have a separator character similar to the one being
used for prefix or suffix.
If the metaoutline includes the relational part of the hybrid cube, the Integration
Server will warn if any transformation is not reversible, when the metaoutline is
saved or verified (see Figure 3-8). This check only applies for prefix or suffix
transformation rules.
We then built the model and metaoutline in Integration Server, sorting the dense dimensions first. We then defined the level for Hybrid Analysis, choosing to build one level first (as shown in Figure 3-5) and to test the member load, data load, and finally the calculation. The same procedure was carried out for two levels of Hybrid Analysis and for the purely multidimensional application.
Before each test, we restarted DB2 to clear the buffers and ensure consistency in the test.
Table 3-2 Test results: loading and calculating the Online Investment application
                          Multidimensional   Hybrid Analysis   Hybrid Analysis
                                             (1 level)         (2 levels)
Member load (seconds)            18                21                18
Application Manager only loads the outline, which does not contain the members of the relational Hybrid Analysis part; therefore, it cannot show the Hybrid Analysis relational members, and you will see only the regular part of the cube. However, it can tell whether a dimension has Hybrid Analysis levels and whether a leaf level of the multidimensional part has relational children.
The option to enable or disable Hybrid Analysis is shown only when you select a dimension that contains Hybrid Analysis members. When a dimension contains Hybrid Analysis, you can see a “Hybrid Analysis” comment in the outline editor. See Figure 3-10.
When working with Hybrid Analysis, you can use Administration Services only to enable or disable Hybrid Analysis. Other changes made to the outline using Administration Services will not have any effect in Integration Server. When working with Hybrid Analysis models and metaoutlines, it is therefore recommended that Administration Services be used only as a viewer for the application.
To illustrate the query efficiency of Hybrid Analysis, we set up a small test scenario on the Online Investment cube. This is the same application as shown in Table 3-2, with one level of Hybrid Analysis. We run a simple query, drilling down on Branches and Number of transactions.
Unique indexes added on all keys in the rest of the dimension tables;       0:08
still only an index on ORG_ID in the fact table.
Indexes on all keys in the dimensions and on all keys in the fact table.    0:02 - 0:06
For the test shown in Table 3-3, it is important to note that during testing we experienced some network difficulties, which may have had a negative effect on the measured query response times.
The following parameters are important in regard to Hybrid Analysis, and are set
in the essbase.cfg file located in the ARBORPATH/bin directory:
Enable or disable retrieval of members (HAENABLE). This is the most important
setting, since it enables or disables Hybrid Analysis on the server.
HAENABLE is explained in 3.7, “Queries in the Hybrid Analysis environment”
on page 68.
Size the amount of memory for member caching (HAMEMORYCACHESIZE). This
reduces the amount of member lookup SQL, while data lookup still takes the
same duration.
Determine the maximum number of connections per OLAP database (HAMAXNUMCONNECTION).
These parameters can be set in the essbase.cfg file on the server, but changes will not take effect until you restart the OLAP Server. For tips and tricks on how to build your OLAP cube, define dimensions and members, set up calculations, and define dense and sparse data, see the redbook DB2 OLAP Server: Theory and Practices, SG24-6138-00.
Hybrid Analysis detects duplicate member names by checking the member cache to see whether the same member with a different ancestor already exists. The member cache can only store a certain number of members; if the duplicate member is not in the member cache at that time, Hybrid Analysis will not be able to spot the duplicate member. Therefore, member name transformations should always be reversible (see 3.5.2, “Hybrid Analysis member transformation” on page 52).
If your query exceeds the HAMEMORYCACHESIZE, you will get the error message shown in Figure 3-12. The appropriate response is to increase HAMEMORYCACHESIZE.
Syntax: HAMAXNUMCONNECTION n
n: the value of n specifies the number of connections per OLAP database that an
OLAP Server can keep connected to a relational database. The default is 25, and
the maximum is n=100 (see Example 3-4).
Note: You may need to set the value for the HAMAXNUMCONNECTION higher if, in
Integration Services Console, you set the security mode to use the OLAP
Server user ID to connect to the source database. Refer to Integration
Services online help for more information on the security mode setting.
Remember that there are two security modes in accessing the relational data
source:
1. Using the OLAP Server user ID and password
2. Using the user ID and password stored in a Hybrid Analysis outline.
If you choose security mode (1), different clients having different DB2 OLAP Server user IDs and passwords will cause multiple connections to be opened; if you choose security mode (2), all clients may share a single connection. See 3.11, “Security configuration for Hybrid Analysis” on page 85.
Syntax: HAMAXNUMSQLQUERY n
n: The value of n is the number of simultaneous SQL queries per OLAP Server
query session. The default is 50, the minimum is n=10, and the maximum is
n=255.
In Example 3-5, the maximum number of SQL queries per OLAP Server query
session is set to 10 (see Example 3-5).
Example 3-5 Number of SQL queries per OLAP Server query session is set to 10
HAMAXNUMSQLQUERY 10
Syntax: HAMAXQUERYROWS n
n: The value of n determines the maximum number of rows retrieved per SQL
query (see Example 3-6). The default is zero, meaning that no row limit is
applied.
Example 3-6 OLAP Server processes up to 100,000 rows per SQL query
HAMAXQUERYROWS 100000
Syntax: HAMAXQUERYTIME n
n: The value of n determines the time limit per query in seconds. The default is
zero, meaning that no time limit is applied.
This is a recommended setting to cope with frustrated end-users waiting for their
query to return a result.
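For example, the following purely illustrative setting stops any single SQL query that runs for more than three minutes:

   HAMAXQUERYTIME 180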
Syntax: HARETRIEVENUMROW n
n: The value of n specifies how many rows to process at a time. The default is
100, and the maximum is n=200.
In Example 3-8, an OLAP server query processes rows from each SQL query in
sets of 50 rows.
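Example 3-8 An OLAP server query processes rows from each SQL query in sets of 50
HARETRIEVENUMROW 50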
These settings also, indirectly, help control the behavior of the end-user community. They should be set to optimize the response time from the hybrid relational part of the application.
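Putting these together, a Hybrid Analysis section of essbase.cfg might look like the following sketch; every value here is illustrative only and should be tuned for your environment, and the changes take effect only after the OLAP Server is restarted:

   HAENABLE TRUE
   HAMAXNUMCONNECTION 50
   HAMAXNUMSQLQUERY 50
   HAMAXQUERYROWS 100000
   HAMAXQUERYTIME 600
   HARETRIEVENUMROW 100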
Important: Bear in mind, that with Hybrid Analysis, we are deferring work to
query time that could have been done at load/calculation time, because we
are querying a relational non-precalculated application.
Before using Hybrid Analysis, you have to enable the functionality in the
essbase.cfg file. HAENABLE turns on or off the ability to retrieve members of a
Hybrid Analysis Relational Source.
The operations will span the relational portion when all of the following
conditions are met:
Setting HAENABLE in “essbase.cfg” file as TRUE (default is FALSE)
Relational portion of the outline dimension is enabled using Application
Manager
HYBRIDANALYSISON in the report (the default).
The operations will NOT span the relational portion when any of these conditions
are met:
HAENABLE in the essbase.cfg file has been set to FALSE (default).
The relational portion of the outline dimension is disabled using Application
Manager.
There is a HYBRIDANALYSISOFF somewhere above it in the report.
Here is the priority among the settings for Hybrid Analysis (highest to lowest):
1. Setting of “essbase.cfg” (HAENABLE=TRUE)
2. Setting in outline (see Figure 3-16)
3. Setting in reports (HYBRIDANALYSISON / HYBRIDANALYSISOFF)
When you send a request to query the relational data source, the Hybrid Analysis
function does two things:
1. It resolves member names, which includes the reversing that makes it possible to get the database values, and then runs queries against the dimension tables.
2. It gets the data for the cells, which means running SQL queries against the fact table. This is often done with joins to multiple dimension tables.
Getting data gives you the opportunity to combine queries together into one
query and thereby increase performance. Alternatively, large queries can be
broken into smaller parts, or sub-queries, and be run in sequence.
The same example is shown in Figure 3-16 with Integration Server. It also shows how you control how many levels in your application are multidimensional and how many become relational. An important point is that all Hybrid Analysis levels must be at the bottom of the member hierarchy, as shown here. For example, you cannot have Area_ID in the relational database and its child Branch_ID in the multidimensional part of the application.
It is possible to have more than one level in the relational part of the application,
but for performance reasons, you typically only enable the lowest level for Hybrid
Analysis.
Member name lookup SQL statements are generated for each hybrid analysis
dimension until the member is found. Example 3-11 shows the SQL generated by
Hybrid Analysis retrieving transactions for specific branches in region East.
A simple query to Hybrid Analysis may result in more than one SQL query against the data source, as shown in Example 3-11. This can be slower than the equivalent query against a purely multidimensional application. One way to improve the query response time is to use member caching with HAMEMORYCACHESIZE, provided the members fit in the memory cache.
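As a purely hypothetical illustration of this pattern (the table and column names below are invented for the sketch and are not the generated SQL of Example 3-11), the member lookup and the cell retrieval are issued as separate statements:

   -- member name lookup against a dimension table (hypothetical schema)
   SELECT BRANCH_ID, BRANCH_NAME
   FROM BRANCH_DIM
   WHERE REGION_NAME = 'East';

   -- cell data retrieval against the fact table, joined to the dimension table
   SELECT D.BRANCH_NAME, SUM(F.TRANSACTION_COUNT)
   FROM TRANS_FACT F, BRANCH_DIM D
   WHERE F.BRANCH_ID = D.BRANCH_ID
     AND D.REGION_NAME = 'East'
   GROUP BY D.BRANCH_NAME;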
Leaving a level or two of the model in the relational source system instead of loading them into the cube can dramatically reduce cube size. Smaller cubes load and calculate faster and consume less disk space and processing power.
If you are using DB2 UDB as the relational source, you can improve performance
by building DB2 UDB indexes and aggregates that the optimizer can use to
speed up query processing. Although the results can be significant, delivering
sub-second response times for many relational queries, you are now managing,
tuning and using resources for both DB2 UDB and DB2 OLAP Server
aggregates.
Hybrid Analysis was designed to meet the requirement for one additional level of detail, so that users do not have to leave their OLAP client tool and use another tool to get to the core of the problem or opportunity. At this point in the analytical process, the regular slice, dice, and drill OLAP analysis is done; the final step is the drill that takes you from a set of dimensions at low levels down one more level of detail to find the actual customers, products, or time periods involved. This means we expect SQL queries to be generated relatively infrequently, we expect them to be at a low level across many or all dimensions, and we expect a relatively small result set.
Hybrid Analysis was not designed to offer consistently high-speed, interactive, ad hoc slice, dice, and drill across both cube and relational data. Those areas of the data where your users will explore summaries in true OLAP fashion belong in the cube. Those areas of the data where the queries are infrequent, low level, and well qualified belong in the relational source for Hybrid Analysis.
In the rest of this section, we document the results of some research we carried out to determine the overall impact of Hybrid Analysis on the OLAP server and to illustrate the effect of some good and bad design choices. We carried out these tests using the same OLAP model discussed earlier in this chapter. For comparison, we built the complete cube as it is currently used in production, and we built a variant where we left one level of one dimension in DB2 UDB and made it available through Hybrid Analysis.
Scapa StressTest works by simulating multiple DB2 OLAP Server users in a way
that is, as far as the system is concerned, completely indistinguishable from real
users. By measuring the performance seen by virtual users, it is possible to
understand how the system would behave under a corresponding level of real
usage. Scapa StressTest works by first recording user interactions into test
scripts. The capture component interprets the “rules” that govern the application
interface so that the recorded user interactions can be represented as a
replayable script.
The core technology includes methods of analyzing the OLAP interactions that
were recorded to identify the relationships between objects. The outline is then
searched for similar relationships to find sets of objects with the same
relationships. This adds plausible variables to the initial user interactions that
were recorded. Generalization, the automatic generation of plausible variants of
actual user behavior, enables rapid, accurate simulation of the behavior of large
numbers of users doing different tasks on an OLAP application. The
generalization process allows entire applications to be thoroughly analyzed and
tested.
Our test scenario involved capturing a set of user interactions that began with a
spreadsheet template. Having logged in to DB2 OLAP Server, we retrieved data
into this grid and then drilled down from the top of the hybrid enabled dimension
into the relational data. Along the way we refined the analysis by using keep only
operations. Our drill path was from the cube into the relational data along one
dimension. All other dimensions were at level 0. This base scenario simulates
the type of focused drill down into the relational data that Hybrid Analysis was
designed to support.
We fed this basic scenario into the Scapa generalization engine and accepted
the default of 128 variants. Scapa analyzed the analysis pattern, matched it
against the outline and generated a generalized script of variants on our basic
scenario. We were able to use the same generalized script against both the cube
and the hybrid versions of the OLAP model.
Figure 3-17 Scapa workload running against the DB2 OLAP Server cube
Figure 3-18 shows a screen capture for the same test suite run against the hybrid
model.
From these results we can see that when applied correctly Hybrid Analysis
performs very well. The change from cube to hybrid resulted in an increase in
average task time from 19.76 seconds to 20.198 seconds and a corresponding
increase in deviation from 0.072% to 2.192%. Similarly, the task rate decreased
from an average of 2.018 to 1.945. These changes are expected and are more
than manageable.
The results documented so far indicate that well designed use of Hybrid Analysis
can increase the scope of analysis with manageable impact on performance and
system load. The next series of tests were designed to illustrate some of the
dangers of bad design. For these tests we simply timed some spreadsheet
retrievals against the hybrid model when conducting slice, dice, and drill
operations at summary levels. These tests indicate possible results if hybrid
dimensions are included in general analysis. Key design points to consider
include these:
1. If a query includes a relational member, that entire query has to be resolved
through SQL run against the database. Imagine that you open up the default
spreadsheet view of a cube with all dimensions at the highest summary level
and then drill down on one dimension until you hit the relational members.
The last drill operation will be resolved by running a very large aggregation
using SQL.
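As a hedged illustration of why this is expensive (again with hypothetical table
and column names, not the statement DB2 OLAP Server actually emits),
resolving such a drill through SQL forces the relational database to aggregate
the fact table across every dimension that is still at a summary level:
-- hypothetical aggregation when all other dimensions are at their top members
SELECT b.Branch_ID, SUM(f.Transactions)
FROM Fact_Table f, Branch_Dimension b
WHERE f.Branch_ID = b.Branch_ID
GROUP BY b.Branch_ID
-- no other filter predicates apply, so the whole fact table is scanned and summed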
Table 3-6 details response time for queries that illustrate these design points.
Test  Starting view                          Operation                                Response time
1     All dimensions at top level            Drill down on hybrid dimension into      395 seconds
                                             relational data for 30 level 0 members
3     All cube dimensions at top level,      Drill down on one cube dimension to      51 seconds
      1 relational member from hybrid        10 level 2 children
      dimension
These tests have shown that Hybrid Analysis can perform very well when
designed with care. The scope of OLAP analysis can be extended to include an
additional level of detail with acceptable response times and little impact on the
server. We have also shown that badly designed applications of Hybrid Analysis
can result in large numbers of complex SQL queries that aggregate large
volumes of data to summary levels.
Used with care, Hybrid Analysis can offer users an easy way to complete
analytical tasks at a detail level, a requirement many users have asked for. OLAP
designers have struggled to balance the requirement for more detail against the
time and resources the resulting cubes consume. Hybrid Analysis offers
designers a way to allow users to seamlessly navigate to more detailed
information without causing massive cube explosion.
A Hybrid Analysis cube can still have Drill-Through Reports for the
multidimensional part of the application. Drill-Through Reports generate static
reports based on a single dynamic SQL statement, and they are a flexible tool
when you need reports for different intersections in the cube or you want to see
other table columns. Hybrid Analysis, by contrast, is a live analytical
environment in which multiple dynamic SQL queries can be generated
automatically if needed (see Table 3-7).
Table 3-7 Hybrid Analysis and drill through reports — general comparison
Hybrid Analysis                                Drill-through
Multiple dynamic SQL queries generated         Predefined SQL with dynamic execution
automatically
Easy to administer compared to managing        You need to create all SQL queries and
several drill-through reports                  create each report
Even though there are some characteristics common to both Hybrid Analysis and
Drill-Through Reports, they serve different purposes in an OLAP application.
Note: DB2 OLAP Server V8.1 only supports DB2 UDB and Oracle as relational
sources. Access to other RDBMSs is expected to be provided through fixpacks.
For better performance and also for easier maintenance, the Hybrid Analysis
feature is best implemented on a star schema model. It can also be
implemented on a snowflake data model or on a star-like model.
We define a star-like model as a relational model that physically is not a pure
star schema implementation, but that can be presented as one by using features
of the relational database, such as views.
A star schema model makes it easy and simple to build an OLAP model with
Integration Server, because it follows the same concept as a multidimensional
model. Snowflake models are more complex to maintain (many more tables)
when compared to a star schema model. Many more joins are required in order
to build the model and, consequently, the cube. When using a snowflake model
to build an OLAP cube using Integration Server, you can have performance
problems during the data load process, because more joins have to be
processed.
Entity Relationship (ER) models may require more development effort to build
the OLAP model when using OLAP Integration Server. Additional tables or views
may be required.
Important: Regardless of the data model that you are using to build your
OLAP model, for performance reasons you need to have a very good
understanding of the joins and queries that you need to perform, and you then
need to define a good index strategy for them.
Attention: DB2 OLAP Server V8.1, when using Hybrid Analysis and
Drill-Through, does not support dimensions with parent-child hierarchies.
This means that when creating a star schema model for dimensions with variable
hierarchies (different levels within the same dimension), you need to create a
model that has one column for each level of the hierarchy.
In other words, the dimension table should use generations instead of
parent-child references.
Important: When you start the OLAP Server, make sure DB2 UDB, where the
relational source is located, is running. Otherwise, you will need to issue a
db2start and, after that, restart the OLAP Server.
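A minimal sketch of that sequence on AIX follows; the OLAP Server agent start
command shown, and any arguments it takes, are assumptions to verify against
your own installation:
db2start                  # start the DB2 UDB instance that holds the relational source
cd $ARBORPATH/bin
./ESSBASE &               # assumed command to restart the DB2 OLAP Server agent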
In Integration Server, you choose the security settings dialog box by right-clicking
the top of the outline in the right panel of the metaoutline window and selecting
Properties as shown in Figure 3-19.
Use the OLAP Server user name and password to connect to the data source
option: Select this option to use a common OLAP Server user name and
password to connect to the data source when the hybrid request is made.
Use the user-defined user name and password to connect to the catalog and
server option: Select this option to use a user-defined user name and
password to connect to the OLAP Metadata Catalog and data source. With
this option, the administrator must keep the OLAP Server user IDs and
passwords synchronized with the DB2 user IDs and passwords.
One drawback of both traces is that messages of all threads are logged to the
same file. It may be hard to filter the messages related to the error thread.
By turning on the CTrace, you can see what the SQL statement looks like and
what happened before the error. By turning on the performance trace, you can
see, for a query, how much time was spent generating the SQL statement,
executing the SQL, looking up members, and looking up data.
You use the ESSCMD command line to turn the CTrace and performance trace
ON or OFF, as shown in Example 3-12. Turning on traces will affect performance,
so remember to turn them off whenever you are not using them.
If you clear the log by deleting the essha.log, you will need to restart Integration
Server as well as DB2 OLAP Server, before a new essha.log is written.
When you mine an OLAP application, you will use the OLAP Miner Client, as
shown in Figure 4-1. From the OLAP Miner Client, you have access to the OLAP
Server via the OLAP Miner Server and the Essbase API. This gives you the
advantage of not only using the OLAP Miner Client, but also viewing, drilling, and
reporting your mining results in Excel or Lotus 1-2-3 through the spreadsheet
add-in. Furthermore, the OLAP Miner Client gives you the possibility to write
Linked Report Objects (LROs) directly back to your OLAP application so these
results will be accessible to other users of the OLAP application, but without the
need to perform the mining runs themselves via the OLAP Miner client.
Figure 4-1 Generic OLAP Miner, OLAP Server and Hybrid Analysis architecture
OLAP Miner can ultimately save information users much of the time typically
spent “just browsing” by letting them go directly to the results when needed. It
may even highlight areas that have been overlooked by business analysts.
So let us move on and investigate what deviation detection actually is. We will
show when you can gain valuable information from using OLAP Miner, and
explain how to further interpret, investigate, and deploy your mining results.
You may recognize the normal distribution graphically as the familiar bell-shaped
graph shown in Figure 4-2. The standard deviation tells you how tightly the
various observations are clustered around the mean in a dataset.
OLAP Miner deviation detection looks through your data in your multidimensional
application for atypical or deviant values. Instead of trying to discover values in
your data that might be higher or lower than expected through comparing data
with slicing and pivoting, OLAP Miner does this automatically.
OLAP Miner is not built for hypothesis support or trend analysis, but it can
support and confirm if you have a hypothesis that some of your stores, for
example, are performing significantly lower or higher than usual. OLAP Miner can
also be used as starting point for further trend analysis, if there is a result of
significantly lower performance on some stores compared to others.
In Example 4-1 the definitions for the subcube are shown. We can see that the
subcube mined has 3 dimensions, 140 members, and 3 hierarchies. This is also
illustrated in Figure 4-3.
Collecting the data from the cube for the mining process creates the subcube, as
shown in Figure 4-3. The deviation detection definition is similar to an OLAP
outline, and the mined subcube is similar to an OLAP database; but they are
created by OLAP Miner, not by the OLAP Server. Figure 4-3 shows how the
deviation detection definition created in the OLAP Miner client mines the specific
defined area, called a subcube. After the deviation detection definition has been
created, OLAP Miner applies the deviation detection algorithm to the subcube.
Figure 4-4 OLAP Miner calculates an expected value for each cell in the subcube
The characteristics of deviation detection are that, as the context changes — that
being the observations — the deviation will change. For example, if one product
sells higher than expected one week and therefore is shown as a deviation, it
may be shown with a lower magnitude the following week, or it may not even be
part of the highlighted deviations. As you can see, deviation detection requires a
lot of observations, and deviation detection on a few hundred observations is not
as reliable as applying OLAP Miner on tens of thousands or even millions of
observations. In other words, deviation detection is powered by data.
Dimensions in application:      9    11    11    11
Dimensions in mined subcube:    3     3
Members:                      140
The comparison of the E-banking application and the first Online Investment
application is significant because they are mined equally fast. But you can notice
that while E-banking is without missing values, Online Investment has a number
of missing values, even though relatively low compared to the total cube size.
For better performance, the retrieval buffer of the OLAP application you are about
to mine should be set to 2 KB. This is done by choosing the application and
database for which you want to apply this setting, and then from the menu bar, by
selecting Database —> Settings in Application Manager.
Figure 4-5 2 Kilobytes set for the E-bank application in Application Manager
When mining the created subcube, additional space is needed. Make sure that
AIX is enabled for large files, and increase your temporary file space if
necessary by changing the location of all data files and temporary files.
You do this in OMServer.cfg with the data_dir setting (see Example 4-2).
Figure 4-6 OLAP Mining either just the member or the member and descendants
Important: All the data for mining needs to be in the multidimensional part of
the application.
We have created two scenarios on behalf of the E-bank cube to get different
views on the data in the E-banking application, which contains information about
the bank’s customers’ use of online banking. (For a further description, see 2.2.1,
“E-banking case study” on page 28). In Figure 4-7 you can see the outline of the
E-bank application used in the mining examples.
Before we start, it is important to note that our measure is the Facility dimension.
This means that if we make a deviation definition in our OLAP Mining Wizard, but
leave out the Facility dimension or do not choose a single member from this
dimension, we will get no results returned.
In the Deviation Detection Wizard (Figure 4-9) we are allowed to specify a region
of the cube to be mined, thus producing the subcube mentioned earlier. This is
done with two types of selection:
Member only: This selects only the member, and not its descendants.
Member and Descendants: This selects the member from the level you
choose, and its descendants.
To make the selection of the members to mine, you mark the member on the left
side, expanding the tree by clicking the +, and then you choose one of the two
types of selection: “Member only” or “Member and descendants”. The selected
member now appears on the right side with an icon next to it showing what type
of selection that was made. Selected members can be removed from the right by
clicking the Remove button.
In this case we have initially chosen to mine the member “1. quarter of 2002” with
descendants, “Transactions” as member only, “PC-banking” as member and
descendants, and finally “Organizational unit” as member with descendants. We
move on to the next window and choose to view only the first top 10 results and
run the deviation detection right away.
In Figure 4-10 we can see the result set from our first mining run on the E-bank
application. By clicking the magnitude, we have selected and sorted the 10
results according to how great or small the difference between the expected
value and the actual value is.
In order to better understand the magnitude, let us take a look at Figure 4-11.
The first results show branch DEPT2187 in January 2002 had 121 transactions,
which in this case was expected to be higher, with a magnitude of 11. The same
branch is also represented as the second result, but for March 2002, with 500
transactions, which was expected to be lower, with a magnitude of 9.
After saving and running your mining session, the deviation definition appears as
a branch underneath the database (cube). The results from the analysis appear
below the deviation detection definition in the tree view as seen in Figure 4-10.
If we look into the results from this first scenario, where we mined for the
deviation for PC-banking customers in the first quarter of 2002, you will notice
that the two first results are from the same branch in the bank. But where the first
result is shown to be expected higher than it actually is, the second result is
shown to be expected lower.
Other deviations that might have been detected by OLAP Miner, but were not
selected, are also shown in red, but without the border. You now have the
possibility to investigate further by clicking the Spreadsheet button, and opening
the spreadsheet associated with either Excel or Lotus 1-2-3 as shown in
Figure 4-14.
Here it becomes obvious that Branch AFD2187 has performed below the
average. Notice that even though Branch AFD2187 has a low number of
transactions, two other branches have the same low numbers. However, their
deviations were not as significant as for Branch AFD2187, and therefore they are
not highlighted in the deviation viewer. For an explanation of how to associate
this with your favorite spreadsheet, see 4.6.2, “OLAP Miner client configuration”
on page 113.
When OLAP Miner decides whether a cell is exceptional or not, the calculation is
based on the standard deviation. This means that the surrounding cells that are
shown by the deviation viewer are used to calculate it.
We now select “Transactions” as member only, and the rest as member and
descendants: fourth quarter 2001, Netbanking, Equity trading, Organizational
unit.
The final step of the OLAP Miner Wizard shows a summary of members
selected. It also allows you to set the sensitivity of the detection algorithm
through the number of maximum deviations, as well as saving and running the
definition.
After running our deviation detection, we get the following results (Figure 4-16). If
we sort the results by magnitude, we can see that branch DEPT1705 occupies
the first four positions, with respective magnitudes of 13 for December and 9 for
November on both Netbanking and Equity Trading.
To look further into these results we will need to open the Deviation Viewer as
shown in Figure 4-17. Doing this gives us three different views of our results,
shown as tabs below the first result set:
1. Organizational Unit and Time
Here we view the Branch compared to the months in last quarter of 2001, and
the number of transactions covering both Net-banking and Equity trading.
Branch DEPT1705 is highlighted because of the high deviant score, where
the result was expected to be lower.
2. Facility and Time
This view shows us how Net-banking (F09) and Equity trading (F10)
transactions are distributed over the last three months of 2001. While
Net-banking grew significantly in December, Equity trading had no
transactions at all. This indicates other root causes, such as failed data delivery,
or that this transaction segment has been removed from the application.
3. Organizational unit and Facility
Here we view the specific Branches broken down by number of transactions
for both Net-banking and Equity trading. Branch DEPT1705 stands out again
by having no transactions on Equity trading at all.
Figure 4-18 shows how the client and the server communicate, and displays the
result sets through Excel or Lotus 1-2-3 spreadsheets.
Figure 4-18 OLAP Miner Server and client communication with DB2 OLAP Server
The step numbering in Figure 4-18 illustrates the flow of a mining process, as
follows:
Step 1: You use OLAP Miner either from the OLAP Miner client main
window or through the command-line client (OMRunDef).
From here you log on to the DB2 OLAP Server.
Step 2: The DB2 OLAP Miner client (OMClient) uses the local OMClient.cfg
file and communicates through TCP/IP with the DB2 OLAP Miner server
component (OMServer).
The command line client (OMRunDef) also utilizes OMClient.cfg. It is also
possible for the OMRunDef to overwrite the settings by specifying parameters
directly on the command line.
Step 3 and Step 4: The DB2 OLAP Miner server communicates with the
OLAP Server application on the DB2 OLAP Server through the Essbase API
and verifies the user ID and password.
Running and monitoring your OLAP Miner server and client is fairly simple. The
OLAP Miner server is started with the following command on AIX, which keeps
it running in the background and writes the server activities to the omactiv.log
log:
omserver.sh -b omactiv.log &
To find out how long a mining run lasted on a specific application, subtract the
start time from the end time shown in the .log file. If you have multiple OLAP
applications residing on different OLAP servers, you need to add these to your
OMServer.cfg file as shown in Example 4-2. Here the servers, cayman and sicily,
are added. For further information regarding options for the OMServer.cfg file,
see OLAP Miner User Guide V8.1, SC27-1611-00.
Example 4-2 OMServer.cfg file: client log verbose level set to high
<?xml version="1.0"?>
<OMServerConfig>
<olap_server alias ="cayman" address="cayman.almaden.ibm.com"/>
<olap_server alias ="sicily" address="sicily.almaden.ibm.com"/>
<port value = "1976"/>
<max_connections value = "-1"/>
<max_kernel_runs value = "2"/>
<power_user value = "CALC"/>
<clientlog path="/tmp/omactiv.log" max_size="1000000" verbose="HIGH"/>
<data_dir path="/olapfiles"/>
</OMServerConfig>
If you need to stop your OLAP Miner server on AIX, use the kill command
(find the running process with the AIX command jobs, or by using ps -ef | grep
OMServer.jar).
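For example, a minimal sequence (the process ID shown is illustrative):
ps -ef | grep OMServer.jar     # find the OLAP Miner server process
kill 21034                     # replace 21034 with the PID reported above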
The sync olap command, used from the OLAP Miner Command Console, is the
one command that is not self-explanatory. This command synchronizes the
OLAP Miner files with the current DB2 OLAP Server that is identified by
olap_alias. The user ID and password must be for a supervisor user for the DB2
OLAP Server that is identified by an olap_alias.
You can use the sync olap command when applications or databases are
deleted from the DB2 OLAP server system. The command ensures that all OLAP
Miner files have a corresponding application and database combination on the
DB2 OLAP Server system. It deletes any OLAP Miner files that do not have a
corresponding application and database combination. The log from running the
sync olap command is shown in Example 4-3.
Important: You can use the shutdown command from the console, but be
aware that if used outside the OLAP Miner Command Console, it can shut
down your AIX system.
Another way of performing deviation detection is by running the definitions you
already created from the command line, using OMRunDef. Be aware that if the
name of the application to run contains blanks, you need to use quotation marks
around the name, for example, “home banking”.
OMRunDef gives you the advantage of creating a batch file for invoking multiple
OLAP mining runs, for example when the cube has been updated with new data.
This also gives you the possibility to start multiple mining runs from within the
process flow of an ETL tool, like IBM DB2 Warehouse Manager, Ascential
DataStage or ETI Extract.
OLAP Miner stores the files for each deviation definition under the following path:
$ARBORPATH/om_data/svralias/ApplName/DatabaseName/OMDeviationName
Example 4-6 Log file describing the subcube that was mined
Cube Name
/olap81/om_data/cayman/e-bank/e-bank/PC-banking/3312810277/par_27115
Number of dimension 2
Dimension 0 141
Hierarchy 0 number of levels 3
Level 0 number of members 413
Level 1 number of members 8
Dimension 1 429
Hierarchy 0 number of levels 4
Level 0 number of members 36
Level 1 number of members 12
Level 2 number of members 3
Figure 4-20 OLAP Miner XML files shown in the Web browser
xml.vfy: XML verify files. Used temporarily for verifying XML file.
.err : The .err files are error files from the OLAP mining process, and will
contain error messages if the mining run failed.
yyyy-mm-dd-hh-mm.res: The name of the deviation detection result file,
concatenated by full date down to minutes.
This feature makes it possible to deliver large-scale, Internet-based DB2 OLAP
Server applications to large user communities, providing high availability, high
concurrency, and optimum query response time in enterprise-level environments.
We also describe how to install, configure, and start using Enterprise Services.
This information is presented in a step-by-step format for a better and more
detailed comprehension.
The architecture figure shows the Enterprise Services Console (Java client),
Essbase Spreadsheet Services, Analyzer V6.1.1, and custom Java clients
connecting to Enterprise Services through the Essbase Java API over HTTP,
RMI, TCP/IP, or IIOP, with the OLAP Metadata Catalog and the domain storage
held in an RDBMS accessed through ODBC or native drivers.
Enterprise Services is delivered with its own sample Java client programs to
provide a fast start and a development environment, and you can also develop
your own custom client programs.
For more information about the supported protocols, see 5.2, “Enterprise
Services configurations” on page 124.
You must have DB2 OLAP Server cubes up and running and connected with
Enterprise Services. In the following, we use the term OLAP JAPI for any
kind of client access. To access information in the OLAP cubes through
Enterprise Services, you can use Essbase Spreadsheet Services, any client
application supporting the new Java API, or custom developed Java API
programs.
Although the flat file domain storage is the default configuration, it is only
recommended for development and testing purposes. For a production
environment, you must configure a relational database such as DB2 UDB,
Oracle, or SQL Server to be used as domain storage.
For the examples and the case study used in this book to illustrate the
functioning of Enterprise Services, we use DB2 UDB v7.2.
These components work together with the DB2 OLAP Servers and Enterprise
Services client programs to provide the functionality of the Enterprise Services
product.
The figure shows the supported client configurations: HTTP clients, EJB clients,
and CORBA clients, each running in a Java application server, connect to the
Enterprise Services server through the Java API, which in turn connects to the
DB2 OLAP Servers and to the RDBMS used for domain storage.
In the following sections you will find an overview of each of these supported
technologies, as well as some of their advantages and disadvantages.
Enterprise Services provides the TCP/IP configuration by default, and the sample
programs are based on this technology. These sample programs can be a useful
starting point to simulate a client scenario, as well as to build a testing
environment and start using Enterprise Services.
Client programs that use this configuration usually are Java applets running in a
Web browser, as shown in Figure 5-5. However, the client program can also run
as a stand-alone Java program or servlet.
If the client program uses HTTP, Enterprise Services must run as a Java servlet
in a Java application server, which is not included with the product.
Essbase Spreadsheet Services, one of the end user tools you can use to access
information through Enterprise Services, connects to Enterprise Services Server
through HTTP. Figure 5-6 shows the architecture for Essbase Spreadsheet
Services.
If you choose to use EJB technology, both client programs and Enterprise
Services must run as EJBs within a Java application server as shown in
Figure 5-7.
When you choose this kind of technology, you must develop the end user
programs that will have access to the information through Enterprise Services
and any other application that you want to run against the ES server.
CORBA client programs connecting with Enterprise Services usually provide the
fastest performance, depending on the size and complexity of the client program
logic. In these kinds of implementations, if the client program and the Enterprise
Services server are separated by a security firewall, the client program must use
tunneling protocols to communicate with Enterprise Services Servers. These
client programs must be developed in order to work with CORBA and Enterprise
Services.
5.3.2 Clustering
An Enterprise Services cluster is a logical definition for a group of DB2 OLAP
Servers running copies, or replicas, of the same DB2 OLAP Server application
and database to serve all user requests as a single logical unit.
Note: A cluster requires a full (identical) copy of the same DB2 OLAP Server
application and database. In the following, we use the terms copy and replica
interchangeably.
Restrictions:
In this tested version of Enterprise Services, a cluster only supports DB2
OLAP Server Version 8.1 (Hyperion Essbase 6.5).
Enterprise Services does not support multiple installations of DB2 OLAP
Server on the same machine.
Enterprise Services only supports read access to DB2 OLAP Server
applications and databases.
The main benefits of implementing a cluster of DB2 OLAP Server databases are:
Linear scalability by adding new physical resources to the cluster:
– The scalability of the solution supports linear growth. As the user
community grows or becomes more active (the workload increases), you
just need to add more DB2 OLAP Servers to the cluster to improve
response time.
Workload balancing by distributing the load across multiple physical
resources:
– Distributing the workload across multiple servers provides consistent
response time for user requests and enables you to implement large,
enterprise-scale OLAP solutions over the Web.
Elimination of a single point of failure in the system:
– If a failure occurs in one application and database in the cluster, Enterprise
Services automatically tries to restart the application and database.
– If an application and database is disabled by the OLAP administrator or by
a system failure, Enterprise Services redirects the users’ requests to other
active databases in the cluster.
– If a failure occurs in one of the DB2 OLAP Server installations in the
cluster, the system is able to continue processing by redirecting user
requests to the other DB2 OLAP Servers in the cluster.
Figure 5-9 shows a basic sample scenario of DB2 OLAP Server cluster on two
different AIX servers (cayman and sicily).
Figure 5-9 Clustering sample scenario: single OLAP database per server
Notice that:
The OLAP application e-Bank and database e-Bankdb are defined on both
servers (Cayman and Sicily), and they are identical (they contain the same
information).
In general, for a basic cluster implementation, you have only one copy of the
same application and database within the same DB2 OLAP server installation.
In some cases, where resources (memory, CPU) are available, you might
decide to have multiple copies of the same application and database within the
same DB2 OLAP Server installations that are part of the cluster (see
Figure 5-10). The outline must be the same (hierarchies, members, dimensions),
but the application/database name can be different.
Figure 5-10 Clustering sample scenario: Multiple databases copies per server
When making copies of the same OLAP application and database within the
same DB2 OLAP Server installation, you need to balance the benefits against
the administration costs of having multiple copies of the same application and
database.
The benefit of having copies of the same OLAP application and database within
the same DB2 OLAP Server installation can be a better utilization of memory and
CPU on a large SMP server.
Enterprise Services uses a connection pool, a set of login sessions opened to
DB2 OLAP Server, to process user requests (data retrievals) against a specific
DB2 OLAP Server application and database.
When Enterprise Services receives a user request to retrieve data from an OLAP
application and database, it tries to reuse existing connections that are already
open to DB2 OLAP Server. New connections are only opened when all existing
connections are in use.
To take advantage of the connection pool, you need to implement a 3-tier
environment, installing Enterprise Services to act as an application server for all
client requests to DB2 OLAP Server. The client application should be developed
using the Java API provided by Enterprise Services.
Figure 5-11 shows a basic sample scenario for a connection pool configuration.
In this example, we have two connection pool definitions:
1. The connection pool e-BankConnPool has two OLAP connections open to
DB2 OLAP Server on Cayman for the application e-Bank and database
e-bankdb.
2. The connection pool OnlineIConnPool has two OLAP connections open to
DB2 OLAP Server on Sicily for the application OnlineI and database
Onlinedb.
To take full advantage of all the features provided by Enterprise Services via a
connection pool and cluster, you need to define a cluster in your connection pool.
Figure 5-12 shows a sample scenario of a cluster and connection pool.
When combining connection pooling and clustering, you will have all the benefits
from connection pooling and from clustering, such as improved availability,
consistent query response time, workload balancing, support for application and
server failover, and optimization of the DB2 OLAP connections.
In the sample scenario for a cluster and connection pool (see Figure 5-12), there
are three identical copies of the OLAP application e-Bank and database
e-BankDB: the server Cayman has one copy of this cube, and the server Sicily
has two copies of the same cube.
Also in this scenario (see Figure 5-12), two OLAP sessions are shown from the
Enterprise Services server to each of the OLAP Servers (Sicily and Cayman),
and these connections are shared by all user requests made through the Java
API to retrieve data from the application e-Bank and database e-BankDB.
The workload (client requests) will be distributed to both servers (Sicily and
Cayman) and to all DB2 OLAP Server applications and databases that are
defined on the cluster (for example, Cayman/e-Bank/e-BankDB,
Sicily/e-Bank/e-BankDB, Sicily/e-Bank1/e-BankDB).
Enterprise Services receives all the client requests to access one specific DB2
OLAP application and database and distributes them equally across all DB2
OLAP Servers that belong to the connected cluster and that are running copies
of the requested application and database.
Enterprise Services distributes the client requests to the servers in the sequence
that each DB2 OLAP Server, Application, and Database is defined in the cluster
property definition Service component names. The reference for one specific
DB2 OLAP Server, Application, and Database can be defined more than once on
the same cluster. For more detailed information on the definition of Service
component names cluster properties, see 5.4.7, “Creating clusters” on
page 195.
Figure 5-13 shows a sample scenario for load balance. In this scenario, there is a
connection pool (e-BankConnPool) defined to a cluster (e-BankCluster). The
connection pool e-BankConnPool is defined with a minimum of 2 and a
maximum of 4 connections. When Enterprise Services activates this connection
pool, two sessions are established to the OLAP Servers (Sicily and Cayman).
Figure 5-13 Workload balance: Initial sessions established to DB2 OLAP Servers
The sample scenario defined in Figure 5-13 represents an environment with few
users requesting access to the cluster e-BankCluster. In this scenario, only two
connections (sessions) from the Enterprise Services server to DB2 OLAP Server
on Cayman and Sicily are required to serve those few client requests.
In Figure 5-17, on the log for the console, you can see the number of connections
opened to the cluster e-BankCluster.
To see the number of connections opened on all DB2 OLAP Servers that are part
of the cluster, use the Application Manager (AM), which is the client application
for DB2 OLAP Server. In our sample scenario for a connection pool and cluster,
Figure 5-19 and Figure 5-20 show the initial connections opened for both servers
(Sicily and Cayman).
The additional connections are distributed to all DB2 OLAP Servers that are part
of the cluster, and the distribution is done in the order in which they are defined in
the cluster definition.
Balanced workload
We define balanced workload as a situation where the client requests to access
a specific DB2 OLAP Server application and database are equally distributed to
all DB2 OLAP Servers that are part of the cluster.
In an OLAP Server installation, you can have multiple cluster definitions for the
same DB2 OLAP Server application and database, and also, you can have
different cluster definitions for a different application and database.
In this case, if you want to equally distribute the requests across all servers
defined on all clusters, all the cluster definitions need to have a single definition
for a combination of DB2 OLAP Server, Application, and Database. See the
example shown in Figure 5-22.
Figure 5-22 has a sample definition for the Service component names field of the
cluster definition properties without repetition of DB2 OLAP Server, application
and database. In this example, Enterprise Services distributes all the client
requests equally to both servers (Sicily and Cayman), and the distribution will be
in the following order:
1. Cayman/e-Bank/e-BankDB
2. Sicily/e-Bank/e-BankDB
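Assuming the olapSvrName/appName/CubeName format used for the Service
component names property (see 5.4.7, “Creating clusters” on page 195), this
balanced definition corresponds to an entry such as:
Cayman/e-Bank/e-BankDB;Sicily/e-Bank/e-BankDB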
Unbalanced workload
To give a different workload to a specific OLAP Server, the definition for a
combination of DB2 OLAP Server, Application, and Database needs to appear
repeated in the cluster properties definition Service component names.
See the example in Figure 5-23.
In the example shown in Figure 5-23, the server Sicily will receive three times
more requests, or workload, than the server Cayman, because in the cluster
definition, the application e-Bank and the database e-BankDB appear three
times for the server Sicily and only once for the server Cayman.
In the example shown in Figure 5-23, Enterprise Services distributes the client
requests unequally between the two servers (Sicily and Cayman), and the
distribution will be in the following order:
1. Cayman/e-Bank/e-BankDB
2. Sicily/e-Bank/e-BankDB
3. Sicily/e-Bank/e-BankDB
4. Sicily/e-Bank/e-BankDB
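Using the same format, this unbalanced definition corresponds to an entry such
as:
Cayman/e-Bank/e-BankDB;Sicily/e-Bank/e-BankDB;Sicily/e-Bank/e-BankDB;Sicily/e-Bank/e-BankDB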
Another way to give more workload to a server in a cluster configuration is by
making a copy of the same application on one of the servers. In the example in
Figure 5-24, the server Sicily has two OLAP applications and databases
(e-Bank/e-BankDB and e-Bank1/e-BankDB).
In the sample scenario shown in Figure 5-24, the Server Sicily can receive twice
as many requests as the Server Cayman. The cluster definition for this scenario
is shown in Figure 5-25.
To invoke the runtest command shell, you need to follow these steps:
Start the Command Shell Console.
Signon to the Enterprise Services Server, for example, signon system,
password.
Perform runtest numIteration, numThreads, orbType, port, prefHost,
olapSvr, user, password, useConnPool, connPerOp, useCluster,
readOnly.
Where:
numIteration = Number of client requests to be issued
numThreads = Number of threads for each iteration
orbType = Protocol (such as tcpip or http)
port = TCP/IP port configured for Enterprise Services; the default is 5001
prefHost = Enterprise Services server host name
olapSvr = The DB2 OLAP Server
user = Enterprise Services user
password = Enterprise Services user password
useConnPool = Specify true to use the connection pool
connPerOp = Specify true to use a connection pool per operation
useCluster = Specify true to use the cluster
readOnly = Specify true for read-only operation
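For example, following the parameter order above, a test of 100 requests on 4
threads over TCP/IP, using the connection pool and the cluster in read-only
mode, could be issued as follows; the host name, user, and password are
placeholders for your own environment:
signon system, password
runtest 100, 4, tcpip, 5001, eshost.almaden.ibm.com, cayman, db2user, db2pwd, true, false, true, true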
Note: This feature is currently not supported for outline operations and cube
view operations, and is supported for all the rest of the functionality. For
information on JAPI operations (as cube view, outline and other functions),
see the Enterprise Services online documentation, Essbase JAPI Reference.
Figure 5-26 shows a sample scenario with a connection pool defined to support
a maximum of four connections to a cluster of two DB2 OLAP Servers (Sicily and
Cayman). In this scenario, the connection pool named e-BankConnPool opens
at most four connections via the cluster e-BankCluster, which contains two DB2
OLAP Servers on AIX (Cayman and Sicily).
A sequence of figures illustrates failover in this e-BankCluster scenario: an
application failure on the Sicily server, the automatic re-start of that application,
a failure of the Sicily server itself, and finally the re-routing of requests.
Notice that all the connections were automatically re-routed to the Cayman
Server because the Sicily Server is not available.
A further figure shows the overall access architecture: the Excel add-in with VBA
reaches the middle-tier Enterprise Services server 6.5 over HTTP and SOAP
through the Spreadsheet Web Services; applications developed with the Java
API connect over TCP/IP, HTTP, IIOP, or RMI; Analyzer Version 6.1.1 is shown as
a separate client; and tools such as Analyzer 5.0, the Excel and 1-2-3 add-ins,
Brio, or BusinessObjects can still access DB2 OLAP Server 8.1 on AIX directly.
Restriction: When using a connection pool, the access to the DB2 OLAP
Server application and database is done by a single user (the user who
opened the connection to DB2 OLAP Server). This means that on all DB2 OLAP
Servers that are part of the cluster, you only need to define a single user with
read access to the OLAP applications.
5. Run your client application or create your own client application using the
DB2 OLAP Server Java API.
To copy applications and databases using the graphical console, execute the
following steps (see example in Figure 5-31):
Select Start —> Programs —> Essbase Enterprise Services Console.
Signon to your Enterprise Services Domain.
Expand the OLAP Servers tree.
Select the Source DB2 OLAP Server Application and Database.
Right-click the Source Application
Select Copy
Type the Destination Application Name and Destination Server.
Press Enter.
You can create your own programs to make copies of the OLAP applications and
database, or you can use the supplied sample programs from Enterprise
Services. These sample Java programs are installed automatically with the
Enterprise Services installation in the sample folder under the Enterprise
Services installation directory.
Figure 5-32 Copy OLAP application across servers via Command Shell
There are several ways to synchronize data on DB2 OLAP Server. You can
choose to use one of the following approaches:
Use the Partitioning feature of DB2 OLAP Server to create replicated
partitions of the served application on each DB2 OLAP Server:
– DB2 OLAP Server Partitioning feature: This enables you to create
replicated partitions of a database over multiple DB2 OLAP Servers. By
creating partitions that mirror each of the servers defined in a cluster, you
can synchronize data across multiple DB2 OLAP Servers.
For information on how to implement DB2 OLAP Server Partitions, see the
DB2 OLAP Server Database Administrator’s Guide.
Use DB2 OLAP Integration Server to load data into the cube on each server:
– DB2 OLAP Integration Server: This enables you to load data into DB2
OLAP Serves databases quickly and easily. To synchronize DB2 OLAP
Servers databases using DB2 OLAP Integration Server, load and update
your master application and database on the cluster, and also use it to
load and update the other mirror application and database for the other
DB2 OLAP Server installations in the cluster.
Export data from the master application and database and load it into each
application and database on all DB2 OLAP Server installations in the cluster.
To synchronize data using export and import functions, you can use one of
the following methods:
– Export and Import features: DB2 OLAP Server provides a basic solution
for synchronizing data between servers via data export and import
features. You can export the data from a master application and database
and import into the other servers in the cluster. For information on how to
export and import data, see the DB2 OLAP Server Database
Administrator’s Guide.
– Enterprise Services Java API: Enterprise Services provides data
synchronization via the Command Shell. This command uses the JAPI
method IEssDomain.SyncCubeReplicas().
– In the Enterprise Services installation directory, there is a sample
program file named SyncCubeReplicas.java, supplied with the Enterprise
Services installation.
If you have a large number of users to migrate, you might consider using a
command shell supplied with Enterprise Services that can load user names and
passwords from one DB2 OLAP Server and push them onto another DB2 OLAP
Server.
Note: When using connection pool and cluster, you only need to define — on
each DB2 OLAP Server installation that is part of the cluster — a single user
with read access to all applications and databases specified on the cluster.
Another way to keep users and passwords synchronized on all DB2 OLAP
Servers in a cluster is by creating a script, using MaxL or ESSCMD, that defines
the new users and passwords, and running this script on all DB2 OLAP Servers
that belong to the Enterprise Services cluster.
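As a minimal MaxL sketch (the user name, password, and application and
database names are illustrative; check the MaxL documentation for the complete
grammar), such a script could contain:
create user db2user identified by 'password1';
grant read on database 'e-Bank'.'e-BankDB' to db2user;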
In this description, we use a TCP/IP environment that is one of the most common
architectures when working inside enterprise firewalls. We also use the sample
Java programs that come with Enterprise Services to demonstrate connection
pooling, clustering, and request workload balancing. These programs are
designed to provide a fast start for using and testing Enterprise Services.
Set up the domain storage    Create a relational database            5.4.2, “Setting up the domain
                                                                     storage” on page 171
Start Enterprise Services    Start the Enterprise Services server    “Starting and stopping the
                                                                     Enterprise Services server” on
                                                                     page 176
                             – Create groups
Using the Sample Programs    Configure the runsamples script file    “Configuring the runsamples
                                                                     script file” on page 209
Also, during the installation process, check the tips described in the following
sections for Windows and AIX, respectively.
Note: If you don’t have Java, you may download a free copy from:
http://java.sun.com
You install Enterprise Services server software on UNIX platforms by running the
self-extracting installation program located on the DB2 OLAP Server Enterprise
Services CD-ROM. This program requires a system with the Java Runtime
Environment installed and graphical windowing software such as X-Windows or
Motif.
To install Enterprise Services, use the root user. Make sure that you are the
owner and that you have execute permission on the installer program. If you are
not the owner or do not have execute permission on the file, you will not
be able to install Enterprise Services. Change the ownership and permissions by
executing the chown and chmod commands in AIX.
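For example (the installer file name shown is an assumption; substitute the name
of the self-extracting program on your CD-ROM):
chown root setup_es_aix.bin     # make root the owner of the installer
chmod u+x setup_es_aix.bin      # give the owner execute permission
./setup_es_aix.bin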
Then follow the prompts and provide any information requested. For more
information, refer to the Enterprise Services Installation Guide.
Note: The installation program creates some files in the tmp directory. Make
sure you have enough space in this directory. Otherwise, the installation
process finishes with an error.
Just as with Windows systems, you will need to choose whether you want to
install the client and server on the same machine, or only the client.
Choose the best option for your environment, and continue with the installation.
Once the installation is finished, you can start configuring Enterprise Services
domain storage.
Although you can use other relational databases as a domain storage, in the
following examples we use DB2 UDB v7.2.
Optionally, you may have to update other variables in the essbase.properties file.
Figure 5-35 shows a couple of variables that may be of interest: the location
service variable and the logging variable, both located in the essbase.properties
file.
The Location Service variable allows this Enterprise Services server (and other
ES servers on the local intranet) to be located and listed by the Enterprise
Services Console. To enable this function, set the variable to true.
service.location.start=true
If you need more information concerning the essbase.properties file, refer to the
Essbase Enterprise Services Properties File Documentation.
Once the server has been started, it will appear in a command window like the
one shown in Figure 5-38. Leave it open.
2. To stop Enterprise Services server, click the command window and press
Enter. At the prompt, type exit and press Enter.
Attention: You can have more than one domain created in your environment,
but only one domain can be a root domain, which means that it cannot be
contained within another domain. The root domain is created automatically
with Enterprise Services installation, and the default name is essbase.
2. To see all the commands available in the Command Shell, type help. It is
possible to issue the commands by typing the command or the number
associated with it. See Figure 5-42 for a detailed list of all commands
available in Enterprise Services Shell.
3. Start the Command Shell by going to Start -> Programs -> IBM DB2 OLAP
Server 8.1 -> Essbase Enterprise Services -> Enterprise Services Shell.
a. Type signon or type the number 4.
b. Provide the information to the signon command.
4. To stop the command shell:
a. At the prompt, type exit or quit, or type the number 5.
b. Press Enter.
Note: When you start the Enterprise Services Server, the window displayed is
the Command Shell itself. But you can also start the Command Shell as a
program separate from the Enterprise Services server. Just follow the steps
explained above.
3. When the Please enter a subdomain name screen appears, enter a name
for the domain you are creating, for example, db2domain.
4. Click OK.
Now, the new domain db2domain is created. Figure 5-44 reflects the change.
Look on the Properties tab in the workspace area, in the right part of the screen.
The new domain has no users, groups, OLAP servers, Enterprise Services
server or domains yet within it. As you start creating objects in this
domain, you will see these parameters updated.
These objects will be created in the steps described later in this chapter. The
objects in the Navigation Panel, on the left side of the screen, are objects that
belong to the essbase domain.
Although the objects created in this chapter belong to the root domain essbase,
you can create them in the domain just added or in a new domain.
In order for your users to use Enterprise Services clustering and connection
pooling, these users must be defined in Enterprise Services server.
These users do not necessarily have access to DB2 OLAP Server servers. In
order to have access to the OLAP server from the console and control it, these
users must be defined in DB2 OLAP Server. Users have access to the
applications in Enterprise Services based on the rights that they have on DB2
OLAP Server.
Creating users
There are two options for creating users:
From the Graphical Console
From the Enterprise Services Command Shell
3. Enter the name of the user you want to create, for example, db2user
4. Click OK.
The user db2user is created and you can see it by clicking on the + sign next to
Users in the Accounts folder.
Add as many users as you want, following the same procedure. Figure 5-46
shows the new users in the graphical console.
For more information about the commands you can perform, see the Installation
Guide.
Now, the console shows the group recently created. If you right-click group_OLAP,
you will see the options for the group, such as Manage Group. This option
enables a panel to add or remove users from the group.
You can also add users to a group by dragging and dropping the user name you
want to move, onto the selected group.
Now, in the Navigation Panel, the group has its own users.
Figure 5-50 shows the new Enterprise Server. Check to see that the server does
not yet have a connection pool or a cluster defined. Later in this chapter, the
creation of connection pools and clusters is explained. After their creation,
those objects must be added on the Enterprise Server Properties tab.
If you want to change any of the Enterprise Server settings, such as the
description, just click the line to change in the Value column and make the
change. Then, click Save and Refresh to reflect the changes in the domain
storage immediately.
There are some changes that are not reflected immediately, like the connection
pool and cluster. In order to take advantage of these definitions you must shut
down Enterprise Services Server and start it up again. For more information, see
5.4.3, “Starting Enterprise Services” on page 175 and 5.4.8, “Creating
connection pools” on page 199.
To define a DB2 OLAP Server, use one of the following options:
Add an OLAP Server from the graphical console.
Add an OLAP Server from the command shell.
Now the OLAP Server is added. Click the server and you are connected to it.
Note: In order to get a connection to the OLAP Server that you are defining,
the user name which is used to connect to the Enterprise Services Console
also needs to have access to the OLAP Server.
Click the + sign beside the name of the server. The list shows the different
applications in that server which the user is entitled to see. Continue clicking the +
sign, and the databases, dimensions, and members are listed. Figure 5-51
shows the OLAP cube that has been added to the Navigation Panel.
Once connected, Enterprise Services lets you see the configuration parameters
for that application in the Workspace area.
Then, if you click the + sign for the database, the outline appears. Now it is
possible to look at the dimensions in that outline.
If you want to analyze information stored in that cube, right-click the name of the
database or cube and create a cube view. A cube view allows you to analyze
information from the cube, depending on the rights the user has. Once the cube
view has been created, you can analyze information by double-clicking the
dimensions or by using the menu bar.
Now, if you look at the graphical console, it shows the new OLAP server.
There are several commands related to DB2 OLAP Server. See Table 5-4.
This functionality of Enterprise Services must be used through the Java client
programs you develop. If the client programs do not take advantage of the
clustering and connection pooling functionality, you are not using the main
functions of Enterprise Services. You must specify in your client programs which
cluster or connection pool to use.
Now the cluster has been added to the Console Navigation Panel:
1. Click the name of the cluster. The Properties for that cluster will appear. You
need to modify the Service Components Name to declare the cubes that will
be part of the cluster. See Figure 5-54.
2. Click the box next to Service Components Names and add the cubes to be
part of the cluster in the format olapSvrName/appName/CubeName, for
example, Cayman/e-Bank/e-BankDB;Sicily/e-Bank/e-BankDB;
Sicily/e-Bank1/e-BankDB. Complete this box with every application that you
need to participate in the cluster, separating the entries with a semicolon (;). Figure 5-55 shows
these definitions.
Figure 5-55 shows the cluster enabled in the Enterprise Services server.
After creating the cluster, you must enable it for Enterprise Service to use it. To
enable it, go to the console, and follow the steps explained in the previous
section, “Creating a cluster through the graphical console” on page 195.
There are other commands related to the creation of cluster in the Command
Shell. See Table 5-5.
In this part of the procedure, you can add a cluster to be used with the
connection pool, although this is not required. For our example, we will
use the cluster created in the previous section. There are certain parameters
that need to be defined if you are going to use a cluster.
1. To include a cluster to be used by a connection pool, define these
parameters:
– Service Component Name: with this parameter, you specify a cluster
previously defined to be used in the connection pool, for example,
e-BankCluster.
– Service Component is Cluster: check this box to enable the connection pool
to use the cluster.
– User name: this is a user that the connection pool uses to create the
connections to the DB2 OLAP Server. This user must have access to
OLAP Server. The rights of this user are the rights that all users will have
when using this connection pool, for example, system.
– Password: password of the user, in this case password
– Initial capacity: connections opened initially, for example, 2.
– Maximum capacity: maximum connections allowed to open, for example,
4.
– Capacity increment: incremental value used to open connections. This
value varies between the initial capacity and the maximum capacity. For
example, 1
– Allow everyone: by enabling this variable every user defined in Enterprise
Services will have rights to use the connection pool.
– Allowed users: instead of allowing everyone, you can specify the users
that can use the connection pool
– Allowed groups: instead of specifying users you can specify the groups of
users that can use the connection pool.
Figure 5-58 shows the parameters needed to define a connection pool that uses
a cluster.
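As a quick reference, the connection pool in this example would be filled in with
values along the following lines; all of the values are the ones listed above, so
adjust them to your own cluster, user, and capacity requirements:
Service Component Name: e-BankCluster
Service Component is Cluster: checked
User name: system
Password: password
Initial capacity: 2
Maximum capacity: 4
Capacity increment: 1
Allow everyone: checked (or list specific allowed users or groups instead)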
Once you have created and configured the connection pool, it must be declared
or enabled in the Enterprise Services server.
1. Click the Enterprise Services server and add the connection pool in the
Enabled connect pool names box. See Figure 5-60 for more details.
2. Click Save.
The connection pool is created. Now, client Java programs can make use of the
connection pool whether it uses the cluster or not.
The commands related to connection pools that you can execute from the
Command Shell are shown in Table 5-6.
The runsamples program can only run one Java program at a time. It is
necessary to modify the program name in the runsamples file or to create
another script to execute the program you want to test.
1. Edit the runsamples file.
2. Look for the sentence that begins with “echo step 2” and change the name
of the program to be executed, in the two lines as shown below, for example:
echo Step-2: Ready to run CreateConnPoolAndCluster example ...
pause
%JAVA_HOME%\bin\java com.essbase.samples.CreateConnPoolAndCluster %USER%
%PASSWORD% %DOMAIN% %EES_SERVER% %ORB% %PORT% %OLAP_SERVER%
3. Save the file.
Follow the same procedure to change these lines if you want to run a different
sample program.
In Figure 5-61, you can see that the two connections defined in the connection
pool have been opened and that they are using the defined cluster. Go to DB2
OLAP Server, and check the connections opened to the cubes. There must be at
least one connection to each cube defined in the cluster.
Try modifying the parameters for the connection pool to open a minimum of 9
connections, and check DB2 OLAP Server again. DB2 OLAP
Server shows the connections for the e-Bank and e-Bank1 applications in the
Sicily server. As shown in Figure 5-62, each application has three connections,
and the rest have been made to the e-Bank application in Cayman.
For more information about the available classes and methods in the DB2 OLAP
Server JAPI, see the online DB2 OLAP Server JAPI Reference in the Enterprise
Services DOCS directory.
To set up the runsamples script file to work with your environment, follow these
steps:
In the ESS_ES_HOME\samples\japi directory, locate the runsamples script file
(runsamples.cmd on Windows platforms, runsamples.sh on UNIX platforms).
Open the file in a text editor.
Verify that ESS_ES_HOME points to the directory where Enterprise Services is
installed, for example, c:\ibm\db2olap\ees.
Verify that the variable JAVA_HOME points to a supported version of the Java
Runtime Environment. You must update this variable with a full path to the
Java installation, for example:
set JAVA_HOME=C:\ibm\db2olap\ees\jre
For this example, we are using the Java installation that comes with
Enterprise Services.
Replace the variable values for user, password, domain, EES_SERVER, and
OLAP_SERVER as necessary to suit your environment, for example:
set USER=admin
set PASSWORD=password
set DOMAIN=essbase
set EES_SERVER=9.1.151.16
set OLAP_SERVER=sicily
The user must have access to OLAP Server and must be created in the
Enterprise Services server. Figure 5-63 shows the environment variables in
the runsamples file.
Note: If you use the SampleBasic, DemoBasic, and Demo2Basic databases
(Demo2Basic is a copy of Demo), have created a user named "admin" with the
password "password", and have DB2 OLAP Server and Enterprise Services on
the same machine, you do not need to modify the default settings of the sample
client program.
4. If you want to run the DataQuery program using a connection pool, follow
these steps:
a. Uncomment the following line:
cv = dom.openCubeView("Data Query Example", "demoBasicPool", true);
b. Change the second parameter to the name of the connection pool you
want to use, for example:
cv = dom.openCubeView("Data Query Example", "e-bankconnpool", true);
c. Comment out the following line (prefix it with //):
cv = dom.openCubeView("Data Query Example", s_olapSvrName, s_appName,
s_cubeName, true, true, true, true);
Figure 5-65 shows the result of the execution of the DataQuery program.
This program is only one of the many sample programs that come with
Enterprise Services. Table 5-7 shows the most commonly used sample
programs and a description of each.
For more information about the available classes and methods in the DB2 OLAP
Server JAPI, see the online DB2 OLAP Server JAPI Reference in the Enterprise
Services DOCS directory.
This functionality makes it possible to start and stop DB2 OLAP Servers and
Enterprise Services Servers from the graphical console. This task is optional,
but recommended for easier administration from Enterprise Services. For a
complete description of how to configure remote activation, refer to the
Enterprise Services Installation Guide.
This step must be done after the standard installation on Windows NT. Then you
must reboot your machine for the changes made to the Path and Classpath
variables to take effect. This task is optional.
Note: A master cube is the principal cube in the environment. You load and
calculate the master cube, and then you can make full copies of that cube.
For further information, see “Synchronizing data across a cluster” on page 217.
Enterprise Services provides a command for loading OLAP Server users into its
domain storage. For more information see “Synchronizing users and security
across a cluster” on page 217.
The cube view is a temporary view. When you disconnect from the server, you
lose this view. But you can create it again as soon as you reconnect to Enterprise
Services.
Part 4. Advanced administrative functions
In this part of the book, we provide practical examples and detailed guidelines to
implement and use additional advanced administrative functions of DB2 OLAP
Server version 8.1, such as these:
Parallel calculation, load, and export functions
The Administration Services feature
Additional advanced functions:
– Direct I/O
– Security Migration Tool
– Multiple OLAP agents on the same machine
– Custom Defined Functions
DB2 OLAP Server version 8.1 allows these critical operations — calculation,
load, and export — to run in multi-threaded mode.
The parallel functions of DB2 OLAP Server improve performance and reduce the
batch window to build the DB2 OLAP Server cube or to export the DB2 OLAP
Server databases.
Figure 6-1 shows the differences between serial and parallel calculation in a
machine that has two CPUs. In the Parallel calculation example, we are
executing the calculation process using two threads.
Figure 6-1 Serial calculation (only one thread, with the operating system executing task1 while the other tasks wait) versus parallel calculation with CALCPARALLEL 2 or SET CALCPARALLEL 2 (two threads, with the operating system executing task1 and task2 at the same time)
Figure 6-2 represents the four major steps DB2 OLAP Server performs during a
calculate process in parallel.
Figure 6-2 The major steps of a parallel calculation: (1) decide whether parallel mode will improve performance, and if not, execute the calculation in serial mode; (2 and 3) analyze the tasks that can run concurrently and break them into subtasks; (4) the threads execute the subtasks on the available CPUs through the operating system
By default, DB2 OLAP Server uses the last sparse dimension in an outline to
identify the subtasks (tasks that can be performed concurrently). You can
also enable DB2 OLAP Server to use additional sparse dimensions in the
identification of tasks for parallel calculation using the CALCTASKDIMS
essbase.cfg setting (or SET CALCTASKDIMS calculation command). See 6.1.5,
“Identifying concurrent tasks for parallel calculation” on page 234 for more
information.
For this example, let us suppose that the database will be calculated using
only the last sparse dimension in the outline to identify the subtasks
(CALCTASKDIMS 1).
In the Sample.Basic application, the last sparse dimension is “Market”. The
members in the “Market” dimension are grouped into execution levels based
on outline relationships and formulas. Three subtasks are generated in levels:
6.1.3 Requirements
We have explained briefly that DB2 OLAP Server will use serial calculation
instead of parallel calculation if it evaluates that parallel calculation will not
improve performance, or if there are complex formulas that cause
interdependencies in the outline or calculation script. Now we will discuss in
more detail all the requirements to run a parallel calculation.
Before you enable parallel calculation, review this list of requirements. If your
application does not satisfy these requirements, it will use serial calculation
instead of parallel calculation, even though parallel calculation is configured.
Isolation level:
The parallel calculation cannot be executed if you are using the committed
access isolation level. You should set an uncommitted isolation level for the
databases for which you are planning to use parallel calculation.
During a parallel calculation, DB2 OLAP Server automatically checks that the
commit threshold (synchronization point) defers commits until 10 MB of data
has been written, and increases the commit threshold if necessary for the
duration of that calculation pass only.
If you can allocate more than 10 MB extra disk space for calculation, consider
increasing your commit threshold value to a very large number for better
performance.
To set the isolation level of your database to uncommitted mode, use one of
the tools in Table 6-1.
For example, in ESSCMD you can use the SETDBSTATEITEM 18 command.
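As an illustration only, a minimal interactive ESSCMD sequence for this change
might look like the following; the server, user, password, application, and
database names are placeholders, and when SETDBSTATEITEM is issued with just
the item number, interactive ESSCMD prompts for the remaining values (see the
Technical Reference for the full scripted parameter list):
LOGIN "sicily" "admin" "password";
SELECT "onlinea" "onlineid";
SETDBSTATEITEM 18;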
If you try to execute a parallel calculation, using the committed isolation level in
your database, the calculation is executed in serial, and the following message is
generated in your application log file:
(1012677) Calculating in serial
Formulas:
One or more formulas present in a calculation may prevent DB2 OLAP Server
from using parallel calculation. These formula placements are likely to force
serial calculation:
– A formula on a dense member (including all stored members and any
Dynamic Calc members upon which the stored member may be dependent)
that causes a dependency on a member of the dimension used to
identify tasks for parallel calculation.
– A formula contains references to variables declared in a calculation script
using @VAR, @ARRAY, or @XREF.
– A formula on a member causes a circular dependency. For example,
member A has a formula referring to member B, and member B has a
formula referring to member C, and member C has a formula referring to
member A.
– A formula on a dense or sparse member with a dependency on the
member or members from the dimension used to identify tasks for
parallel processing.
If a formula prevents DB2 OLAP Server from executing parallel calculation,
one of the following messages is generated in the application log file:
(1012569) Formula on member memberName forces calculation to execute in
serial mode.
(1012570) A circular or recursive dependency along dimension
dimensionName forces calculation to execute in serial mode.
(1012571) Presence of variables or formulas with @XREF function forces
calculation to execute in serial mode.
These messages can help to determine which formula is forcing the serial
calculation.
If you need to use a formula that might prevent parallel calculation, consider
tagging the relevant member or members as Dynamic Calc, if possible, so that
they are excluded from the calculation pass.
Incremental restructure:
If you have selected incremental restructure for a database and you have
made outline changes that are pending a restructure, do not use parallel
calculation. Unpredictable results may occur. Parallel calculation doesn’t
support incremental restructure.
Calculator hash tables:
During a parallel calculation, the hash tables are not used. You can leave any
of these items in your essbase.cfg configuration file, but they are ignored
during a parallel calculation:
– SET CALCHASHTBL
– CALCOPTCALCHASHTBL
– CALCHASHTBLMEMORY
If you really need to use hash tables in the calculation process, consider using
serial calculation instead of parallel calculation.
In Example 6-3, the DB2 OLAP Server is executing the calculation in parallel
using four threads, for all databases in all applications.
Note: Changes in the essbase.cfg configuration file are not effective until
the ESSBASE service is restarted.
In Example 6-5, DB2 OLAP Server calculates all members for the dimensions
Product and Market using four threads, and it calculates all members for the
dimension Scenario using serial mode (one thread).
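The bodies of Example 6-3 and Example 6-5 are not reproduced here; purely as
an illustration of the syntax involved, settings matching those descriptions could
look like the following (the dimension names are the standard Sample.Basic
dimensions):
In essbase.cfg, for all databases in all applications, four threads:
CALCPARALLEL 4
In a calculation script, four threads for Product and Market and serial mode for
Scenario:
SET CALCPARALLEL 4;
CALC DIM (Product, Market);
SET CALCPARALLEL 1;
CALC DIM (Scenario);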
By default, DB2 OLAP Server uses the last sparse dimension in an outline to
identify tasks that can be performed concurrently.
Sometimes the distribution of data may cause one or more tasks to be empty,
that is, there are no blocks to be calculated in the part of the database identified
by a task, causing uneven load balancing. Empty tasks in the calculation process
are normal, although a high number of empty tasks can reduce the effectiveness
of parallel calculation.
You can verify the number of empty tasks in the application log like this:
(1012681) Empty tasks [907,1,0,0]
A high number of empty tasks can be caused, for example, if you have a FIX
statement on a member of the last sparse dimension.
The FIX command is often used to calculate a subset of the database. This is
useful because it allows you to calculate separate portions of the database using
different formulas, calculating each sub-section much faster than you could
otherwise. FIX commands can only be used in calculation scripts, not in outline
formulas.
But when you use the FIX command on the last sparse dimension, which DB2
OLAP Server uses to identify concurrent tasks for the parallel calculation, it can
generate a high number of empty tasks.
To resolve a high number of empty tasks caused by a FIX statement,
you can enable DB2 OLAP Server to use additional sparse dimensions in the
identification of tasks for parallel calculation. In this case, more and smaller tasks
would be created, increasing the opportunities for parallel processing and
providing better load balancing.
Note: Unless you are actually fixing on the last sparse dimension and
generating many empty tasks, there is little benefit in increasing CALCTASKDIMS.
If you increase CALCTASKDIMS too far, DB2 OLAP Server can generate such
a large number of tasks that performance may decrease instead of increase.
In most tests, increasing the number of task dimensions too much created
significant task-management overhead.
To increase the number of sparse dimensions used to identify tasks for parallel
calculation, use this procedure:
1. Add or modify CALCTASKDIMS in the essbase.cfg server configuration file,
or use the calculation script command SET CALCTASKDIMS at the top of the
script.
– Essbase.cfg:
CALCTASKDIMS [appname [dbname]] n
Example:
CALCTASKDIMS Onlinei Onlinei 2
– Calculation command:
SET CALCTASKDIMS [appname [dbname]] n;
The parallel calculation process can be executed using the Application Manager,
the ESSCMD commands, Integration Server, or Administration Services:
Application manager:
To execute a calculation process using the Application Manager, perform the
following steps:
a. Connect to DB2 OLAP Server (Server -> Connect).
b. Select the application and database.
c. Go to Database —> Calculate.
d. Choose whether you want to use the default calculation or a predefined
calculation script.
If you want to use the default calculation, you should use the first method
presented in 6.1.4, "Enabling parallel calculation" on page 232
(CALCPARALLEL in essbase.cfg), because the default calculation cannot
use a calculation script.
e. Click OK.
Figure 6-4 shows an example of the calculation process using Application
Manager.
ESSCMD:
To call the ESSCMD command interface, on Windows platforms:
a. Go to Start —>Programs —>IBM DB2 OLAP Server 8.1 —> ESSCMD
command interface.
To call the ESSCMD command interface, on UNIX platforms:
a. Log in with a DB2 OLAP Server user. This user should have the DB2 OLAP
Server environment variables configured. See the DB2 OLAP Server
installation Guide - “Setting up the environment to run the DB2 OLAP
Server”, on page 66.
b. Execute ESSCMD on an AIX command session.
Table 6-3 shows the different methods that can be used to enable parallel
calculation for each calculation ESSCMD command.
See the Technical Reference guide for the complete syntax of these
commands.
Integration Server:
Using Integration Server, you can only run a calculation process after a load
process (see Figure 6-5).
To execute a calculation using the Integration Server, follow these steps:
a. Open your OLAP Metaoutline, providing the data source name, user, and
password to access the data source.
b. Go to Outline —> Data Load.
c. Select one of the options: “Use default calc script” or “Specify calc script”.
If you want to use the default calculation, you should use the first technique
presented in 6.1.4, "Enabling parallel calculation" on page 232
(CALCPARALLEL in essbase.cfg), because the default calculation doesn't
use a calculation script.
d. In the next screen you can schedule your load process. If you don’t want to
schedule it, select the option Now.
e. Click Finish.
Administration Services:
You can use Administration Services to calculate DB2 OLAP Server
databases. You can execute the calculation process in the background and
continue using Administration Services for other purposes. (See “Managing
databases" on page 316, numbered list item 3 on page 330.)
Attention: Placing the largest sparse dimension at the end of the outline
for maximum parallel calculation performance may slow retrieval
performance. If you want to prioritize the query performance, you should
place the most queried sparse dimensions before the less queried sparse
dimensions.
The following two outline configurations illustrate this trade-off. In the first, the largest sparse dimension (3570 members) is not the last dimension in the outline; in the second, it has been moved to the end of the outline:
Dimension 1: Dense, 129 members
Dimension 2: Dense, 2 members
Dimension 3: Dense, 11 members
Dimension 4: Sparse, 4 members
Dimension 5: Dense, 4 members
Dimension 6: Sparse, 6 members
Dimension 7: Sparse, 7 members
Dimension 8: Sparse, 8 members
Dimension 9: Sparse, 8 members
Dimension 10: Sparse, 3570 members
Dimension 11: Sparse, 1245 members
Reordered outline, with the largest sparse dimension last:
Dimension 1: Dense, 129 members
Dimension 2: Dense, 2 members
Dimension 3: Dense, 11 members
Dimension 4: Dense, 4 members
Dimension 5: Sparse, 4 members
Dimension 6: Sparse, 6 members
Dimension 7: Sparse, 7 members
Dimension 8: Sparse, 8 members
Dimension 9: Sparse, 8 members
Dimension 10: Sparse, 1245 members
Dimension 11: Sparse, 3570 members
Note: The maximum number of threads that DB2 OLAP Server can use to
calculate in parallel is four. If you specify a number greater than four, DB2
OLAP Server will assume four.
Note: Unless you are fixing on the last sparse dimension, generating many
empty tasks, there is no benefit from increasing CALCTASKDIMS.
The following hints apply to both serial and parallel calculation. If you have
followed the recommendations above and the performance of the parallel
calculation is still not improving, consider reviewing the following additional
recommendations:
1. Block size and block density:
The data block is the fundamental data storage structure in DB2 OLAP
Server.
When the database data blocks are much smaller than 8 kilobytes, the index
is usually very large, forcing DB2 OLAP Server to write and retrieve the index
from disk. This slows down the calculation process.
A data block size of 8 kilobytes to 100 kilobytes provides the optimal
performance in most cases, but the optimal block size is model specific. The
best practice is to test and isolate the optimal block sizes for specific
databases.
To change the data block size, rearrange the dense and sparse dimension
configuration of the database.
3. Direct I/O:
DB2 OLAP Server can work with the buffer cache to store compressed blocks
using buffered I/O or direct I/O. Buffered I/O uses the file system's buffer cache,
while direct I/O bypasses the file system's buffer cache and uses RAM directly.
Direct I/O is able to perform asynchronous and overlapped I/Os. See 8.1, "Direct
I/O" on page 390 for more information.
Direct I/O can provide a faster response time for the calculation.
When you are using direct I/O, DB2 OLAP Server uses an area in memory,
called the data file cache, to store the compressed data (.pag files). Using direct
I/O, you can tune this cache to an optimal size.
Note: When using direct I/O, you should customize the data file cache
parameter. If this parameter is not configured correctly, or if the machine doesn't
have enough free RAM to allocate the data file cache buffer and starts paging to
disk, direct I/O will not improve performance.
To help you customize the right values for the data file cache parameter, you
can use the DB2 OLAP Server hit-ratio. See “DB2 OLAP Server hit ratios” on
page 264 for more information about how to monitor DB2 OLAP Server.
Note: This message only indicates the number of sparse dimensions that
DB2 OLAP Server analyzed to determine the subtasks (set of independent
tasks) for the parallel calculation. This message doesn’t indicate the
number of threads that DB2 OLAP Server will use to execute the parallel
calculation.
If the value specified in this message (X) is different from the value specified
in CALCTASKDIMS, the message (1012680) is complemented with the
following entry:
following entry:
Usage of calculator cache caused reduction in task dimensions
The following message indicates that DB2 OLAP Server changed the commit
blocks interval. DB2 OLAP Server does this during the parallel calculation to
optimize the performance. In this example, the commit blocks interval was
changed from the default to 5839 blocks.
(1012568) Commit Blocks Interval was adjusted to be [5839] blocks.
This message indicates the number of empty tasks in each subtask. In this
example, the first subtask has 907 empty tasks, the second has 1, and the
third and fourth subtasks have none.
(1012681) Empty tasks [907,1,0,0]
The number of empty tasks affects the gains you can receive from parallel
calculation. The ideal number of empty tasks is 0. See 6.1.5, “Identifying
concurrent tasks for parallel calculation” on page 234 for more information.
The following messages indicate that a formula in the outline or in the
calculation script prevented DB2 OLAP Server from executing the calculation in
parallel mode. If you receive one of these messages, DB2 OLAP Server will
calculate in serial mode.
Table 6-6 SET MSG ONLY versus EstimateFullDBSize versus real calculation (columns: SET MSG ONLY, EstimateFullDBSize, Result after a real calculation)
The accompanying figure shows the data load stages: a portion of the input data is organized by the prepare stage, and the organized data is handed to the write stage, where, with DLTHREADSWRITE=2, two threads write the blocks to disk.
DB2 OLAP Server looks at each stage as a task and uses separate processing
threads, in memory, to perform each task. When a data load stage completes its
work on a portion of data, it can pass the work to the next stage and start work
immediately on another portion of data. Processing threads perform their tasks
simultaneously on different processors.
The DB2 OLAP Server can execute threads in parallel, exploiting CPU
parallelism. But the DB2 OLAP Server load process doesn’t have the ability to
execute I/O parallelism.
I/O parallelism refers to the process of writing to, or reading from, two or
more I/O devices simultaneously. In parallel I/O you have multiple inputs in
parallel, asynchronously writing to multiple backup media in parallel. Each I/O
server is assigned the I/O workload for a separate file.
DB2 OLAP Server doesn’t execute I/O parallelism during the data load because
it doesn't split the input file into more than one file and it doesn't write the output to
multiple files in parallel. So the only parallelism possible during the load I/O is at
the hardware level.
DB2 OLAP Server can also execute parallel processing in a single processor
machine, taking advantage of processor resources that are left idle during I/O
thread wait time.
In Figure 6-8 the parallel load process is executed using three threads on a single
CPU. The CPU starts to run the first thread, but at some point that thread has to
wait for an I/O on disk. DB2 OLAP Server can take advantage of the idle
processor, and the second thread starts to run while the first thread is waiting for
the I/O.
Figure 6-8 Parallel load with three threads on a single CPU: in the first instant thread 1 executes while threads 2 and 3 wait; in the next instant thread 2 executes while thread 1 waits for disk I/O
To enable the parallel load (see Example 6-9), you should set the configuration
setting DLSINGLETHREADPERSTAGE to FALSE (or leave it unset, because the default is
FALSE), and you should set the number of threads you want to use for the
prepare stage (DLTHREADSPREPARE) and the number of threads you want to use for
the write stage (DLTHREADSWRITE).
See 6.2.1, "Understanding the parallel load" on page 250 for more information
about the load stages.
In Example 6-9, the DB2 OLAP Server is running the load process in parallel,
using 4 threads for the prepare stage and 3 threads for the write stage.
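Example 6-9 itself is not shown here; a minimal essbase.cfg sketch matching
that description (parallel load for all applications and databases, four prepare
threads and three write threads) would be:
DLSINGLETHREADPERSTAGE FALSE
DLTHREADSPREPARE 4
DLTHREADSWRITE 3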
In Example 6-11 the DB2 OLAP Server is executing the load process in a
single thread per stage, only for the database onlineid in application
onlinea. Other databases in application onlinea and other applications are
using DLSINGLETHREADPERSTAGE FALSE.
In Example 6-12 the DB2 OLAP Server is executing the load process in a
single thread per stage for all databases in all applications.
In Example 6-13 the DB2 OLAP Server is executing the load process in
parallel (if the DLTHREADSPREPARE and DLTHREADSWRITE are greater than 1) for
all databases in all applications. This is the default.
DLTHREADSPREPARE:
You should use this parameter to specify how many threads DB2 OLAP
Server may use during the preparation stage of the load process. The
preparation stage organizes the source data in memory for storing the data
into blocks.
Syntax:
DLTHREADSPREPARE [appname [dbname]] n
You can specify DLTHREADSPREPARE for individual databases, all databases
within an application, or for all applications and databases on the server.
In Example 6-14 the DB2 OLAP Server is executing the preparation load
stage in parallel, using 2 threads, only for the databases that are in
application onlinea. Other applications are using DLTHREADSPREPARE 1, which
is the default.
In Example 6-15 the DB2 OLAP Server is executing the preparation load
stage in parallel, using 3 threads, only for the database onlineid in
application onlinea. Other databases in application onlinea and other
applications are using DLTHREADSPREPARE 1.
In Example 6-16 the DB2 OLAP Server is executing the preparation load
stage in parallel, using 4 threads, for all databases in all applications.
DLTHREADSWRITE:
You should use this parameter to specify the number of threads the DB2
OLAP Server may use during the write stage of the load process. The write
stage writes blocks from the memory to the disk.
Syntax:
DLTHREADSWRITE [appname [dbname]] n
You can also specify DLTHREADSWRITE for individual databases, all databases
within an application, or for all applications and databases on the server.
In Example 6-17 the DB2 OLAP Server is executing the write load stage in
parallel, using 4 threads, only for the databases that are in application
onlinea. Other applications are using DLTHREADSWRITE 1, that is the default.
In Example 6-18 the DB2 OLAP Server is executing the write load stage in
parallel, using 2 threads, only for the database onlineid in application
onlinea. Other databases in application onlinea and other applications are
using DLTHREADSWRITE 1.
In Example 6-19 the DB2 OLAP Server is executing the write load stage in
parallel, using 3 threads, for all databases in all applications.
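As an illustration of this scoping (using the application and database names
from the text), the three alternative essbase.cfg forms are shown below: the
first applies to all databases in application onlinea, the second only to database
onlineid in application onlinea, and the third to all databases in all applications.
Normally you would choose only one scope per setting:
DLTHREADSWRITE onlinea 4
DLTHREADSWRITE onlinea onlineid 2
DLTHREADSWRITE 3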
The annotations in Figure 6-10 explain the parameters of the IMPORT command:
the process executes only a data load (it does not execute a member load); the
input data is a flat file, identified by the path where the input file is located; the
value 1 indicates that the input file format is text; the value n specifies that no
rules file is used to load the data; and the value y indicates that if errors occur
during the data load, the process is canceled.
Figure 6-10 Data load using ESSCMD command interface
d. In the next screen you can schedule your load process. If you don’t want to
schedule it, select the option Now.
e. Click Finish.
Administration Services:
You can use Administration Services to load DB2 OLAP Server databases.
Using Administration Services, you can execute a load process in the background
and continue using Administration Services for other purposes. See "Managing
databases" on page 316, numbered list item 5 on page 329.
When you start the load process, the following message is added in the
application log file:
(1013091) Received Command [Import] from user [olapsi]
If you specify that you are doing the load process using a rule file, the following
message is added in the application log file:
(1019025) Reading Rules From Rule Object For Database [testlu]
If the parallel load is enabled, the following message is added in the application
log file:
(1003040) Parallel dataload enabled: [X] block prepare threads, [Z] block write
threads.
This message is generated only when the load process is finished. X is the value
that is specified in the DLTHREADSPREPARE configuration setting parameter.
Z is the value specified in the DLTHREADSWRITE configuration setting parameter.
If the parallel load is disabled, the following messages are generated in the
application log file:
(1013091) Received Command [DataLoad] from user [Administrator]
(1003037) Data Load Updated [X] cells
(1003024) Data Load Elapsed Time : [Z] seconds
You can use the following approach to monitor the data load process:
1. Start with the default parallel data load processing thread values whereby
DB2 OLAP Server uses a single thread per stage.
2. Perform and time the data load.
3. Monitor the CPU and memory activity during the data load process. See
“Tools to monitor parallel calculation and load” on page 264 for more
information about the major monitoring features provided by operating
systems.
Figure 6-12 contains an example of how to monitor CPU and memory using
the vmstat tool. We can see that the CPU is 66% idle, the machine isn’t
paging, and the wait for I/O is 27%.
If there is an I/O bottleneck during the load, using parallel load will not help
(it only causes more data to be written from different threads over multiple CPUs).
4. Monitor the data cache activity during the data load process. While executing
a parallel load, each thread specified by the DLTHREADSWRITE setting uses an
area in the data cache equal to the size of an expanded block. Depending on
the size of the block, the number of threads, and how much data cache is
used by other concurrent operations during a data load, it may be possible to
require more data cache than what is available.
The data cache is the memory area that is allocated to hold uncompressed
data blocks. The following formula determines the utilization of data cache
during the parallel load process:
data cache used during the parallel load = DLTHREADSWRITE x expanded
block size + data cache used by other concurrent operations.
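As a rough worked example (the block size here is an assumption, not a value
from the text): with DLTHREADSWRITE 3 and an expanded block size of 80 KB, the
write threads alone use about 3 x 80 KB = 240 KB of data cache, on top of
whatever other concurrent operations are using. If that total approaches the
configured data cache size, either increase the data cache or reduce the number
of write threads.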
As described in 6.2.1, "Understanding the parallel load" on page 250, the DB2
OLAP Server can execute threads in parallel, exploiting CPU parallelism, but the
DB2 OLAP Server load process doesn’t have the ability to execute I/O
parallelism.
The data load process is a heavily I/O-bound task. The DB2 OLAP Server
parallel load (only CPU parallelism) can increase the load performance, although
the most effective strategy to improve the performance of the data load process is to
minimize the number of disk I/Os that DB2 OLAP Server must perform while
reading or writing to the database.
Because DB2 OLAP Server can increase the number of records processed per
time period, there is more stress on the I/O subsystem to write more blocks in
the same time period than in previous releases. A block write typically involves
only the page file. DB2 OLAP Server does not allow physical partitioning of the
page file, which means all blocks are written to the same file. So the only
parallelism possible during I/O is currently at the hardware level. This is the
reason the last stage of the data load, which does the block writes, is single
threaded by default. If your disk hardware allows parallelism, then enable
multiple write-back threads.
To minimize the number of disk I/Os, you can use the following techniques:
Group sparse member combinations: Organize the source data
corresponding to the physical block organization.
See IBM DB2 OLAP Server Database Administrator’s Guide, SC18-7001 for
more information about these methods.
You can also use the direct I/O and (or) cache memory locking configurations to
provide a faster response time for the data load.
Direct I/O:
When you are using direct I/O, DB2 OLAP Server uses an area in RAM, the
data file cache, to store the compressed data (.pag files). Using direct I/O,
you can tune this cache to an optimal size (the data file cache parameter).
Note: When using direct I/O, you should customize the data file cache parameter.
If this parameter is not configured correctly, or if the machine doesn't have
enough free RAM to allocate the data file cache buffer and starts paging to
disk, direct I/O will not improve performance.
To help in customizing the right values for the data file cache parameter, you
can use the DB2 OLAP Server hit-ratio. See “DB2 OLAP Server hit ratios” on
page 264 for more information about how to monitor DB2 OLAP Server.
Data cache locking:
Cache memory locking gives the DB2 OLAP Server kernel priority use of
system memory. Enabling this feature may improve load performance
because the system memory manager does not need to swap and reserve
space for the memory used by DB2 OLAP Server caches. If you enable this
feature, leave at least one-third of the system memory available for non-DB2
OLAP Server kernel use.
Cache memory locking can only be enabled if you are using direct I/O
instead of buffered I/O.
See 8.1.5, “The cache memory locking option” on page 394 for more
information.
Previously we showed you how to use the parallel load and calculation and
discussed the benefits of using these new DB2 OLAP Server version 8.1
features. In this section we offer you a brief description of the tools you can use
to monitor the DB2 OLAP Server environment during the parallel data load and
calculation to better optimize your environment.
You can monitor the following DB2 OLAP Server caches using these hit ratios:
Index cache
Data cache
Data file cache
You can monitor the data caches using one of the following tools:
Application manager:
To use the application manager:
a. Go to Database —> Information —> Run-time.
b. Verify the hit ratio data cache value.
c. Use the Refresh button to update the data.
Figure 6-13 shows an example of monitoring the data cache using the hit ratio
of Application Manager. We can see the number 40 for the hit ratio on the
data cache.
In this case, 40% of the time that data was requested, it was already in the
cache.
Administration Services:
The new Administration Services feature also allows you to monitor the DB2
OLAP Server hit ratios. For more information about how to use this feature,
see “DB2 OLAP Server hit ratios” on page 264.
This tool gathers a variety of statistics regarding the performance of the system
and the connected applications. The output of GETPERFSTATS can vary
depending on what the system has just done, how long statistics have been
gathered, and the persistence of the gathered statistics.
For a complete description of the GETPERFSTATS output, see the online
documentation: DB2 OLAP Server Technical Reference.
To use the GETPERFSTATS command:
a. Open an ESSCMD command interface.
b. Log in to the DB2 OLAP Server: login <hostname> <user> <password>.
c. Select the application: select <appname> <dbname>.
d. Execute: getperfstats.
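For illustration, such a session against the environment used elsewhere in this
chapter might look like this (the host, user, password, application, and database
names are placeholders):
LOGIN "sicily" "admin" "password";
SELECT "onlinea" "onlineid";
GETPERFSTATS;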
Memory information.
In UNIX systems you can use the vmstat tool to monitor CPU and memory
activity during the calculation and data load.
Table 6-7 shows some values monitored by vmstat for AIX and how to
interpret them (please note that a memory page is 4K).
memory - avm: Active virtual pages used. The number in the avm field divided by
256 yields the value in megabytes.
memory - fre: Free RAM pages. You can use this information to determine if
there is enough memory to run all the processes.
page - pi: Pages in per second. You can use this information to determine
whether the system is paging.
page - po: Pages out per second. You can use this information to determine
whether the system is paging. If the paging numbers are high and the fre column
is close to zero, then you should either reduce the number of memory-hungry
programs or buy more memory. You can reduce the memory used by applications
by reducing the size of the DB2 OLAP Server caches.
cpu - us: Percentage of CPU used by user processes. You can use this
information to determine how user programs are using the CPU (this includes
the DB2 OLAP Server processes).
cpu - sy: Percentage of CPU used by the system. You can use this information
to determine how the operating system is using the CPU.
cpu - id: Percentage of CPU idle time. This can help you determine if your
CPUs are idle during the OLAP Server load process.
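For example, to sample these values every 5 seconds, 10 times, while a load or
calculation is running, you can give vmstat an interval and a count (both values
here are arbitrary):
vmstat 5 10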
You can also use other tools provided by the operating system or you can use
third-party tools to view and analyze system utilization.
The DB2 OLAP Server parallel export feature has the ability to execute I/O
parallelism. I/O parallelism refers to the process of writing to, or reading
from, two or more I/O devices simultaneously. In parallel export you can have
multiple outputs in parallel, asynchronously writing to multiple media in parallel.
The DB2 OLAP Server parallel export feature also has the ability to execute CPU
parallelism, because multiple threads (one for each output file) are used to
execute the process.
Export
Exporting data copies it to an ASCII text file (or files) that you specify, and this
does not compress the data. The export file contains data only. The export output
does not include control, outline, or security information.
To load data into a DB2 OLAP Server database from the ASCII files generated by
a parallel export, you must specify all the files generated by the parallel export.
You cannot use the Application Manager to execute export in parallel. Using
Application Manager you can only execute export in serial; the output is only one
file.
Note: Administration Services V6.5 does not have the ability to execute export
in serial or parallel mode.
Important: Using this method you cannot create output files on different
drives or paths. You can specify only one path and base name.
-in input_filename: The -in flag specifies that you will use a text input file to
determine the names of the parallel export output files. See "Input file method:"
on page 271.
output_filename: If you don't specify the -in flag, DB2 OLAP Server assumes that
you will specify a base name to be used as the output file name. See "Base
output file name method:" on page 271.
Important: If you don't specify the full path for the base name, DB2 OLAP
Server will create the files in the ARBORPATH\app directory. For example,
executing the command "PAREXPORT -threads 2 onlinexp 1 0", DB2 OLAP
Server will create the files onlinexp1 and onlinexp2 in ARBORPATH\app.
all | level0 | input: You should choose one of these options. This determines the
level of data that will be exported. all is the default; if you don't specify all,
level0, or input, DB2 OLAP Server will assume all.
in columns: Specifies the columnar format for exporting the data. If this flag is
not specified, DB2 OLAP Server will export data in non-columnar format, which
is the default.
server: If you don't specify this flag, DB2 OLAP Server will assume server.
FILE_NAME: Specifies the output file paths and names. Use commas to
separate the file names.
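For illustration only, a MaxL export statement combining these options might
look like the following; the application, database, and output paths are
placeholders, placing the two files on different drives is what enables I/O
parallelism, and the exact syntax should be verified in the Technical Reference:
export database onlinea.onlineid level0 data in columns to data_file
'd:\exports\exp1.txt', 'e:\exports\exp2.txt';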
These status messages can help you confirm that the export process is executing
in parallel, confirm the number of threads used, monitor the elapsed time, and
determine whether the process completed successfully.
You can use this message to confirm that the number of threads is correct. If it is
not correct, you can decide to cancel the export operation and re-execute it with
the correct value.
For each thread that finishes the export process, the following message is
generated:
(1005031) Parallel export completed for this export thread. Blocks Exported:
[x]. Elapsed time: [9.99].
When executing the MaxL export command, the following messages are
generated at the end of the process, indicating that it finished successfully:
OK/INFO - 1013270 - Database export completed ['<appname>'.'<dbname>'].
DB2 OLAP Server can also exploit I/O parallelism, splitting the output into
multiple files (up to eight files). Parallel I/O refers to the process of writing to two
or more I/O devices simultaneously, so in order to have parallel I/O, you should
specify different devices for the output files.
To specify different devices using the PAREXPORT command, you should use
the input file method (an input file that contains the names of the output files for
the PAREXPORT command).
You can also use the MaxL export command to specify different devices to
which to export your database data.
See 6.3.1, "Running parallel export" on page 271 for more information about the
PAREXPORT ESSCMD command and the MaxL export command.
This is a new feature of DB2 OLAP Server version 8.1 and works the same on all
operating systems.
When DB2 OLAP Server creates multiple export files, it uses the requested file
name for the main file. An underscore and a sequential cardinal number are
appended to the names of the additional files, starting with _1.
For example, if the requested file name is expall.txt and the exported data would
exceed 6GB, DB2 OLAP Server creates four files, naming them: expall.txt,
expall_1.txt, expall_2.txt, and expall_3.txt.
Administration Services works with DB2 OLAP Server Release 8.1 or higher. It
also works with the previous version of DB2 OLAP Server (v7.1 fixpack 8), but
not all the features are available from Administration Services Console. For more
information about which OLAP Server releases support which features, see
Release Compatibility.
The accompanying figure shows the Administration Services architecture, with an HTTP listener for console connections, TCP/IP connections to the OLAP Servers, and the Administration Server data store.
Managing Administration Services
– Administration Services Console: Both the middle-tier Administration Server
and the client console are Java products that run on multiple platforms. The
console provides an intuitive interface and advanced tools to help you
implement and manage DB2 OLAP Server.
– User single sign-on to the OLAP Server enterprise environment: When you
log into Administration Services, the Administration Server handles your
connections to OLAP Servers, applications, and databases. You do not need
to connect to individual DB2 OLAP Server objects or servers after your initial
login to Administration Services.
Managing security
– Migrate DB2 OLAP Server users and groups across OLAP Servers: You can
migrate DB2 OLAP users and groups across DB2 OLAP Servers,
independently of an application, to and from any platform supported by DB2
OLAP Server. You can migrate individual users, or you can migrate multiple
users at one time.
– Propagate user passwords across servers: You can change a user's
password and then propagate the new password to other DB2 OLAP Servers.
Managing OLAP Servers
– View and compare information for different OLAP Server objects at the same
time: The Administration Services Console presents most OLAP Server
information in non-modal windows. This means that you can view information
for multiple objects at the same time. For example, you can open properties
windows for multiple databases at the same time.
– Manage user locks on data and manage active user sessions at server,
application, and database levels: The Administration Services Console
provides a Locks window that enables you to view and manage user locks on
data for an entire OLAP Server, an application, or a database. A session
window enables you to manage individual user sessions and requests.
Managing applications
– Migrate applications and databases across OLAP Servers: Use the Migration
Wizard to migrate applications and databases across OLAP Servers, to and
from any platform supported by DB2 OLAP Server. For example, you can
develop and test an application on a Windows server and then migrate it to a
production server running UNIX. When you migrate applications and
databases, you can specify how objects and security information are migrated
with the application.
Managing logs
– Filter, search, and analyze OLAP Server logs and application logs: Using Log
Analyzer, you can filter, search, and analyze OLAP Server logs and
application logs and even see the log information in graphical reports.
Managing scripts
– New MaxL Script Editor to manage and execute MaxL scripts and statements:
A new MaxL Script Editor is integrated with the Administration Services
Console. From the MaxL Script Editor, you can create, edit, save, and execute
MaxL statements and scripts. You can also use the editor to type and execute
MaxL statements.
Miscellaneous
– Background processing to execute calculations: You can execute calculations
in the background. You can then perform other tasks or exit the console while
the process continues to run. A message panel in the console window tells
you when the process is completed.
– E-mail the contents of the console window, such as properties windows and
log windows: You can send OLAP Server information to other administrators
or to Hyperion Technical Support via e-mail. For example, you can send
properties information or log information via e-mail.
The accompanying figure shows the Administration Services deployment: the Administration Services Console (a Java client running on Windows or UNIX) connects to the Administration Services Server over HTTP, and the Administration Services Server connects to the DB2 OLAP Servers over TCP/IP.
Note: Administration Services does not support multiple DB2 OLAP Servers
running on different port numbers. All DB2 OLAP Servers you connect to
through Administration Services must be running on the same port. If you
have changed the default port for a DB2 OLAP Server, you need to change
all DB2 OLAP Server ports to use that port number. For more information, see
7.3, “Administration Services configurations” on page 283.
In the following sections we show you, step-by-step, how to create, delete, and
rename users and groups, how to add users in a group, and how to change user
privileges.
1. Creating users:
To create users in the DB2 OLAP Server using the Administration Services
Console, follow these steps:
a. From Enterprise View or a custom view, find the DB2 OLAP Server for
which you want to create a user.
b. Expand the Security icon below that server, and then select the Users
icon.
c. Right-click and select the option Create User from the pop-up menu. See
Figure 7-5.
d. In the dialog box, Create User on OLAP Server, enter the username.
Important: Check the Disable username option only if you want to lock the
user out of the system for some reason. Only a Supervisor can re-enable a
username that has been disabled.
g. In the option group, User type, select either User or Supervisor. See the
DB2 OLAP Server Administration Guide, Chapter 15, for more information
about DB2 OLAP Server privileges.
h. Click OK to add the user.
Note: If the user you are creating will need to use Administration Services to
manage DB2 OLAP Server, you must also create that user on the
Administration Services Server. See “Creating users in Administration
Services” on page 372 for information about how to create a user on the
Administration Server.
2. Creating groups
To create a group of users in the DB2 OLAP Server using Administration
Services Console, follow these steps:
a. From Enterprise View or a custom view, find the DB2 OLAP Server for
which you want to create a group.
b. Expand the Security icon below that server, and then select the Groups
icon as shown in Figure 7-6.
c. Right-click and select the option Create Group from the pop-up menu.
d. In the dialog box, Create Group on OLAP Server, enter the group name.
e. In the option, Group type, select either User or Supervisor.
f. Click OK to add the group.
3. Deleting/Renaming users and groups
To delete or rename users and groups in the DB2 OLAP Server using the
Administration Services Console, follow these steps:
a. From Enterprise View or a custom view, find the DB2 OLAP Server for
which you want to delete or rename a user or a group.
b. Expand the Security icon below that server, and expand the Users or
Groups icon.
Figure 7-7 Using Administration Services to delete and rename users or groups
d. If you are renaming the user name or group name, input the new name.
e. Click OK.
f. Click the single-arrow button that points to the Member of groups list box.
g. Click Apply to add the user to the groups listed in the Member of groups
list box.
5. Granting application and database permissions
The permissions you grant a new user or group apply to the entire OLAP
Server. After creating the user or group, use the following steps to grant the
user or group different permissions for specific applications and databases:
a. From Enterprise View or a custom view, expand the Security icon under
the appropriate DB2 OLAP Server.
b. Expand the Users or Groups icon, and select the user or group to whom
you want to grant application/database permissions.
c. Right-click and select Edit properties from the pop-up menu.
d. Select the App/Db Access tab and expand the Applications tree as
shown in Figure 7-10.
Note: In this example, the App/Db Access tab is not selectable because the
DB2 OLAP user being used to connect to Administration Services has full
access to all applications and databases.
This new Administration Services feature can help you to replicate security in
your Enterprise Services cluster environment.
You can use the Administration Services migration feature to migrate users,
passwords, groups, and privileges between multiple servers in the cluster
environment. In an Enterprise Services cluster environment, you should have the
same users, passwords, groups, and privileges on the multiple DB2 OLAP
Servers that participate in the cluster.
Enterprise Services also has features to migrate security (users and
groups), but it doesn't migrate passwords and privileges.
Note: The IBM Security Migration Tool is another alternative for migrating users
and groups from one DB2 OLAP Server to another. You can use the IBM
Security Migration Tool to migrate users between machines without needing
the DB2 OLAP Server client installed on the machine on which you are running
Security Migration Tool. See 8.2, “Security Migration Tool” on page 396 for
more information.
e. Click OK to finish.
The users user1, user2, and user3 are automatically migrated from the
localhost (DB2 OLAP Server Windows machine) to Cayman (DB2 OLAP
Server AIX machine). All the privileges and characteristics of these users are
also migrated. See Figure 7-13.
The figure callouts note that a single user is selected for migration, that Create/delete users, groups, applications is disabled for that user (as it is on the localhost source), and that user1 is migrated with Application Designer access on the application Demo; the wizard panels show the source DB2 OLAP Server and username and the target DB2 OLAP Server and new username.
Propagating passwords
A new function, implemented by Administration Services, provides the capability
to propagate a user password. It allows you to change a user's password on
multiple DB2 OLAP Servers at the same time. To use it, you should have the
same user created on the multiple DB2 OLAP Servers to which you want to
propagate the new password.
This function can help you administer multiple DB2 OLAP Servers, for example,
in an Enterprise Services cluster environment.
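The same change can also be applied manually with MaxL by logging in to each server and running the same statement; a minimal sketch (the user name and password are illustrative):

   alter user Olapuser1 set password 'pass1';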
In Figure 7-15 the new password, pass1, is propagated to two DB2 OLAP
Servers (Sicily and Cayman) at the same time. These two DB2 OLAP Servers
are in an Enterprise Services cluster environment.
(Figure 7-15 residue: the diagram shows the Administration Services console propagating the password pass1 to user Olapuser1 over TCP/IP OLAP sessions on both DB2 OLAP Servers of the Enterprise Services cluster; each server holds the users Olapuser1 and Olapuser2.)
f. When all servers to which you want to propagate the password are listed
in the Selected list box, click OK.
You can select an authentication model when you are creating a user. You can
change the authentication model for an existing user by editing the user's
properties. You can also select an authentication model to use when you copy an
existing user.
Use the following procedure to create DB2 OLAP Server users with external
authentication by using Administration Services:
1. Add the AUTHENTICATIONMODULE (see Table 7-2) configuration setting to
the server configuration file essbase.cfg (see Example 7-1). The essbase.cfg
file is stored in the ARBORPATH\bin directory.
library_name: The directory path and name of the DB2 OLAP Server library that implements the authentication protocol. This library is located in ARBORPATH\bin. The library name depends on the operating system where you have installed DB2 OLAP Server:
– Windows: essldap.dll
– Solaris: libessldap.so
– AIX: libessldapS.a
– HP: libessldap.sl
max_wait_time: You should use the value "x". This value is not used in this DB2 OLAP Server version; it is reserved for future use.
@hostname:portname: The host name and port number of the server that the authentication protocol contacts to authenticate the user. The default port used by an LDAP server is 389.
Note: You must type the character "@" before the host name, and type the character ":" between the host name and port number.
In Example 7-1 the LDAP protocol is used to authenticate DB2 OLAP users.
The DB2 OLAP Server is installed on an AIX machine, so we are using the
library file libessldapS.a. The x should be specified in this DB2 OLAP version
(it will be used in future releases). The portion of the DN (Distinguished
Name) used in our LDAP server is: cn=ldapuser,o=ibm. The hostname of the
LDAP server is fermium and the port used by the LDAP server is 389.
The DN should not have any spaces.
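Based on Table 7-2 and the values above, the essbase.cfg entry for this environment would look roughly like the following sketch; the exact placement of the DN portion (cn=ldapuser,o=ibm) within the setting should be taken from Example 7-1 and the DB2 OLAP Server Technical Reference:

   ; external authentication through LDAP on an AIX server (sketch)
   AUTHENTICATIONMODULE LDAP $ARBORPATH/bin/libessldapS.a x @fermium:389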
h. Click OK.
The following new features are available in Administration Services to help you
manage your DB2 OLAP Servers, applications, and databases:
Migrating applications and databases across servers: To facilitate
application migrations from development to testing or production environments,
you can copy entire applications from one DB2 OLAP Server to
another, regardless of platform. You can also use this option to copy
applications and databases in your Enterprise Services cluster environment.
Run processes in background: Administration Services allows you to
perform multiple tasks simultaneously, and run processes in the background.
You can perform cross-server operations, and manage active user sessions
and requests.
View and compare information: You can view and compare information for
different DB2 OLAP Server objects at the same time. For example, you can
open properties windows for multiple databases at the same time.
Using Administration Services, you can see properties of different DB2 OLAP
Servers at the same time. To do the same in Application Manager you need to
open two or more Application Manager sessions.
Use the following procedures to display and change DB2 OLAP Server
properties and to manage user sessions and locks on your different DB2 OLAP
Servers.
In the DB2 OLAP Server Properties dialog box you can display and change
security properties, and display license, statistics, environment, operating
system, and disk properties.
See Table 7-3 for more details about the DB2 OLAP Server properties:
Security tab:
– Login attempts allowed before username is disabled: Number of consecutive incorrect user name and/or password entries allowed before the user name is disabled. This number must be between 0 and 64,000. The default setting is 0, meaning that the option is turned off.
– Number of inactive days before username is disabled: Specifies the number of days a user account may remain inactive before the user name is disabled. The default setting is 0, meaning that the option is turned off.
– Number of days before user must change password: Number of days a user may retain the current password. After the allowed number of days, the user is prompted at login to change the password. This number must be between 0 and 64,000. The default setting is 0, meaning that the option is turned off.
– Inactive limit (minutes): Specifies the number of minutes of user inactivity permitted before the system automatically disconnects the user. The default is 60 minutes, the minimum is 5 minutes, and there is no limit for the maximum. The value 0 means that inactive users stay connected until the server is shut down.
– Check for inactivity every (minutes): Specifies how often DB2 OLAP Server checks for inactivity, in minutes. The default is 5 minutes, the minimum is 1 minute, and there is no limit for the maximum.
License tab:
– License information: Version displays the current version of DB2 OLAP Server running on the machine. License number displays the DB2 OLAP Server serial number. License expiration displays the expiration date of the DB2 OLAP Server license, if you are using a trial version. Number of installed ports displays the total number of ports installed on the server; DB2 OLAP Server provides one additional reserve port for Supervisors. Network protocol displays the network protocol installed on the server machine.
– Installed options: Lists the features that were put in place when DB2 OLAP Server was installed.
– Essbase system files: Lists the DB2 OLAP Server DLL files in server memory, their locations, and version numbers.
Statistics tab:
– Statistics information: Server start time displays the last time the OLAP Server was started. Elapsed time displays how long, in hours:minutes:seconds, the OLAP Server has been running since the last time it was started. Ports in use displays the number of ports in use on the server (client connections). Ports available displays the number of ports available for use on the server; if the Ports available number is -1, the reserve port for Supervisors is already in use.
Environment tab:
– Essbase environment variables: Displays DB2 OLAP Server environment variables, as defined during installation (for example, ARBORPATH).
OS tab:
– Operating system: Displays the OS version, the date and time the operating system was started or rebooted, how long the OS has been running, and the current time.
– CPU: Displays the number and type of the CPUs on the server machine.
Disk drives tab:
– Disk drive information: Displays the drive name, the label of the volume, the type of drive (Fixed, Removable, RAM, Remote, or Unknown), the file system used by the drive (FAT, HPFS, or NTFS), the total amount of space on the drive in kilobytes, the amount of space used on the drive in kilobytes, and the free (available) space on the drive in kilobytes.
Using Application Manager, you would have to open one Application Manager session
for each DB2 OLAP Server whose properties you want to display or change.
2. Managing sessions:
Using Administration Services, you can log off user sessions.
A session is an instance of a user who is actively logged in to a DB2 OLAP
Server at the system, application, or database level. A session begins when
the user logs in, and ends when the user logs out or is logged out by
someone else. A user can have more than one session open at the same
time.
Note: If you are using the Enterprise Services connection pooling feature, you
can have multiple user connections to DB2 OLAP Server using the same
session. In this case, Administration Services displays one session for the
Enterprise Services user, which can represent multiple user connections. See
5.3.3, “Connection pooling” on page 134.
In Figure 7-21 we selected the session 1432354805 and these options: Log
off, all instances of user, on selected database. All connections of the selected
user to the selected database will be disconnected.
In this case all connections for user Administrator on database onlinei are
logged off (sessions 1432354805 and 3789553650).
Figure 7-22 shows the result of the log-off.
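A similar log-off can also be issued from MaxL; a hedged sketch that logs off every session of one user (check the alter system grammar in the MaxL Statements reference):

   alter system logout session by user Administrator;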
3. Managing locks:
Occasionally, you may need to release a user lock. For example, if you
attempt to calculate a database that has active locks on data, the calculation
must wait when it encounters a lock. If you release the lock, the calculation
can continue.
Removing a user's lock disconnects the user from the current session.
To open the dialog box, OLAP Locks, go to Enterprise View or a custom view,
expand the OLAP Server, and double-click the Locks icon.
Managing applications
Using Administration Services you can create applications, start and stop
applications, delete and rename applications, and see the properties of different
applications at the same time.
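Some of these operations can also be scripted; a minimal MaxL sketch (the application name is illustrative):

   create application Test2;
   alter system load application Test2;    /* start the application */
   alter system unload application Test2;  /* stop the application */
   drop application Test2;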
Data storage type: You cannot change this option in DB2 OLAP Server version 8,
because it no longer supports the DB2 relational database storage type.
General tab:
– Description and database type: The description is optional and cannot be longer than 79 characters. The database type indicates whether the database is normal or currency.
– Minimum access level: Determines the minimum access level for the database. Select one of the following:
None: all users can access the database according to their permissions.
Read: grants all users access to view files, retrieve data values, and run report scripts.
Write: grants all users read access and the ability to update data values.
Calculate: grants all users the ability to perform calculations.
Db designer: grants all users full access to the database.
– Data retrieval buffers: Changing these buffers can only increase performance of data retrieval (end-user reports such as Spreadsheet Add-in and report scripts); it will not increase performance of load and calculation processes. Buffer size is used to process and optimize retrievals from Spreadsheet Add-in and report scripts. Sort buffer size holds unsorted data to be sorted during retrievals.
Dimensions tab:
– Dimensions information: Displays the number and name of the dimensions defined in the outline, the number of members in each dimension, and the number of members that can store data (shared or label only members cannot store data). Indicates whether each dimension is sparse or dense.
Statistics tab:
– Statistics: Displays the time the database was started, how long it has been running, the number of users connected to it, and the date and time at which DB2 OLAP Server collected the information for the Statistics tab.
– Run-time: Displays run-time statistics for each cache (hit ratios) and for read/write operations.
Caches tab:
– Cache memory locking: Select this check box to lock the memory used by the index cache, data file cache, and data cache into physical memory. It gives the DB2 OLAP Server kernel priority use of system RAM. It can only be used if direct I/O is enabled.
– Cache size: Determines how much memory DB2 OLAP Server allocates for the index cache, data file cache, and data cache.
– Index page settings: Specify the size of the index page. In some DB2 OLAP Server versions you cannot change this value. This setting does not apply when using direct I/O (which uses a fixed value of 8 KB). A change to this value only takes effect if the database is empty.
Transactions tab:
– Committed access: Select this option to allow transactions to hold read/write locks on all data blocks involved with a transaction until the transaction completes and commits. You can determine how many seconds a transaction waits for access to locked data blocks (Wait), and you can enable or disable read-only user access to data blocks that are locked for the duration of another concurrent transaction (pre-image access).
Storage tab:
– Current I/O access mode: Displays the current (active) I/O access mode for the database.
– Pending I/O access mode: Select the I/O mode that DB2 OLAP Server will use to access the database. Buffered I/O uses the file system’s buffer cache. Direct I/O bypasses the file system’s buffer cache and is able to perform asynchronous, overlapped I/O, providing faster response time.
Currency tab:
– Currency information: You can select a currency database to link to for currency conversion calculations. If no currency database is linked to the current database, the currency database is [None]. You can use the multiply (multiplies local data values in the database by exchange rates) or divide (divides the data values in the database by exchange rates) conversion method. You can specify the name of the member of the currency type dimension to be used as the default in currency conversions (default currency type member). The country dimension node displays the name of the country dimension as defined in the outline. The time dimension node displays the name of the time dimension as defined in the outline. The category dimension node displays the name of the accounts dimension tagged as Cur Category in the outline. The currency partition node displays the name of the dimension tagged as Currency Partition in the outline.
Modification tab:
– Modifications information: The operation column indicates the type of operation performed, and the user column displays the name of the user who performed the operation. You can also see the start and end time for the operations. If appropriate, DB2 OLAP Server displays a note about the operation.
The properties of dimensions and members define the roles of the dimensions
and members in the design of the multidimensional structure.
You can edit the following dimension or member properties in the member
properties information folder:
Member Information:
– Name: Member name
– Comment: Comment box
– Consolidation: The method used to consolidate members (addition,
subtraction, multiplication, division, percent or ignore)
– Two-Pass calculation: To specify that members must be calculated in two
passes. This option is not valid for attribute dimensions and members.
– Data storage: To specify the option used to store data in this member
(store data, dynamic calc and store, dynamic calc, never share, label only
and shared member).
Figure 7-31 displays the outline properties for the database e-BankDB on the
DB2 OLAP Server installed on the cayman machine (hostname).
You can edit the following outline properties in the outline editor properties folder:
Case-sensitive members: Selects whether members whose names differ
only by case are treated as separate members in all member comparison and
search operations. You should set true or false. The True option indicates that
the members are treated as case-sensitive. The default is false.
Alias table: Right-click this option and you can create an alias table in the
active outline, clear all tables in the active outline or delete all tables in the
active outline. Right-click the table name and you can set the table as active,
create another table, rename, copy, clear and delete the specific table. You
can also export and import an alias table, using the ALT file type.
Attribute settings: For the Prefix/Suffix format, define whether a prefix or suffix
attaches the value to the member name; for example, Population_50000.
Calculation dimension names: Change these values only if you want a
different name for the Attribute Calculations dimension or for the names of
any of its members (sum, count, minimum, maximum and average). The
names specified on these boxes are the names used on reports and
spreadsheets.
Boolean, date, and numeric attribute settings: Use these options to
modify member names for Boolean and date attribute dimensions. You can
specify a value for true and for false. For example, you can specify “Yes” for
true and “No” for false.
You can also edit the outline properties using the outline toolbar. Figure 7-32
explains the main buttons in the outline editor toolbar. The buttons change
according to the object that is selected in the outline.
(Figure 7-32 callouts: Sort the members, Add a child, Add a sibling, Delete the member, Edit formula.)
The Administration Services outline editor has a new graphical interface to help
you define the calculation formulas for your outline (the Formula editor). To access
the Formula editor from the outline editor, right-click the member name, select
Edit, and then select the Formula folder. Figure 7-33 shows the Formula editor
components.
In the new Formula editor you can select member names into the editing panel
without typing them. It also has function templates from which you can choose
formulas to be pasted into the editing pane. With the member selection and the
function templates, you can build a quick and easy prototype of your formula
using only the mouse. You can then customize the formula in a text editor.
For more information about developing formulas, see the DB2 OLAP Server
Administrator's Guide.
Use the Verify button to check whether your formula syntax is correct. If DB2 OLAP
Server finds an error, it generates a message in the Administration Services
Console message panel. See Example 7-2.
Example 7-3 shows some messages that can be generated in the Administration
Services Modifications folder.
You can use the Administration Server to enable or disable Hybrid Analysis in
your outline and you can also visualize the Hybrid Analysis members in the
Administration Services outline editor. For more information, see 3.5.5, “Hybrid
Analysis and Administration Services” on page 57.
5. Loading data:
You can use Administration Services to load data and to build dimensions in your
DB2 OLAP Server database.
A data load loads data values from a data source into a DB2 OLAP Server
database.
A dimension build loads dimensions and members from a data source into a DB2
OLAP Server outline.
You must have write permissions for the database into which you are loading the
data or members.
1. To load data or to build dimensions in the database using the Administration
Services Console, follow these steps:
a. From Enterprise View or a custom view, select the database in which you
want to work.
b. Right-click and select Load data from the pop-up menu.
c. In the dialog box, Data Load, select one or more data sources.
d. If you are performing a data load, select Load data.
e. If you are performing a dimension build, select Modify outline.
2. You can execute data load and dimension build at the same time.
a. If you are using a rules file, select Use rules and select the rules file.
(Figure callouts: one option enables the data load and another enables the dimension build; the rules file option is only enabled when a rules file is used; another option executes the load in the background; in interactive mode, errors are displayed and DB2 OLAP Server continues the process.)
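As an alternative to the Data Load dialog box, a data load can also be scripted in MaxL; a minimal sketch, assuming a data file and a rules file that already exist on the server (all names are illustrative, and the import grammar should be checked in the MaxL Statements reference):

   import database Onlinei.onlinei data
      from server text data_file 'sales'
      using server rules_file 'ldall'
      on error write to 'dataload.err';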
3. Calculating data:
You can track a calculation process that is executing in the background through
the dialog box, Background process status. For more information, see “Executing
processes in background” on page 342.
You can use this feature to migrate applications and databases in your
Enterprise Services cluster environment, for example, when you are configuring
the Enterprise Services cluster environment (you need to create multiple
copies of an application on different machines, or on the same machine).
The following information is copied with the application during migration using
Wizard Migration:
Any databases and objects you select in the Migration Wizard.
Application and database properties, such as cache settings, with the
exception of disk volumes.
Users and groups, according to the options you select in the Migration
Wizard. Passwords are migrated. After migration, you can edit user and group
properties on the target server without affecting user/group permissions on
the source server.
Filters and their associations. You do not need to re-assign filters to
users/groups after migration.
Linked reporting objects (not available in Alpha release).
Partitions.
The following information is not copied during migration using Wizard Migration:
Data (.pag and .ind files).
Non DB2 OLAP Server files, such as spreadsheet files, text files, MaxL script
files, ESSCMD scripts.
The Essbase configuration file (essbase.cfg).
Disk volumes.
Note: As an alternative for migrating the data (.pag and .ind files), you can use
the IBM Security Migration tool. See 8.2, “Security Migration Tool” on
page 396 for more information.
The error file is only generated if the DB2 OLAP Server VALIDATE command finds
a problem in the database.
If VALIDATE returns errors, revert to a backup that is free of errors.
2. If the database that you are migrating already exists on the target machine
(you are only updating it), back up the databases on the target machine
before migrating. For more information about how to back up a DB2 OLAP
Server database, see the IBM DB2 OLAP Server Administrator’s Guide,
chapter 44, Backing up and restoring data.
In the first Migration Wizard window you select whether you want to perform a
novice or an advanced migration.
Using the novice option, the DB2 OLAP Server users and groups are not
migrated. To migrate users and groups, select Advanced. During a novice
migration, you can select which types of objects to migrate, but you cannot select
individual objects. For example, you can choose to migrate all report scripts
associated with an application, but you cannot select individual report scripts.
We will give you an overview of each panel in the Migration Wizard and how to
use them:
1. In the first Migration Wizard panel you specify the user level (novice or
advanced), the source DB2 OLAP Server, and the target DB2 OLAP Server.
Figure 7-37 shows the first Migration Wizard panel. In this case we are
migrating applications and databases from the DB2 OLAP Server on
the sicily machine (AIX) to the cayman (also AIX) environment. We are using the
advanced option.
4. If you are executing an advanced migration, you will receive three more dialog
boxes where you can:
– Specify how security permissions for users and groups are migrated. The
permissions on the source server are unaffected. The options for
migrating permissions are:
Monitoring performance
Administration Services provides tools to help you monitor and adjust the DB2
OLAP Server caches.
Some of the Administration Services Console tools that you can use to monitor
DB2 OLAP Server performance are:
1. Runtime hit ratios:
You can use the hit ratio values to monitor the usage of the DB2 OLAP Server
caches. The hit ratio of a cache represents the percentage of time that a
requested piece of information is already in the cache.
The Background Process Status window (see Figure 7-43) lists all processes
that are currently running in the background or that have completed. All
background processes are displayed in this list until you manually delete them.
To delete a row from the list, select the row and click Delete.
The DB2 OLAP Server error messages returned are identified by an error
message number. Figure 7-5 shows an OLAP Server error message example.
Notice that a DB2 OLAP Server error number was generated. You can use the
DB2 OLAP Server Error messages manual to get more information about the
error message.
You can execute the following actions with the messages displayed in the
Administration Services Console message panel:
Clear
Select (to copy and paste)
Save (in a text file format)
Print
Send an e-mail with the messages (See “E-mail” on page 357).
To execute these actions, right-click in the message panel. See Figure 7-45.
If you want to close the Administration Services message panel, right-click the
Messages button and select the option Hide. See Figure 7-46.
To see the Administration Services message panel again, select the menu View
and select the Messages check box.
Administration Services provides an easy way to analyze the DB2 OLAP Server
logs and application logs. Using the Administration Services Log Analyzer you
can filter messages by date, by message type, and by message content.
In Figure 7-47, we are filtering the messages for contents where the username
that received the message is olapcay.
In the drop-down list, Show messages, you can choose one of the following
options:
That contains
That does not contain
For user
For application
For database
Figure 7-49 shows the Log Analyzer message folder. The message folder has an
intuitive graphical interface that helps you easily find and understand the DB2
OLAP Server messages and the application messages.
(Figure 7-49 legend: information message, warning message.)
To open the DB2 OLAP Server log chart viewer, go to Enterprise View or a
custom view, right-click the DB2 OLAP Server on which you want to work, and
select the option View log charts.
To open the application log chart viewer, go to Enterprise View or a custom view,
expand the DB2 OLAP Server, expand the applications icon, right-click on the
application on which you want to work, and select the option View log charts.
DB2 OLAP Server version 8.1 provides new messages in the DB2 OLAP Server
messages file (ARBORPATH/Essbase.cfg) that provide more control over
security. The new server log messages include:
You can use Administration Services and Log Analyzer to display these
messages as shown on Figure 7-50.
The MaxL Script Editor in Administration Services provides new features that
help you build MaxL statements interactively in an easy-to-use graphical interface.
To access the MaxL Script Editor in Administration Services, go to Edit —> New
—> folder Scripts —> MaxL Script —> OK.
The tools provided by the MaxL Script Editor in Administration Services are:
Auto-completion feature: when you start typing text in the editor, a list of
possible keywords/values is displayed. After you select the appropriate
keyword/value and press the spacebar to continue, successive drop-down
lists are displayed.
For example, in Figure 7-51 we typed d in the editor, and we were prompted
to select from a list of possible MaxL keywords: display and drop.
If you select display and press the spacebar, you are prompted to select from
a list of additional keywords for display — for example, application.
If you select application and press the spacebar, you are prompted to select
from a list of possible values for display application matching the
applications on the OLAP Server.
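For example, the completed statement might read as follows; a minimal sketch (the application name is illustrative), which lists the properties of that application in the MaxL output:

   display application Onlinei;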
Toolbar:
When you open the MaxL Script Editor, a special toolbar is displayed.
Figure 7-53 shows the functions available in this toolbar.
For more information about the MaxL language, see the DB2 OLAP Server MaxL User’s
Guide; for the complete MaxL syntax reference, see the DB2 OLAP Server
Technical Reference.
Calculation scripts
Using the Calculation Script Editor provided by Administration Services you can
edit your calculation scripts using a customized right-click menu or using a
toolbar. You can check the syntax, search for members or commands and
point-and-click for member selection. The Administration Services Console
Calculation Script Editor also provides color-coding of calculation script syntax,
improving readability.
Calculation scripts specify exactly how you want to calculate a database. For
example, you can calculate only some dimensions of a database or use special
formulas. Calculation scripts can override the database consolidation as defined
in the database outline.
You can associate a calculation script with a specific database or with all
databases in an application. In Enterprise View, a container node for calculation
scripts appears under each application and database.
Using the Calculation Script Editor (see Figure 7-56), you can search for
members or commands and point-and-click for member selection. You can also
type the contents of a calculation script directly into the text area of the
Calculation Script Editor.
There is also a toolbar in the menu for working with the Calculation Script Editor. See
Figure 7-57.
Report scripts
The Report Script Editor in the Administration Services Console provides
features to help you build report scripts quickly. The syntax is color-coded to
improve readability, and members and report commands are displayed in tree
views within the editor to help you insert them into your scripts easily.
Using the Report Script Editor (see Figure 7-58), you can search for members or
commands and point-and-click for member selection. You can also type the
contents of a report script into the text area of the Report Script Editor.
When you open the Report Script Editor, a toolbar is displayed. This toolbar is
the same that is displayed for the Calculation Script Editor. See Figure 7-57 for
more information about the toolbar.
After the execution of a report script in the Report Script Editor, the Report
Viewer is displayed with the results as shown in Figure 7-59.
For more information about the report commands to build a script, see DB2
OLAP Server Technical Reference.
7.4.5 Miscellaneous
Additional miscellaneous features are provided in Administration Services.
E-mail
You can send DB2 OLAP Server information to other administrators via e-mail.
For example, you can send properties information or the content of
Administration Services logs via e-mail to another Administration Services
administrator, adding a comment. To enable this functionality, an outgoing
mail (SMTP) server must be specified on the Administration Server computer.
The SMTP (Simple Mail Transfer Protocol) is a protocol for sending e-mail
messages between servers.
To specify an outgoing mail server, start the Administration Server and in the
Essbase Administration Server window, select the menu View and the option
Configuration. In the E-mail server area, enter the name of the SMTP server.
Access to manuals
From the Administration Services Console Help menu you can easily access the
DB2 OLAP Server manuals:
Calculation functions: DB2 OLAP Server Technical Reference, Essbase
Functions section
Calculation commands: DB2 OLAP Server Technical Reference, Calculation
Commands section
Report commands: DB2 OLAP Server Technical Reference, Report Writer
Commands section
MaxL statements: DB2 OLAP Server Technical Reference, MaxL Statements
section
Configuration settings (essbase.cfg): DB2 OLAP Server Technical Reference,
Essbase Configuration File Settings (essbase.cfg) section
Essbase error messages: DB2 OLAP Server messages help
Database Administrator’s Guide: DB2 OLAP Server Administrator’s Guide
Print properties
In the Administration Services Console you can print the contents of MaxL scripts,
reports, scripts, the message panel, and other windows. To print these contents,
click the object you want to print and select File —> Print.
Table 7-6 contains a checklist you can use to make sure every required step is
being performed.
– Installing DB2 OLAP Server v8.1 (Full OLAP installation or OLAP runtime client): see 7.5.2, “Installing DB2 OLAP Server components” on page 360.
– Checking the ARBORPATH variable: see “Verifying the ARBORPATH variable” on page 361.
If you already have a previous version of an OLAP Server running on the machine
where you are about to install Administration Services, you must migrate it to the
6.5.1 version.
If you attempt to install the Administration Server on a computer that does not
have, at a minimum, Essbase Runtime Client Release 6.5.1 installed, the
Administration Services installation program reminds you to do so before
proceeding with installation.
Full installation
If you want to install the full code of DB2 OLAP Server, follow the installation
procedure described in DB2 OLAP Server Installation Guide.
Runtime client
To install only the runtime client, follow the steps explained in the DB2 OLAP
Server Installation Guide.
If the ARBORPATH environment variable is not set, the installation program uses
a default location for installation that you can modify.
For more information about how to set the ARBORPATH variable, refer to the
DB2 OLAP Server Installation Guide.
During the installation, you will be asked to choose which components to install.
The options are:
Perform a complete installation, which installs the server and the graphical
console in the same machine.
Install only the server component.
Install only the Graphical console.
After this step, you must provide a path for the directory where the product will
be installed, for example C:\IBM\db2olap\eas. At the end, you need to choose whether you
want to create a shortcut for Administration Services.
Figure 7-62 shows the Administration Server window. You will receive a message
like the one shown in the figure. Leave the window open.
Now look at the Change View tabs in the bottom part of the screen. These tabs
enable you to see different kinds of information:
Log tab: All the messages from the server can be seen on this screen.
Configuration tab: These are the Administration Services default
configuration parameters, such as the port on which Administration Services listens,
the memory usage parameter, and the e-mail server used to send notification
e-mails.
Environment tab: This is the Java environment information: the classes and
their values.
You can change the view by just clicking any of the tabs.
Then choose a directory where to store the log file, for example,
C:\IBM\db2olap\eas\logadmserver. See Figure 7-64.
You can also clear all the messages from the screen by using the
Clear Log button.
To stop the Administration Server:
a. From the Essbase Administration Server window, select Server —>
Stop Administration Server (See Figure 7-65) or click the stop server
button. You will receive a message like the following in the server screen:
Stopping Hyperion Essbase Administration Server...
Current time stamp: Mon Jun 03 11:10:35 PDT 2002
Stopping service Tomcat-Standalone
The default user name is “admin” and the default password is “password”.
The graphical console looks like Figure 7-67. You can edit the console in many
ways. You can change the console look and feel, and adapt it to your
preferences, and you can create your own enterprise views, to arrange the
objects you want to administer, as you prefer. For more information about
changing the console, see 7.5.7, “Creating views” on page 385.
The graphical console has many panels. Here we provide a brief description of
each one:
Menu Bar: Menus and menu items change dynamically depending on your
current focus. Clicking each of the options takes you to the required action.
Most of the menu bar options are also available in right-click “context” menus.
Console Toolbar: The main toolbar in the console provides quick access to
commonly used commands. The toolbar changes dynamically, depending on
what you have open in the console.
Administration Services users are not the same as the DB2 OLAP Server users.
You only need to create users on the Administration Server if they will be using
the Administration Services Console to manage DB2 OLAP Server. The existing
DB2 OLAP Server users cannot use Administration Services until they
have been created as users on the Administration Server.
g. Click Next. If you want to associate the user with the OLAP Servers he will
access, complete the information required. To complete this step, refer to
“Associating users with OLAP Servers” on page 378.
h. If you do not want to associate an OLAP Server, click Next again to
confirm the creation of the user.
i. In the next screen you have the option to create another user by checking
the box, Create another user.
j. Press Finish.
You should see the created user in the Navigation Panel. See Figure 7-70.
Now the user has been added in the Navigation Panel (Figure 7-73).
When you log into Administration Services, the Administration Server handles
your connections to DB2 OLAP Servers, applications, and databases. You do not
need to connect to individual DB2 OLAP Servers after your initial login to
Administration Services.
When defining which DB2 OLAP Server to associate with, you must specify the
name of the server and the user you want to use to connect to the OLAP Server. This
capability enables Administration Services users to connect to various DB2
OLAP Servers using a single sign-on.
Figure 7-74 shows an example of single sign-on. The OLAP administrator
has two DB2 OLAP Servers to manage (A and B) with different users (olapcay,
olapsi). This administrator can create a single user in Administration Services
(user=ibmuser) and specify that the user “ibmuser” will have access to
DB2 OLAP Server A (as the user olapcay) and DB2 OLAP Server B (as the
user olapsi).
(Figure 7-74: the Administration Services user ibmuser connects to the application and database Onlinei as user olapcay, and to the application and database ebank as user olapsi; both userids are OLAP Supervisors.)
To access the DB2 OLAP Server on localhost, Administration Services uses the
username “db2admin”. To access the DB2 OLAP Server on Sicily machine,
Administration Services uses the username “olapsi”. To access the DB2 OLAP
Server on the cayman machine, Administration Services uses the username
“olapcay”.
(Figure callouts: three different DB2 OLAP Servers; the hostname of the Administration Server.)
The above is an example of single sign-on. The user ibmuser connects only
once to the Administration Services Console, using his username and password,
and the Administration Server connects him to all the OLAP Servers he is
associated with.
The access that ibmuser has to applications in Administration Services is based
on the rights that the associated userid has on the DB2 OLAP Server. For example, in
Figure 7-77, if the DB2 OLAP Server userid “olapcay” is a supervisor on the DB2
OLAP Server, the user “ibmuser” (an Administration Services user) can execute all
operations and commands on the OLAP Server Cayman. But if the userid
“olapcay” is an ordinary user, it can only execute operations and commands
allowed by its privileges.
If you want to add more OLAP Servers to the user ibmuser, follow the
steps explained above.
After defining the DB2 OLAP Servers, the Enterprise view in the Navigation
Panel is updated with the DB2 OLAP Servers. Figure 7-78 shows the OLAP
Server for the user ibmuser.
5. Then press Apply to apply the changes or Revert to undo the changes.
6. Click Close.
It is recommended to do this after creating the users, but before starting to work
with Administration Services. Then, you can revert to the backup directory if you
have problems.
The view of the OLAP environment may look different from one administrator to
another. Administrators can customize the way they see the objects they are
administering by creating their own custom views.
Attention: You cannot delete objects from the Enterprise View without
deleting them from the DB2 OLAP Servers they belong to. You can remove an
entire OLAP Server from the Enterprise View.
To create a custom view, choose the object to include in the custom view:
1. Right-click and select Add to —>New custom view from the pop-up menu.
2. Drag the object to the empty space next to Enterprise View tab at the bottom
of the Navigation Panel.
The console creates a new tab called by default MyView1. This new tab contains
the object you selected, and everything under that object (see Figure 7-81).
In the new custom view, you can see the object you selected and all the objects
under it, like the applications, databases, function, etc.
The console adds the object to the view as the last item in the tree.
You can also modify the look-and-feel of the console. Go to Tools —>Console
options and change the settings you want, like the font or the look-and-feel of
the windows.
Buffered I/O and direct I/O are the two options you have to configure how DB2
OLAP Server allocates the buffer cache used to store the compressed blocks.
When you are using direct I/O, DB2 OLAP Server uses an area in RAM,
defined by the data file cache parameter, to store the compressed data (.pag files).
When you are using buffered I/O, DB2 OLAP Server does not use the data file
cache area in RAM; it uses the file system buffer cache of the operating system. If you
are not using direct I/O, the data file cache is not used.
In buffered I/O access mode, the operating system is responsible for caching the
compressed DB2 OLAP Server data blocks; in direct I/O access mode, DB2
OLAP Server itself is responsible for caching the compressed data blocks (in the data file cache).
Table 8-1 shows the platforms on which DB2 OLAP Server version 8.1 supports
direct I/O.
– AIX: Supported
The default buffer cache configuration option (buffered I/O or direct I/O), and the
way to configure it, vary according to the DB2 OLAP Server version. See Table 8-2
for more information:
– Version 1, 1.0.1, or 1.1: buffered I/O by default. Direct I/O is not available for these releases.
– Version 7.1 up to fixpack 7: direct I/O by default. Buffered I/O is not available for these releases.
– Version 7.1 fixpack 8 and later: buffered I/O by default. You can change DB2 OLAP Server to use direct I/O using the DIRECTIO TRUE setting in essbase.cfg. This change affects all databases in the DB2 OLAP Server installation.
– Version 8.1: buffered I/O by default. You can change specific databases to use direct I/O. You can also change essbase.cfg (DIRECTIO TRUE) for the whole DB2 OLAP Server if you want.
When migrating from previous versions of DB2 OLAP Server to DB2 OLAP
Server version 8.1, the databases will use the buffered I/O configuration, which is the
default for this new version (if you did not define the DIRECTIO TRUE setting in
essbase.cfg). If you want to use direct I/O for a database migrated from a previous
version, you should change the database properties for that database.
8.1.4 Changing the I/O access mode (buffered I/O or direct I/O)
With DB2 OLAP Server version 8.1, the I/O access mode is a database setting.
To change the I/O access mode you can change the property for a specific
database:
Changing the I/O access mode using the Application Manager:
a. Connect to the DB2 OLAP Server.
b. Select the database whose I/O access mode you want to change.
c. Go to Database —> Settings —> Storage tab.
d. Change the access mode to buffered I/O or direct I/O.
Changing the I/O access mode using MaxL:
a. login <username> <password> on <db2olap server hostname>;
b. alter database <appname.dbname> set io_access_mode <direct or
buffered>;
Example:
login olapsup olapsup1 on localhost;
alter database Onlinei.onlinei set io_access_mode direct;
Changing the I/O access mode using ESSCMD:
a. login <username> <password> on <db2olap server hostname>;
b. setdbstateitem 28 n (1=buffered, 2=direct)
Example:
setdbstateitem 28 2
After changing the I/O access mode option, it will only take effect the next time
the database is stopped and started.
Example 8-1 shows the output of the getdbstate command after changing the
I/O access mode from buffered I/O to direct I/O without stopping and starting the
database.
Example 8-1 Changing I/O access mode without restarting the database
Application name >onlinei
Database name >onlinei
---------Database State---------
I/O Access Mode (pending) : Direct
I/O Access Mode (in use) : Buffered
Direct I/O Type (in use) : N/A
Example 8-2 shows the output of the getdbstate after changing the I/O access
mode from buffered I/O to direct I/O and after stopping and starting the database.
Example 8-2 Changing the I/O access mode and restarting the database
Application name >onlinei
Database name >onlinei
---------Database State---------
I/O Access Mode (pending) : Direct
I/O Access Mode (in use) : Direct
Direct I/O Type (in use) : No Wait
You can also change the I/O access mode for the whole DB2 OLAP Server, using the
DIRECTIO TRUE setting in the essbase.cfg file on the server. After changing this
setting, you should restart the DB2 OLAP Server service for the change to take
effect.
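For example, the server-wide setting would be a single line in essbase.cfg, along the lines of this sketch:

   ; ARBORPATH\bin\essbase.cfg - use direct I/O for the whole server
   DIRECTIO TRUE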
Enabling this feature may improve performance because the system memory
manager does not need to swap and reserve space for the memory used by the DB2
OLAP Server caches. Cache memory locking gives the DB2 OLAP Server kernel
priority use of system RAM.
Important: Cache memory locking can be used only if direct I/O is used as
the input/output setting for the database.
Changing the cache memory locking option using the Application Manager:
a. Connect to the DB2 OLAP Server.
b. Select the database for which you want to change the cache memory locking option.
c. Go to Database —> Settings —> Storage tab.
d. Select the Cache memory locking check box.
8.1.6 Defining the DB2 OLAP Server caches when using direct I/O
The size of the index cache and the data file cache (when direct I/O is used) are
the most critical DB2 OLAP Server cache settings. In general, the larger these
caches, the less swapping activity occurs; however, it does not always help
performance to set cache sizes larger and larger. You should monitor your system
activity and make sure that the machine is not paging.
When direct I/O is used, DB2 OLAP Server allocates memory to the data file
cache during data load, calculation, and retrieval operations, as needed. The
data file cache is not used with buffered I/O. How much of the data within data
files can fit into memory at one time depends on the amount of memory you
allocate to the data file cache.
To fine-tune the DB2 OLAP Server caches, use the DB2 OLAP Server
hit ratios (see “DB2 OLAP Server hit ratios” on page 264 for more information)
and refer to the IBM DB2 OLAP Server 8.1 Database Administrator’s Guide,
SC18-7001.
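If you prefer to script cache settings instead of using the console, MaxL statements along these lines can be used; a hedged sketch (the database name and sizes are illustrative and should be derived from your own hit-ratio monitoring):

   alter database Onlinei.onlinei set index_cache_size 20971520;     /* 20 MB */
   alter database Onlinei.onlinei set data_file_cache_size 33554432; /* 32 MB, used only with direct I/O */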
Database files (such as reports, outlines, rules files, and data files) that are located
in the database directory (ARBORPATH\app\<appname>\<dbname>)
You can migrate from one OLAP Server (IBM DB2 OLAP Server or Hyperion
Essbase Server) version to another. Table 8-3 shows the versions of DB2 OLAP
Server supported.
Table 8-3 IBM DB2 OLAP Server and Hyperion Essbase correspondences
DB2 OLAP Server version Corresponds with
The platforms supported by Security Migration Tool are the platforms supported
by DB2 OLAP Server, on both UNIX and Windows environments.
The Security Migration Tool specific directory contains the following files for
Windows:
– EssbaseXX.mdb
– MigReadme.html
– secmainXX.dll
– secmgr.exe
3. Enter the server login information for each of the servers and select the
security options to migrate.
If you are migrating the associated data directories, and migrating to or from a
UNIX server where the OLAP directory is not available on the filesystem,
enter the OLAP path in the UNIX format (as shown in Figure 8-4). If a server
is not available on the filesystem, the Security Migration Tool will use FTP to
copy supporting data files.
4. By choosing Yes as shown in Figure 8-5, you can also specify specific
applications to migrate.
5. If you chose to select applications to migrate, the Security Migration Tool will
list the applications available for migration as shown in Figure 8-6.
6. Click the Run the Migration button shown in Figure 8-7 to start the migration.
8. The following log files are created by this migration option and are stored in
the current Security Migration Tool directory:
– The connect.log shows the connection information for the servers
accessed (see the example in Figure 8-9).
3. Enter the information for the server to retrieve OLAP security data from as
shown in Figure 8-15.
5. If you chose to select applications to migrate, the Security Migration Tool will
list the applications available for migration as shown in Figure 8-17.
Note: The directory that contains the source server data files to migrate
must be on the filesystem. If the source server directory is not available on
the filesystem, then copy the entire /app directory of the source server onto
the filesystem. In this example, the /app directory was copied into
e:\temp\db2olap.
4. By choosing Yes as shown in Figure 8-22, the applications that were
copied into the data files will be available for selection.
6. When the migration ends, the results are displayed as shown in Figure 8-24.
7. The following log files are created by this migration option and are stored in
the current Security Migration Tool directory (see the examples in step 8 on page 401):
This new feature is useful for researching new systems without the need to use a
different server computer. For example, you can have two DB2 OLAP Servers on
the same machine with different fixpack levels, for test purposes.
Important: It is not recommended to use more than one agent per computer
in production environments. This feature should be used in development and
test environments.
For each DB2 OLAP Server agent you want to run on the same machine, an
installation of DB2 OLAP Server in its own path is necessary (one DB2 OLAP Server installation per agent).
To configure a second DB2 OLAP Server agent on a UNIX machine that already
has DB2 OLAP Server installed, follow these steps:
1. Create a new user id to run the new DB2 OLAP Server agent installation.
2. Stop the ESSBASE service (the previous DB2 OLAP Server installation).
3. Execute the DB2 OLAP Server installation process in a different path. See the DB2 OLAP
Server Installation Guide, SC27-1228, on installing in AIX, Solaris, and HP-UX
operating environments for the procedure to install DB2 OLAP Server.
4. Create the necessary DB2 OLAP environment variables in the profile for the
new userid. To facilitate the process you may want to copy the profile from the
old DB2 OLAP Server installation user and change the paths for the new
installation. Don’t forget to change the ARBORPATH environment variable (in
the profile of the user created in the first step) so that it points to the new installation
path.
5. Assign the correct privileges in the installation path to the new user id created
in the first step. The new user id should be the owner of the new installation
directory, subdirectories and files. If you used this userid in the installation
process it is already the owner of the installation directory, subdirectories and
files.
For example, on the AIX operating system you can execute the following
command to change the owner of the installation directory:
– chown <new olap user id> <new olap installation path>
And you can use the following commands in AIX to change the owner of the
subdirectories and files in the installation path:
– cd <new olap installation path>
– chown -R <new olap userid> *
6. In the $ARBORPATH/bin directory of the new installation, create or modify the
server configuration file essbase.cfg. It should contain these settings (see the sketch after this list):
– AGENTPORT: The port that this DB2 OLAP Server installation (agent) will use
to connect. The default is 1423.
– SERVERPORTBEGIN: The first port that the first process will try to use to
connect (connections to the DB2 OLAP Server). The default is 32768.
– SERVERPORTEND: The highest value for a port number that a process can
use to connect. The default is 33768.
– PORTINC: The increment between ports. For example, if SERVERPORTBEGIN is
32700 and PORTINC is assigned a value of 5, DB2 OLAP Server will look for ports 32700, 32705,
and so on up to the value of SERVERPORTEND. The default is 1.
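For example, the essbase.cfg of the second installation might look like this sketch; the port values are illustrative and must not overlap those used by the first agent:

   ; essbase.cfg for the second DB2 OLAP Server agent (illustrative values)
   AGENTPORT       2423
   SERVERPORTBEGIN 33769
   SERVERPORTEND   34769
   PORTINC         1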
The cayman machine runs a UNIX operating system, AIX 4.3.3. The old DB2 OLAP Server
installation has the following characteristics:
– DB2 OLAP Server version: 8.1
– Installation path: /olap81
– AIX userid that has the DB2 OLAP Server environment variables
configured in the profile (.profile): olapcay.
– Profile of olapcay user (/home/olapcay/.profile) as shown in Example 8-3.
if [ -f /olap81/essbaseenv.sh ]; then
. /olap81/essbaseenv.sh
fi
if [ -f /olap81/essjava.sh ]; then
ask 'Which userprofile will you use, DB2 OLAP Server or DB2 OLAP Integration
Server?' ess is
case "$input" in
ess*|ESS*|e*|E*) c=/olap81/essjava.sh ;;
*) c=/olap81/is/hisjava.sh ;;
esac
. $c
fi
This profile uses if commands to ask whether you would like to set the DB2 OLAP
Server environment variables or the Integration Server environment variables. To
set the ESSBASE environment variables, type ESS (this should be used before starting
the ESSBASE service). To set the Integration Server environment variables,
type IS (this should be used before starting the Integration Server service, olapisvr).
This is required because you should not set the Integration Server environment
variables (to work with Java) and the DB2 OLAP Server environment variables
at the same time. If you do, you can receive errors (DB2 OLAP Server
exceptions) when trying to select a database in DB2 OLAP Server. Another
alternative is to create two different users: one configured with the DB2 OLAP
Server environment variables and another configured with the Integration Server
environment variables.
if [ -f /olap81b/essbaseenv.sh ]; then
. /olap81b/essbaseenv.sh
fi
if [ -f /olap81b/essjava.sh ]; then
ask 'Which userprofile will you use, DB2 OLAP Server or DB2 OLAP Integration
Server?' ess is
case "$input" in
ess*|ESS*|e*|E*) c=/olap81b/essjava.sh ;;
*) c=/olap81b/is/hisjava.sh ;;
esac
. $c
fi
Using the Application Manager on this DB2 OLAP Windows client, we could
connect to the new DB2 OLAP Server installation:
server —> cayman
username —> olapcayb
We could see only the applications of the old DB2 OLAP Server
installation, including the applications onlinei and e-bank.
The Java Runtime Environment (JRE) is required by DB2 OLAP Server to enable
Java-based features, such as CDF. The JRE must be installed on the computer
running the DB2 OLAP Server component.
The DB2 OLAP Server installation program copies the files to your workstation.
To complete the installation of Java, you need to update your PATH statement:
On Windows 2000 and Windows XP, add this variable to your PATH statement:
%JREHOME%\bin
CPU considerations
CDF can significantly slow down calculations. The functions are loaded when a
DB2 OLAP Server application is started and are processed by the Java
Virtual Machine (JVM). Since there is a handover between DB2 OLAP Server
and the JVM, it slows down calculation execution. CDF are typically 1.5 to 2
times slower than native DB2 OLAP Server calculator functions, even when the
CDF is developed properly.
A main recommendation is to limit the use of CDF to functions that you
cannot perform with the native calculator functions, particularly if the calculation speed of
the applications is a critical consideration.
Memory considerations
Running DB2 OLAP Server with the Java Virtual Machine (JVM) and the Java
API for XML Parsing, which is part of the general DB2 OLAP Server installation,
has an initial effect on the memory required, and this has to be checked. The memory
requirements for these additional components are approximately 10 MB per
application.
Beyond these start-up memory requirements, the Java programs you develop for
CDF sometimes require additional memory. When started, the JVM for Win32
operating systems immediately allocates 2 MB of memory for programs. This
allocation is increased according to the requirements of the programs that are
then run by the JVM. The default upper limit of memory allocation for the JVM on
Considering the default memory requirements of the JVM and the limitations of
the hardware on which you run servers, carefully monitor your use of memory. In
particular, developers of CDF should be careful not to exceed memory limits of
the JVM when creating large objects within CDF.
If you install Java but are not using CDF or custom defined macros, you can
reduce your startup memory requirement by disabling Java. To disable Java,
remove all parameters from the JVMMODULELOCATION setting in the
essbase.cfg file, or comment out the statement by placing a ; (semicolon) in
front of it, as shown in Example 8-5.
The essbase.cfg file is stored in the ARBORPATH\bin directory (see Example 8-5).
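A minimal sketch of the commented-out setting (the JVM library path is illustrative and depends on where the JRE is installed; compare Example 8-5):

   ; Java disabled: the JVMMODULELOCATION setting is commented out
   ; JVMMODULELOCATION C:\IBM\db2olap\java\jre\bin\classic\jvm.dll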
You can create more than one method in a class for use as a CDF. In general, it is
recommended that you create all the methods you want to use as CDF in a
single class. However, if you want to add new CDF that are not going to be used
When creating multiple Java classes that contain methods for use as CDF, verify
that each class name is unique. Duplicate class names will cause methods in the
duplicate class not to be recognized, and you will be unable to register those
methods as CDF.
After creating the Java classes and methods for CDF, test them using test
programs in Java. When you are satisfied with the output of the methods, install
them on DB2 OLAP Server and register them in a single test application - see
“Registering a CDF Using MaxL” on page 426 or “Using Application Manager” on
page 429. We recommend that you do not register functions globally for testing.
Instead, start by registering the CDF in a single test application. Afterwards you
can register the functions globally if necessary.
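For example, a small throwaway test program (hypothetical name) can exercise the method directly in the JVM before the class is installed on the server:

public class CalcFuncTest {
    public static void main(String[] args) {
        // Call the CDF method directly and compare the output with the
        // result you expect from the calculation script.
        double[] sample = { 10.0, 20.0, 12.5 };
        System.out.println("CalcFunc.sum = " + CalcFunc.sum(sample)); // expected: 42.5
    }
}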
A class method may be registered with the DB2 OLAP Server calculator
framework under many different names, whereas function names in the
framework must be unique within a particular application. Locally defined
functions within an application may have the same names as global (system
scope) functions; in this case, the local function overrides the global function.
If the specified Java class contains more than one method with the same name,
you must specify the signature for the method (if the signature is not specified,
the JVM uses the first method with this name). The signature is a specification of
the types of the input parameters of a method, as well as of its return type. If only
one method exists with the specified name in the specified class, the calculator
framework automatically determines the signature of the method using the Java
Reflection API. Otherwise, it automatically determines the return type but
requires you to specify the input types.
The signatures of the Java functions used as CDF must use only the following
types: the primitive Java types (boolean, byte, char, short, int, long, float, and
double), java.lang.String, CalcBoolean, and arrays of these types.
In addition to the specified types, the return type may also be of type void. All of
the primitive Java types are mapped to the NUMBER type in the DB2 OLAP Server
calculator framework, Strings are mapped to the STRING type, and CalcBoolean
is mapped to the BOOLEAN type.
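As an illustration of why the signature matters, consider the following sketch with hypothetical class and method names, in which the method name total is overloaded; when registering the second variant you would have to specify its input types so that the calculator framework does not simply pick the first method with that name:

public class StatsFunc {
    // First variant: sums a list of values.
    public static double total(double[] values) {
        double sum = 0.0d;
        for (int i = 0; i < values.length; i++) {
            sum += values[i];
        }
        return sum;
    }

    // Second variant with the same name: sums the values and adds an offset.
    // Because the name is overloaded, the input types (double[], double)
    // must be specified when this variant is registered as a CDF.
    public static double total(double[] values, double offset) {
        return total(values) + offset;
    }
}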
To provide you with the means to perform arithmetic operations such as add,
subtract, divide, multiply, and compare on double precision values, you can use
the com.essbase.calculator.Calculator class. This class has rules to deal with
missing values and uses the same logic as the calculator framework.
Because the conversion between the NUMBER type and the primitive Java types
(other than double) can lose precision, you should be cautious when choosing the
functions' input parameter types. Using double as the analog of the NUMBER
type is highly recommended, especially since calls from the calculator framework
to functions that take only double parameters are highly optimized.
The Java code developer must be sure that the double values that are returned
to the DB2 OLAP Server calculator framework from the JVM are finite (or
#MISSING). Some tools might not correctly display values that are either infinite
or NaN (not a number). The programmer is responsible for ensuring that this rule
is observed. The Calculator class may be helpful in treating various functions and
operators according to DB2 OLAP Server calculator rules (no calculation in DB2
OLAP Server ever produces infinite values or NaNs; in cases like division by zero
or the square root of a negative number, the result is always #MISSING).
In the case when the input NUMBER is #MISSING, a special MISSING value for
the Java primitive type is generated (this value is specific to each type) with the
exception of Boolean, in which case #MISSING is mapped to FALSE. This is why
it is recommended to use CalcBoolean for passing Boolean values to the JVM.
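The following sketch (hypothetical class and method names) illustrates the finite-result rule: non-finite intermediate values are skipped so that only a finite double is handed back to the calculator framework. Returning a true #MISSING result, or a Boolean result, would instead be done through the Calculator and CalcBoolean classes described above.

public class SafeFunc {
    // Returns the average of the finite values in the input array.
    // NaN and infinite values are skipped so that the value returned
    // to the DB2 OLAP Server calculator framework is always finite.
    public static double finiteAverage(double[] data) {
        double total = 0.0d;
        int count = 0;
        for (int i = 0; i < data.length; i++) {
            if (!Double.isNaN(data[i]) && !Double.isInfinite(data[i])) {
                total += data[i];
                count++;
            }
        }
        return (count == 0) ? 0.0d : total / count;
    }
}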
Note: Be sure that the path to the Java compiler (javac.exe) is defined properly.
As with the compiler, the path to jar.exe also has to be defined properly.
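For example, assuming the CalcFunc class sketched earlier is saved as CalcFunc.java, a typical compile-and-package sequence at a Windows command prompt might be:
javac CalcFunc.java
jar cf calcfunc.jar CalcFunc.class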
Registering a CDF
Before you are able to use a CDF within DB2 OLAP Server, it has to be
registered.
Registering requirements
Registering requires that you have the appropriate security permissions and
that the CDF is correctly named.
Security permissions
To register a CDF on DB2 OLAP Server requires that you have one of these
security permissions:
Application designer or higher to create, delete and manage CDF on an
application level (local).
Supervisor to create, delete and manage CDF on server level (global).
When you register a CDF in DB2 OLAP Server, you give the function a name.
This name is used in calculation scripts and formulas and is distinct from the
Java class and method name used by the function.
If a DB2 OLAP Server application contains a local function that has the same
name as a global function, the local function is used for calculation.
The prefix Sample. before the name of the function assigns the CDF only to
the application Sample, so the function will be available only within that
application.
If the application is not loaded, the CDF becomes available at the next load.
The refresh statement re-reads the CDF on the Agent and associates functions
created since the last refresh (or since the last time the application was
restarted) with the specified application. Invalidly defined functions are not
loaded into the application.
To refresh global definitions, issue the statement separately for each application
on the DB2 OLAP Server, or stop and restart the DB2 OLAP Server.
Validation occurs at the application level only, during the refresh and not during
creation. There is no validation on the system level.
You can combine the login, the create function statement, and the refresh
statement in one single MaxL script. Remember to end each statement with a
semicolon. Example 8-12 shows how registering the CalcFunc can be done with
a MaxL script.
If the file in the example is stored as regcalcfunc.txt, you can execute it from the
operating system's command prompt by typing:
essmsh regcalcfunc.txt
To manage your CDF you have several MaxL statements available. We will briefly
describe these statements and give examples of their use.
Deleting a CDF
To delete a CDF, you can use the MaxL statement:
drop function function-name;
If you want to delete the function @JSUM in the application Sample, the
statement looks like Example 8-13.
If it is a global CDF you do not prefix the function with the application name.
As with registering a CDF, the drop function statement needs a refresh of the
application, or a stop and restart, to take effect; see the section about
registering a CDF for further details.
Displaying a CDF
You can display CDF or macros in four different ways, depending on which syntax
you use in the display statement.
The all keyword used in the display statement displays all custom definitions,
including those registered on the application level (local) and on the system level
(global).
Specifying the function or macro name in the display function or display macro
statement displays only the named custom definition.
If you want to see all functions registered to the application Sample, the
command looks like Example 8-15.
4. Select a server to which you are connected, where the CDF will be used. If it
is a global CDF you are about to register, select the value <global> in the
application selection box or keep it as it is, since you can change that in the
next window.
If the CDF is not global, use the Application selection box to change to the
application where the CDF will be used. You are not able to change to any
other application further on in the process.
– Click New to register a new CDF, and DB2 OLAP Server displays the
Custom Defined Functions Editor, as shown in Figure 8-26.
– In the Scope selection box, select the appropriate scope for the CDF:
<Global> if the CDF will be available for all applications on the server, or
the application name you chose in the Custom Defined Functions manager
if the CDF will be available for only that one application.
– If the CDF has to be available for more than one application, but not globally,
you have to register the function in each application where it is needed,
one by one. In this example we register the CDF in the application Sample.
Note: There is no check that the Java class or method exists. The way to
see whether a function is valid and available for use in calculations or formulas
is described in “Verifying a CDF” on page 433.
Verifying a CDF
After registering a CDF, you can determine whether a function has been
registered successfully and whether it is registered locally or globally. Until you
do a refresh, or stop and restart the application, the state of the function remains
unknown. If it is a global function, the server has to be stopped and restarted.
Tip: DB2 OLAP Server cannot determine whether the function defined in
Java is active. You may need to refresh the CDF catalog, or shut down and
restart the application or applications, to tell whether the function is loaded
and active.
In the lower left corner of the Outline editor you can choose which member
the formula has to be calculated on. In the figure above we have chosen to let
the formula calculate on Market/East/New York.
4. Choose Formula->Paste functions to get a list of function templates, as
shown in Figure 8-30.
The function template screen is split into two parts, the categories selection
window and the template selection window. The template selection window
shows the available functions in the category you choose.
CDF is the last entry in the categories.
5. Choose the CDF to use, for example, @JSUM in Figure 8-31.
You can either type in the functions or use the buttons at the top of the screen.
6. Before you save and close the Formula editor, use the verification button to
verify your formula.
7. Resolve any errors, save the formula, and close the Formula editor. The
formula has now been attached to the member “New York” as shown in
Figure 8-32.
Part 5 Appendixes
In this part of the book, we supply several appendixes containing the following
supplementary material:
Appendix A, “DB2 OLAP Integration Server” on page 441
Appendix B, “Setting up DB2 OLAP Analyzer Analysis Server V8.1” on
page 463
Appendix C, “Enterprise Services sample programs” on page 489
Appendix D, “Data modeling basics” on page 495
The functionality covers new features available from DB2 OLAP Server Version
7.1 FixPak 7 and DB2 OLAP Server Version 8.1.
Among the primary enhancements are improved performance and flexibility.
For Integration Server, this means that you get:
Performance:
– A better SQL engine:
• Options for customizing the SQL that Integration Server generates
– A better “group by” statement
– Removal of joins:
• No more redundant joins
• Intelligent removal of joins
– Parallel data load:
• Fetch data
• Transform data
• Send data to DB2 OLAP Server
• All in parallel
Flexibility:
– The SQL Override function lets you override the SQL generated by
Integration Server with your own SQL, including ODBC SQL, native SQL,
and stored procedures.
– You can write and use single or multiple data load SQL.
– You can now use drill-through SQL or template SQL to override the
automatically generated SQL associated with drill-through reports.
The new Hybrid Analysis functionality enables the OLAP application to have part
of the data loaded and calculated in the multidimensional store, while other data
resides in a relational database. This gives some clear advantages, for
example:
Increased scalability
Complement to Integration Server Drill-Through Reports
A whole new organization of data sources, and support for multiple data sources,
now gives a better overview of databases, views, and users.
Formula validation
Integration Server new configuration file — eis.cfg — now enables you to apply
settings to the Integration Server. This is covered in A.2.1, “New configuration file
for Integration Server” on page 444.
Merant 4.0 drivers are now shipped with Integration Server to enable better
access to the different ODBC data sources.
With this new version, DB2 OLAP Server is now more than simply a cube
builder, and it assumes more function with each release. With Version 8.1 of
DB2 OLAP Server you can build applications split between multidimensional and
relational storage, source data from multiple relational databases, and customize
your Business Intelligence environment so that it fits the business users’ needs.
Integration Server reads this file, eis.cfg, which in turn automatically sets the
configurations for you. This process eliminates the need to enter configurations
manually each time you perform a member or data load.
For example, an eis.cfg file can contain settings such as:
[B]=100
[C]=10000
[E]=C:\IBM\db2olap\logdir\
To use the new data source, simply drag and drop tables from the source in the
same way as you do for the original data source.
Important: You cannot join tables from different sources to anything other
than the fact table.
In the Data Source Properties window you get information about which data
source you are accessing (Name), the User Name, the Data Source Type, whether
you are connected or not (Connection Status), whether the data source is in use or
not (In Use Status), and the dimensions in use from the specific data source.
In the Dimension Properties you see all available data sources in the Data Source
drop-down listing. Be aware that choosing another data source may affect your
ability to access data if no equivalent table exists in the new data source.
The Dimension Properties dialog has been extended to let you see, and select
where applicable, the data sources available:
Data Sources are listed in the left portion of the Model tool, as shown in
Figure A-7.
Within a data source, relational objects are categorized and grouped by
schema/owner:
– Tables
– Views
– Synonyms
It was always hard to remember where the views were or how to display them.
Now the data source listing at the left side has been reorganized and shows
tables, views, and synonyms for all data sources.
Navigation has improved significantly, and you now have easier access to browse
the different data sources. The options to show tables, views, and synonyms
have been removed from the View menu.
Update Drill-through Data is a new function, which is found in the Outline menu.
There is no need to do a member or data load. The dialog is similar to that for
member and data load.
By creating a “dummy” OLAP cube, the verification function tests the formula
against the cube for any possible syntax or structural irregularities.
You can now use the power of drill-through reporting without having first created
your cube using Integration Services. This means that existing cubes, such as
those created by Application Manager, can be extended to allow greater data
reach, including detail data, non-numeric name lists, and so on.
Figure A-14 Start with the Application Manager outline for drill through reports
2. Then you create a simplified model using Integration Server. The model only
needs to contain the fact table and the dimensions that will be used at the
drill-through reports’ OLAP intersection level. In Figure A-14, we see just a
few tables used.
3. Then you use Integration Server Metaoutline to create a metaoutline. The
metaoutline only needs to contain the dimensions that will be used at the
drill-through reports’ OLAP intersection level. Hierarchies need only contain
the lowest level at which the OLAP intersection level starts. Make sure all
transformations are built into the hierarchies.
4. Finally, you create drill-through reports in the normal way, as shown in
Figure A-15.
This support eliminates the need for database client software and thus
significantly facilitates the installation of Integration Services on the UNIX
operating system. Support of Oracle native drivers also enhances performance
on UNIX platforms.
Using the File menu, you can choose to Export an existing model or metaoutline
as an XML file (see Figure A-16), or import a model/metaoutline from an existing
XML file. This could be a backup version of an existing object, or one created on
another system and sent to you.
XML is human readable and can be edited with any text editor or a specialized
XML editor.
Clicking the icons of the major steps displays text in the area below. The left pane
has a list of actions, and clicking one of these displays details in the right pane.
Sometimes an action can be launched from the right panel, for example, to
auto-detect a fact table from the current data source.
Important: Analyzer V8.1 cannot use the repository from the previous version.
Table B-2 shows the different compatibilities and dependencies required to run
Analyzer V8.1.
An HTML-based client is also available. Because it is pure HTML, it delivers much
better and faster performance than the Java clients. The HTML client supports
forms and allows users to work interactively with data.
On the Analyzer v8.1 Refresh Server Windows CD-ROM, under the folder called
websphere35\Nt, double-click on setup.exe to start the installation of WebSphere
Application Server 3.5 base code.
6. For Database options, choose DB2 for Database Type, then fill in the
pertinent information for DB2 and click Next (see Figure B-7).
Note: If this zip file is not present, please go to the following URL to
download the zip file. The zip file is dated Oct 31, 2001 with a file size of
57371 KB:
ftp://ftp.software.ibm.com/software/websphere/appserv/support/fixpacks/
was35/fixpack5/WIN/
To start up the IBM HTTP Server, go to Start -> Program -> IBM HTTP Server
-> Start HTTP Server.
The following list provides an overview of the components needed for running the
installation procedure:
Setup_win32.exe: the setup executable
Setup_win32_debug.cmd: launches the setup log console
Setup.jar: installation code
Jniutil.dll: specific to Windows
hyperion\analyzer directory: Hyperion Analyzer ADM driver files
\jdk1.3_win32: Java Runtime Environment for the Installation
\was35_std_ptf_5.zip: Fixpack 5 for WebSphere 3.5
\websphere35 : WebSphere 3.5 installation
\db2: DB2 UDB Personal Edition V7.2
\xml_config: XML files to deploy Hyperion Analyzer into WebSphere
\documentation
\java_plug-in_1_3_0_02: java plug-in 1.3.0.02 installation
\repository: JDBC and SQL scripts
If DB2 UDB Enterprise Edition is on a remote machine (see Figure B-11), you
need to create the database ANALYZ60 first and authorize the user Analyzer
to this database.
Then pick the DB2 UDB V7.2 (Remote Workgroup Edition, Enterprise Edition,
Extended Enterprise Edition) option as well as the option:
– Create repository.
7. The next step is to configure the DB2 connection information. The following
information is needed for a local connection (see Figure B-12) though most of
the information is filled in by default.
– DB2 Server: hostname of machine: This is automatically filled in and
cannot be changed.
– DB2 Database Name: Name of the database for Analyzer which is
ANALYZ60. This is automatically filled in and cannot be changed.
– DB2 Administrator Name: Username with authority to connect to DB2.
– DB2 Administrator Password: Password used in conjunction with the
username (DB2 Administrator Name).
9. Now you will see the summary panel, which shows what is about to be
installed. Click Next to actually start the installation.
When the installation is completed, there is an option to display the log file;
the login system user information is also given, along with the URL to
launch Analyzer. The URL is as follows:
http://hostname/Analyzer6_Server/webapp/Analyzer6/index.html
In addition to the information stated under the heading "After Installation
Checklist" on pg.19 of the Analyzer Installation Guide, please note the following
steps, which are to be completed once Analyzer has been installed.
Prior to launching Analyzer, there are a couple more things that need to be done
if the edition of DB2 UDB V7.2 is not Personal Edition.
1. Once the Analyzer install is done, go back to the CD-ROM and open the
folder called Java_Plugin_1_3_0_02.
Analyzer API Toolkit Jump Start enables developers to extend the out-of-the-box
Java Web Client or to build their own custom Web applications, leveraging the
flexibility, ease of use, and analytic power of Hyperion Analyzer. The Jump Start
Guide introduces the concepts necessary to begin using the API Toolkit and
provides live sample applications.
Furthermore, the traces toggled on are shown in the lower part of the
Administration Console window.
We also provide a script, runsamples.cmd, to run the sample programs.
import com.essbase.api.base.*;
import com.essbase.api.session.*;
import com.essbase.api.datasource.*;
import com.essbase.api.dataquery.*;
import com.essbase.api.domain.*;
/**
 * CopyOlapAppAndCube copies an OLAP application/cube from one server to another.
 */
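// ... (the class declaration, sign-on, and copy logic are elided in this excerpt) ...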
} catch (EssException x) {
System.out.println("Error: " + x.getMessage());
} finally {
// Sign off from the domain.
try {
if (ess != null && ess.isSignedOn() == true)
ess.signOff();
} catch (EssException x) {
System.out.println("Error: " + x.getMessage());
}
}
}
set ESS_ES_HOME=c:\itso\essbase\ees
set JAVA_HOME=%ESS_ES_HOME%\jre
set THIRDPARTY=%ESS_ES_HOME%\external
set CLASSPATH=%ESS_ES_HOME%\lib\ess_japi.jar;
set WEBLOGIC_HOME=C:\program_files\weblogic
set USER=system
set PASSWORD=password
set DOMAIN=essbase
set EES_SERVER=localhost
set OLAP_SERVER=cayman
:tcpip
set ORB=tcpip
set PORT=5001
goto run
:http
set ORB=http
set PORT=7001
goto run
:corba
set ORB=corba
set PORT=0
set CLASSPATH=%CLASSPATH%;%ESS_ES_HOME%\lib\ess_es_server.jar;%THIRDPARTY%\visibroker\vbjorb.jar;
goto run
:ejb
set ORB=ejb
:run
echo Step-1: Ready to compile all the examples ...
pause
%JAVA_HOME%\bin\javac *.java -d .
echo Step-3: You can run the rest of the examples in a similar way.
goto end
:usage
echo Usage: runsamples [tcpip or corba or http or ejb]
echo where
echo tcpip - use tcpip to run the samples.
echo corba - use corba to run the samples. You need to download your own copy of Visibroker.
echo http - use http to run the samples. You need to have run EES in a web container before.
echo ejb - use ejb to run the samples. You need to have run EES in an EJB container before.
goto end
:end
pause
The two data modeling techniques that are relevant in a data warehousing
environment are ER (Entity Relationship) modeling and dimensional modeling.
ER modeling produces a data model of the specific area of interest, using two
basic concepts: entities and the relationships between those entities. Detailed
ER models also contain attributes, which can be properties of either the entities
or the relationships. The ER model is an abstraction tool because it can be used
to understand and simplify the ambiguous data relationships in the business
world and complex systems environments.
Figure D-1 Example of an ER model: Product and Component entities with their attributes (such as Product_id, Prod_Description, Retail_Price, Comp_id, Comp_description, and Cost) and the relationships between them
Star schema
This has become a common term used to connote a dimensional model.
Database designers have used the term star schema to describe dimensional
models because the resulting structure looks like a star and the logical diagram
looks like the physical schema. The star model, as shown in Figure D-2, is the
basic structure for a dimensional model. It typically has one large central table
(called the fact table) and a set of smaller tables (called the dimension tables)
arranged in a radial pattern around the fact table.
Figure D-2 Star schema example: a central Fact table (Product_id, Time_id, Customer_id, Branch_id, Quantity, Sales_Total) surrounded by the Product, Customer, Organization, and Time dimension tables
The snowflake model, as shown in Figure D-3, is the result of decomposing one
or more of the dimensions, which sometimes have hierarchies themselves. The
many-to-one relationships among members within a dimension table are broken
out into separate tables that are linked by keys referencing the dimensions.
Figure D-3 Snowflake model example: the dimensions are decomposed into additional tables (Family, Segment, Banking Location, Banking) arranged around the central Fact table and the Time dimension
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see “How to get IBM Redbooks”
on page 500.
DB2 OLAP Server: Theory and practices, SG24-6138-00
LDAP Implementation Cookbook, SG24-5110
Other resources
These publications are also relevant as further information sources:
IBM DB2 OLAP Server 8.1 Quick Path, SC18-7000
IBM DB2 OLAP Server 8.1 Database Administrator’s Guide, SC18-7001
IBM DB2 OLAP Server 8.1 Integration Server Data Preparation Guide,
SC18-7006
IBM DB2 OLAP Server 8.1 Integration Server Administration Guide,
SC27-1227
IBM DB2 OLAP Server 8.1 Spreadsheet Add-in User’s Guide for 1-2-3,
SC27-1231
IBM DB2 OLAP Server 8.1 Spreadsheet Add-in User’s Guide for Excel,
SC27-1232
IBM DB2 OLAP Server 8.1 Installation Guide, SC27-1228
IBM DB2 OLAP Server 8.1 MaxL User’s Guide, SC27-XXXX
IBM DB2 OLAP Server 8.1 SQL Interface Guide, SC18-7004
Hyperion Analyzer Release 6.1 Product Overview
Hyperion Analyzer Release 6.1 Administrator’s Guide
Hyperion Analyzer Release 6.1 Getting Started
Hyperion Analyzer Release 6.1 Installation Guide
Back cover

Enhancing OLAP cube scalability and discovering deviant values
Implementing high concurrency and high availability scenarios
Managing multiple OLAP servers

This IBM Redbook explores DB2 OLAP Server V8.1 enhancements and strengths
in the areas of scalability, performance, high concurrency, high availability, and
administration, and explains how to deploy them in a whole enterprise environment.

We provide administrators and implementers with practical guidelines, based on
a case study, on the new advanced functions in DB2 OLAP Server V8.1. We discuss
Hybrid Analysis and its ability to reconcile and integrate data from OLAP cubes and
data warehouses, as well as the ability of OLAP Miner to detect deviations and
discover anomalies and special segments from the wealth of data already analyzed
within OLAP cubes.

We also consider the capabilities of DB2 OLAP Server Enterprise Services, or the
High Concurrency Option, to provide availability and reliability, including cube
clustering, failover, and connection pooling. Performance enhancements and the
ability to run operations such as parallel load, calculation, and export in
multithreaded mode are covered, as well as DB2 OLAP Server Administration
Services and the ability to manage multiple OLAP Servers from a single point of
control.

If you are an administrator or OLAP designer concerned with DB2 OLAP Server,
this book will help you to evaluate the applicability of its new functions and show
you how to start implementing them.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support
Organization. Experts from IBM, Customers and Partners from around the world
create timely technical information based on realistic scenarios. Specific
recommendations are provided to help you implement IT solutions more
effectively in your environment.

For more information:
ibm.com/redbooks