20170317DB2Night190 - DB2 LUW V11 Certification Training - Part 1
www.EnterpriseDB2.com
Agenda
Why Certification?
Sample Questions
Why Certification?
Incredibly Valuable in
Expand your Knowledge
Create your Network
Consulting Services
C2090-615: DB2 10.5 Fundamentals for LUW
C2090-616: DB2 11.1 Fundamentals for LUW
C2090-600: DB2 11.1 DBA for LUW
C2090-600: IBM DB2 11.1 DBA for LUW
SHEAPTHRES_SHR = 1266237
SORTHEAP = 63311
DATABASE_MEMORY = AUTOMATIC (9729422)
DB2 BLU Implementation Techniques
1. Create a DB2 instance
2. Set the registry variables
DB2COMM=TCPIP
DB2_WORKLOAD=ANALYTICS
This aggregate registry variable implicitly sets several other registry variables, for example:
DB2_USE_ALTERNATE_PAGE_CLEANING=ON [DB2_WORKLOAD]
DB2_ANTIJOIN=EXTEND [DB2_WORKLOAD]
3. Stop and start DB2 instance
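The steps above can be sketched as a short command sequence (the instance name db2inst1, fenced user db2fenc1, and install path are placeholder assumptions):

```shell
# 1. Create the instance (run as root; install path is illustrative)
/opt/ibm/db2/V11.1/instance/db2icrt -u db2fenc1 db2inst1

# 2. Set the registry variables (run as the instance owner)
db2set DB2COMM=TCPIP
db2set DB2_WORKLOAD=ANALYTICS

# 3. Stop and start the instance so the settings take effect
db2stop
db2start

# Verify; variables set implicitly by the aggregate show the [DB2_WORKLOAD] suffix
db2set -all
```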
DB2 BLU Implementation Techniques … Cont.
4. Set INSTANCE_MEMORY
Set INSTANCE_MEMORY to <PAGES>, where <PAGES> is 90% of the server memory, if the
server is dedicated to a single columnar database instance and has 128GB RAM or more.
If the server hosts multiple DB2 instances, select an appropriate percentage based on the
workload. What is an appropriate number? Select a percentage that uses the available RAM
without causing system paging, and monitor system paging on the server.
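For example, on a dedicated 128GB server, 90% of memory expressed in 4KB pages works out roughly as follows (the arithmetic is illustrative):

```shell
# 128 GB = 33,554,432 4KB pages; 90% of that is approximately 30,198,988 pages
db2 "UPDATE DBM CFG USING INSTANCE_MEMORY 30198988"
db2stop
db2start
```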
DB2 BLU Implementation Techniques … Cont.
5. Set Utility Heap to a Large Number
6. Before you start building databases, consider these points. BLU and pureScale require
databases with automatic storage enabled. Both also require Unicode code sets and IDENTITY or
IDENTITY_16BIT collation. For column-organized tables, you must define table spaces with
automatic space reclaim enabled.
Set DATABASE_MEMORY in both the row database and the column database to COMPUTED.
Review the computed amount of each to make sure the sum of both settings is less than
INSTANCE_MEMORY * 80%.
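These settings can be sketched as follows (the database names rowdb and coldb are placeholders):

```shell
db2 "UPDATE DB CFG FOR rowdb USING DATABASE_MEMORY COMPUTED"
db2 "UPDATE DB CFG FOR coldb USING DATABASE_MEMORY COMPUTED"

# Check the computed values against INSTANCE_MEMORY * 0.8
db2 "GET DB CFG FOR rowdb SHOW DETAIL" | grep DATABASE_MEMORY
db2 "GET DB CFG FOR coldb SHOW DETAIL" | grep DATABASE_MEMORY
```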
DB2 BLU – Statistics Collection
For column-organized tables, both compression and statistics are managed automatically and
internally. It is not possible to turn them on or off. This is why when the compression dictionary
or statistics are being initially seeded, it is important to process the entire data set.
In some cases, the statistics associated with RUNSTATS are collected, and in other cases they
are not. Using table functions, it is possible to determine whether statistics collections are
executing or are waiting to run.
Real-time statistics collection (RTS) output is stored in the statistics cache and can be used by
database agents for subsequent statements. The cached statistics are later written to the
database catalog by a daemon process in servicing a WRITE_STATS request.
DB2 BLU – Statistics Collection… Cont.
A COLLECT_STATS request can be issued by a db2agent process during statement
compilation, and the status can be queried with the SQL statement below:
SELECT QUEUE_POSITION,
REQUEST_STATUS,
REQUEST_TYPE,
OBJECT_TYPE,
VARCHAR (OBJECT_SCHEMA, 10) AS SCHEMA,
VARCHAR (OBJECT_NAME, 10) AS NAME
FROM TABLE (MON_GET_RTS_RQST()) AS T
ORDER BY QUEUE_POSITION, SCHEMA, NAME;
There are three possible statuses for REQUEST_STATUS: EXECUTING, QUEUED, or PENDING.
At most, you can have one table with EXECUTING status. RTS checks for PENDING requests
every five minutes and places the requests on the run queue.
DB2 BLU – Space Management
DB2 has for a long time performed what is called logical deletion of rows. This is different
from pseudo deletion, because in logical deletion, the space occupied by the deleted row on the
data page can be overwritten with an inserted row, while pseudo deleted rows are not available
until additional cleanup operations are performed.
For column-organized tables, data is pseudo deleted; for row organized tables, it is logically
deleted. Thus, for column-organized tables, space reclaims are performed at the extent level,
and the extent space is returned to the table space for use with any defined table.
To build a mix of row- and column-configured databases, begin with the instance configured
for row. Build the row-based databases you need, set the DB2_WORKLOAD=ANALYTICS
registry setting, and then restart the instance and create the database configured for column.
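The sequence above, sketched as commands (database names are placeholders):

```shell
# Instance initially configured for row-organized (OLTP) work
db2 "CREATE DATABASE rowdb"        # build the row-based database(s) first

# Switch the instance to analytics defaults, then restart it
db2set DB2_WORKLOAD=ANALYTICS
db2stop
db2start

# Databases created now are configured for column-organized tables
db2 "CREATE DATABASE coldb"
```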
When running a DB2 instance with row- and column-organized databases, configure the
database manager and database configuration settings for column workloads. Row workloads
(OLTP) will run fine in a column-configured DB2 instance, but not vice versa.
DB2 BLU – Extended to MPP
• BLU DPF extends BLU Acceleration into a true MPP column store
• Data exchange during distributed joins and aggregation processing occurs entirely within the
BLU runtime in native columnar format
• Compressed Communications
DB2 BLU – Extended to MPP …Cont.
• Load and Go Simplicity
• Auto-detects and adapts to available memory, cores, and cache
• Optimizes FCM configuration automatically
• FCM_BUFFER_SIZE
• FCM_PARALLELISM
• Just like a regular row-organized MPP database, data is distributed across database partitions
according to a distribution key (which is used to determine the database partition in which a
particular row of data is stored)
• Each table has its own distribution key defined
• A distribution key can be a single column or group of columns
• The performance of queries that join tables will typically be increased if the join is collocated
DB2 BLU – Extended to MPP …Cont.
RANDOM DISTRIBUTE BY Clause
CREATE TABLE sample (c1 INTEGER NOT NULL, c2 VARCHAR(20) NOT NULL, c3 CHAR(10))
IN TBSP1
ORGANIZE BY COLUMN
DISTRIBUTE BY RANDOM   -- the default is DISTRIBUTE BY HASH; VARCHAR length 20 is illustrative
Basic Steps
Create Standby DB
Configure HADR on Primary and Standby
Start HADR
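A minimal sketch of these steps (host names, service ports, and the database name db1 are assumptions):

```shell
# 1. Create the standby by restoring a backup of the primary database
db2 "BACKUP DATABASE db1"                      # on the primary
db2 "RESTORE DATABASE db1"                     # on the standby host

# 2. Configure HADR on both servers (shown here for the primary)
db2 "UPDATE DB CFG FOR db1 USING HADR_LOCAL_HOST hostA HADR_LOCAL_SVC 56002 HADR_REMOTE_HOST hostB HADR_REMOTE_SVC 56002 HADR_SYNCMODE NEARSYNC"

# 3. Start HADR: standby first, then primary
db2 "START HADR ON DATABASE db1 AS STANDBY"    # on the standby
db2 "START HADR ON DATABASE db1 AS PRIMARY"    # on the primary
```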
HADR Restrictions
What are some HADR restrictions?
• Same base OS and same DB2 base software except for a short time during upgrade.
• No DPF
• Same Bit level ( 32 or 64 )
• Log files on raw devices (direct access) are not supported
• HADR does not support infinite logging ( LOGSECOND = -1 )
[Diagram: primary and standby pureScale clusters, each with CFs and members]
db2c_db2inst2 55000/tcp
db2c_hadr_db1 56002/tcp # Reserve communication and interrupt ports 56002/56003
db2inst2_monhadr 29000/tcp # db2inst2-HADR F5 listener
DB2_db2inst2 60002/tcp
DB2_db2inst2_1 60003/tcp
DB2_db2inst2_2 60004/tcp
DB2_db2inst2_3 60005/tcp
DB2_db2inst2_4 60006/tcp
DB2_db2inst2_END 60007/tcp
DB2CF_db2inst2 56003/tcp
DB2CF_db2inst2_MGMT 56004/tcp
DB2 pureScale HADR Implementation…Cont.
• Set DB2 HADR configuration parameters on standby
db2c_db2inst1 55000/tcp
db2c_hadr_db1 56002/tcp # Reserve communication and interrupt ports 56002/56003
db2inst1_monhadr 29000/tcp # db2inst1-HADR F5 listener
DB2_db2inst1 60002/tcp
DB2_db2inst1_1 60003/tcp
DB2_db2inst1_2 60004/tcp
DB2_db2inst1_3 60005/tcp
DB2_db2inst1_4 60006/tcp
DB2_db2inst1_END 60007/tcp
DB2CF_db2inst1 56003/tcp
DB2CF_db2inst1_MGMT 56004/tcp
DB2 pureScale HADR Implementation…Cont.
• In V10.5, HADR_SYNCMODE for pureScale can be ASYNC or SUPERASYNC; DB2 V11.1 adds support
for SYNC and NEARSYNC
The DB2 TAKEOVER command can be run from any member of the standby cluster, in FORCED (failover)
or NON-FORCED (role switch) mode. NON-FORCED mode requires an active primary database; make sure
the database is in PEER state before running the command in NON-FORCED mode. The FORCED
(failover) method is normally used when the primary database is down.
db2 "TAKEOVER HADR ON DB db1"
Please note that automated HADR failover using TSA is not supported in this configuration at this moment.
ADMIN_MOVE_TABLE ()
This procedure moves data stored in an existing table to a new table that has the same name
but may have been defined in a different table space.
This procedure also creates a staging table and set of triggers to capture data changes made on the
source table during the move operation. Data is then copied from the source to the target by using
either INSERT FROM CURSOR or LOAD FROM CURSOR.
Once the data has been copied, changes captured in the staging table are replayed against the target
table to bring it up to date. During this phase, the source table is briefly taken offline to rename it. By
default, the source table is then dropped; however, it can be kept and renamed by using the KEEP
option.
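The move can be invoked in a single call; a hedged example (the schema, table, and table space names are placeholders):

```sql
-- Move DB2INST1.SALES into new table spaces, keeping the renamed source table
CALL SYSPROC.ADMIN_MOVE_TABLE(
    'DB2INST1',          -- table schema
    'SALES',             -- source table
    'TS_DATA',           -- target data table space
    'TS_IDX',            -- target index table space
    'TS_LOB',            -- target LOB table space
    '', '', '', '',      -- MDC, distribution key, range partitioning,
                         --   and column definitions unchanged
    'KEEP',              -- keep the source table under a new name
    'MOVE')              -- run all phases in one operation
```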
ADMIN_MOVE_TABLE () Phases
INIT. This phase initializes all the objects required for the operation, including the staging
table that is necessary for capturing all the data changes during the move.
COPY. This phase creates a copy of the source table according to the current definition
and copies the data into the target table.
REPLAY. This phase replays all the changes captured in the staging table into the target
table just before swapping the source and target tables.
VERIFY. This is an optional phase that checks the table contents between source and target
to make sure they are identical before the swap.
SWAP. This phase performs a swapping of source and target tables. The source table will be
taken offline briefly to complete the REPLAY.
ADMIN_MOVE_TABLE () Phases… Cont.
CLEANUP. This phase drops all the intermediate tables created during the online move
such as the staging table, any non-unique indexes, and triggers.
REPORT. Calculates a set of values to monitor the progress of one or more table moves,
focusing on the COPY and REPLAY phases of a running table move.
Sample Questions: #1
The DBA of company ABC is managing a HADR multiple standby environment having one
primary, one principal standby, and one auxiliary standby. The DBA uses the MON_GET_
HADR table function to monitor the HADR status. What will be the HADR_STATE for the
auxiliary standby database?
The correct answer is B. The HADR auxiliary standby database will always be in the REMOTE_CATCHUP
state irrespective of the HADR_LOG_GAP between the primary and the auxiliary standby. In the following
example, standby member 10.112.0.1 is the auxiliary standby server.
Sample Questions: #2
Which isolation level is supported on the HADR active standby read-only database?
A. Read Stability
B. Repeatable Read
C. Uncommitted Read
D. Cursor Stability
E. Currently Committed (CUR_COMMIT)
Answer:
The correct answer is C. The only isolation level supported on a Read on Standby HADR
database is Uncommitted Read (UR). Any application request for another isolation level receives
error SQL1773N reason code 1. The DBA can enforce the UR isolation level on the standby by
setting the DB2 registry variable DB2_STANDBY_ISO=UR.
When you set the CUR_COMMIT database configuration parameter to ON, queries return the most
recently committed value of the data as of the time the query is submitted.
Sample Questions: #3
Company ABC has an HADR production environment and wants to isolate most of the
expensive read-only SQL operations on the standby by using the read on standby (ROS)
feature. How do you enable the ROS feature in this case?
You can enable the ROS on the HADR standby database by using the DB2 instance level registry variable
DB2_HADR_ROS. The steps involved are:
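A minimal sketch of these steps (assuming the standby database is named db1):

```shell
# On the standby instance: enable reads on standby
db2set DB2_HADR_ROS=ON

# Recycle the standby so the setting takes effect
db2 "DEACTIVATE DATABASE db1"
db2stop
db2start
db2 "ACTIVATE DATABASE db1"

# Optionally force UR isolation for all applications on the standby
db2set DB2_STANDBY_ISO=UR
```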
Sample Questions: #4
A. A registry variable that is explicitly set by an application can only be overwritten by an aggregated
registry setting
B. An aggregated registry variable that is explicitly set by an application cannot be overwritten
C. A registry variable that is implicitly configured through an aggregated registry variable can also be
explicitly configured
D. A registry variable that is implicitly configured through an aggregated registry variable takes
precedence over an explicitly configured value
Answer:
The correct answer is C. An aggregate registry variable is a group of several registry variables as a
configuration that is identified by one registry variable name. Each registry variable that is part of the
group has a predefined setting. The purpose of an aggregate registry variable is to ease registry
configuration for broad operational objectives.
You can explicitly set any registry variable that an aggregate registry variable configures
implicitly; the explicit setting overrides the aggregate registry variable's implicit value.
Option A is incorrect: an aggregated registry setting cannot override an explicitly set registry
variable.
Option B is also incorrect: an aggregated registry variable can easily be overwritten by explicitly
setting the value for a registry variable.
Option D is incorrect as well: the explicit registry setting takes precedence over the implicit
aggregated registry setting.
Sample Questions: #5
A. db2licm
B. db2ls
C. db2greg
D. db2 show install locations
Answer:
The correct answer is C. The command to display the location of the global registry file is db2greg –g.
Option A is incorrect because the db2licm command is used to work with licenses, and option B,
db2ls, lists the installed DB2 copies. Option D is invalid; there is no such command in DB2.
Sample Questions: #6
The correct answer is D. When we load data into BLU MPP, each member creates a local histogram which
is then sent to coordinator/build node to create a common global dictionary. The build node will
distribute the common dictionary to all the nodes to exploit a common compression encoding across the
data partitions.
Option A is incorrect, FCM parameters are automatically configured by DB2. Option B is incorrect due to
the fact that both row and column-organized tables data can form a collocation join. Option C.
contradicts option D.
Sample Questions: #7
Tables can be converted from row to column organization by using db2convert utility command. Identify
the characteristics that will not stop conversion.
A. Trigger
B. Foreign Key
C. MQT
D. XML/LOB
Answer:
The correct answer is B. Secondary indexes are dropped and not defined on a column table.
Option A is incorrect, if a trigger is defined, drop it and then convert the table. Option C and D are not
supported in column-organized tables.
Sample Questions: #8
A. REPLAY
B. COPY
C. VERIFY
D. TERM
Answer:
The correct answer is A. The REPLAY phase replays all the changes captured in the staging table
into the target table just before swapping the source and target tables. It is essential to run this
phase multiple times, applying the staged data at frequent intervals, to minimize the volume of
changed data at swap time.
Sample Questions: #9
Which statement is false regarding HADR auxiliary standby functionality in a multiple standby
environment?
The correct answer is D. The only supported synchronization mode for auxiliary standby is SUPERASYNC.
You can see the supported modes below:
Sample Questions: #10
Which data movement utility is suitable for moving and processing large amounts of real-time data into
the data warehouse without affecting availability?
The correct answer is C. The Ingest utility is a high-speed, client-side, highly configurable, multithreaded
DB2 utility that streams data from files and pipes into DB2 target tables by using SQL-like commands.
Because the Ingest utility can move large amounts of real-time data without locking the target table, you
do not need to choose between the data currency and availability.
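A hedged INGEST example (the file path, format, and table name are assumptions):

```sql
-- Stream delimited data from a file into a warehouse table;
-- other applications keep read/write access to SALES_FACT during the ingest
INGEST FROM FILE /data/sales.del
    FORMAT DELIMITED
    INSERT INTO sales_fact
```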
LOAD WITH NO ACCESS does not allow users to access the table until it finishes. SQL and Q
Replication can replicate one or more tables between the source and target systems to capture
data changes; however, they are not designed to move large amounts of data in one go because
they call INSERT/UPDATE/DELETE/MERGE internally, in sequence.
IMPORT WITH COMMITCOUNT AUTOMATIC allows other users to access the target table, but it takes
much longer to process the data load because it issues sequential INSERTs.
Sample Questions: #11
When a database is created with the DB2_WORKLOAD=ANALYTICS setting, what are the default DB CFG
values for the parameters listed below?