DB2 Universal Database for z/OS®
Version 8
Application Programming and SQL Guide
SC18-7415-00
Note
Before using this information and the product it supports, be sure to read the
general information under “Notices” on page 1037.
Contents v
Coding SQL statements in a COBOL application . . . . . . . . . . . . 170
Defining the SQL communication area . . . . . . . . . . . . . . . 170
Defining SQL descriptor areas . . . . . . . . . . . . . . . . . . 171
Embedding SQL statements . . . . . . . . . . . . . . . . . . 172
| Using host variables and host variable arrays . . . . . . . . . . . . 175
Declaring host variables . . . . . . . . . . . . . . . . . . . . 176
| Declaring host variable arrays . . . . . . . . . . . . . . . . . . 183
Using host structures . . . . . . . . . . . . . . . . . . . . . 189
Determining equivalent SQL and COBOL data types . . . . . . . . . 194
Determining compatibility of SQL and COBOL data types . . . . . . . . 198
Using indicator variables and indicator variable arrays . . . . . . . . . 199
Handling SQL error return codes . . . . . . . . . . . . . . . . . 201
Coding considerations for object-oriented extensions in COBOL . . . . . 202
Coding SQL statements in a Fortran application . . . . . . . . . . . . 203
Defining the SQL communication area . . . . . . . . . . . . . . . 203
Defining SQL descriptor areas . . . . . . . . . . . . . . . . . . 204
Embedding SQL statements . . . . . . . . . . . . . . . . . . 204
Using host variables . . . . . . . . . . . . . . . . . . . . . 206
Declaring host variables . . . . . . . . . . . . . . . . . . . . 207
Determining equivalent SQL and Fortran data types . . . . . . . . . . 208
Determining compatibility of SQL and Fortran data types . . . . . . . . 211
Using indicator variables . . . . . . . . . . . . . . . . . . . . 211
Handling SQL error return codes . . . . . . . . . . . . . . . . . 212
Coding SQL statements in a PL/I application . . . . . . . . . . . . . 213
Defining the SQL communication area . . . . . . . . . . . . . . . 213
Defining SQL descriptor areas . . . . . . . . . . . . . . . . . . 214
Embedding SQL statements . . . . . . . . . . . . . . . . . . 214
| Using host variables and host variable arrays . . . . . . . . . . . . 217
Declaring host variables . . . . . . . . . . . . . . . . . . . . 217
| Declaring host variable arrays . . . . . . . . . . . . . . . . . . 220
Using host structures . . . . . . . . . . . . . . . . . . . . . 223
Determining equivalent SQL and PL/I data types . . . . . . . . . . . 224
Determining compatibility of SQL and PL/I data types . . . . . . . . . 228
Using indicator variables and indicator variable arrays . . . . . . . . . 229
Handling SQL error return codes . . . . . . . . . . . . . . . . . 230
Coding considerations for PL/I . . . . . . . . . . . . . . . . . . 232
Coding SQL statements in a REXX application . . . . . . . . . . . . . 232
Defining the SQL communication area . . . . . . . . . . . . . . . 232
Defining SQL descriptor areas . . . . . . . . . . . . . . . . . . 233
Accessing the DB2 REXX Language Support application programming
interfaces . . . . . . . . . . . . . . . . . . . . . . . . 233
Embedding SQL statements in a REXX procedure . . . . . . . . . . 235
Using cursors and statement names . . . . . . . . . . . . . . . 237
Using REXX host variables and data types . . . . . . . . . . . . . 237
Using indicator variables . . . . . . . . . . . . . . . . . . . . 241
Setting the isolation level of SQL statements in a REXX procedure . . . . 241
Defining a user-defined function . . . . . . . . . . . . . . . . . . 296
Components of a user-defined function definition . . . . . . . . . . . 296
Examples of user-defined function definitions . . . . . . . . . . . . 298
Implementing an external user-defined function . . . . . . . . . . . . 300
Writing a user-defined function . . . . . . . . . . . . . . . . . 300
Preparing a user-defined function for execution . . . . . . . . . . . 333
Testing a user-defined function . . . . . . . . . . . . . . . . . 335
Implementing an SQL scalar function . . . . . . . . . . . . . . . . 338
Invoking a user-defined function . . . . . . . . . . . . . . . . . . 338
Syntax for user-defined function invocation . . . . . . . . . . . . . 338
Ensuring that DB2 executes the intended user-defined function . . . . . 339
Casting of user-defined function arguments . . . . . . . . . . . . . 345
What happens when a user-defined function abnormally terminates . . . . 346
Nesting SQL Statements . . . . . . . . . . . . . . . . . . . . 346
Recommendations for user-defined function invocation . . . . . . . . . 347
Step 2: Compile (or assemble) and link-edit the application . . . . . . . 471
Step 3: Bind the application . . . . . . . . . . . . . . . . . . . 472
Step 4: Run the application . . . . . . . . . . . . . . . . . . . 485
Using JCL procedures to prepare applications . . . . . . . . . . . . . 489
Available JCL procedures . . . . . . . . . . . . . . . . . . . 489
Including code from SYSLIB data sets . . . . . . . . . . . . . . . 490
Starting the precompiler dynamically . . . . . . . . . . . . . . . 491
An alternative method for preparing a CICS program . . . . . . . . . 493
Using JCL to prepare a program with object-oriented extensions . . . . . 495
Using ISPF and DB2 Interactive (DB2I) . . . . . . . . . . . . . . . 495
DB2I help . . . . . . . . . . . . . . . . . . . . . . . . . 495
DB2I Primary Option Menu . . . . . . . . . . . . . . . . . . . 495
Declaring and using variables in an SQL procedure . . . . . . . . . . 601
Parameter style for an SQL procedure . . . . . . . . . . . . . . . 602
Terminating statements in an SQL procedure . . . . . . . . . . . . 602
Handling SQL conditions in an SQL procedure . . . . . . . . . . . . 603
Examples of SQL procedures . . . . . . . . . . . . . . . . . . 607
Preparing an SQL procedure . . . . . . . . . . . . . . . . . . 609
Writing and preparing an application to use stored procedures . . . . . . . 621
Forms of the CALL statement . . . . . . . . . . . . . . . . . . 621
Authorization for executing stored procedures . . . . . . . . . . . . 623
Linkage conventions . . . . . . . . . . . . . . . . . . . . . 623
Using indicator variables to speed processing . . . . . . . . . . . . 643
Declaring data types for passed parameters . . . . . . . . . . . . . 643
Writing a DB2 UDB for z/OS client program or SQL procedure to receive
result sets . . . . . . . . . . . . . . . . . . . . . . . . 648
Accessing transition tables in a stored procedure . . . . . . . . . . . 654
Calling a stored procedure from a REXX Procedure . . . . . . . . . . 654
Preparing a client program . . . . . . . . . . . . . . . . . . . 658
Running a stored procedure . . . . . . . . . . . . . . . . . . . 659
How DB2 determines which version of a stored procedure to run . . . . . 660
Using a single application program to call different versions of a stored
procedure . . . . . . . . . . . . . . . . . . . . . . . . 660
Running multiple stored procedures concurrently . . . . . . . . . . . 661
| Running multiple instances of a stored procedure concurrently . . . . . . 662
Accessing non-DB2 resources . . . . . . . . . . . . . . . . . . 663
Testing a stored procedure . . . . . . . . . . . . . . . . . . . . 664
Debugging the stored procedure as a stand-alone program on a workstation 664
Debugging with the Debug Tool and IBM VisualAge COBOL . . . . . . . 665
Debugging an SQL procedure or C language stored procedure with the
Debug Tool and C/C++ Productivity Tools for z/OS . . . . . . . . . 665
Debugging with Debug Tool for z/OS interactively and in batch mode . . . 666
Using the MSGFILE run-time option . . . . . . . . . . . . . . . . 668
Using driver applications . . . . . . . . . . . . . . . . . . . . 668
Using SQL INSERT statements . . . . . . . . . . . . . . . . . 669
| Dynamic prefetch (PREFETCH=D) . . . . . . . . . . . . . . . . 768
List prefetch (PREFETCH=L) . . . . . . . . . . . . . . . . . . 768
Sequential detection at execution time . . . . . . . . . . . . . . . 769
Determining sort activity . . . . . . . . . . . . . . . . . . . . . 771
Sorts of data . . . . . . . . . . . . . . . . . . . . . . . . 771
Sorts of RIDs . . . . . . . . . . . . . . . . . . . . . . . . 772
The effect of sorts on OPEN CURSOR . . . . . . . . . . . . . . 772
Processing for views and nested table expressions . . . . . . . . . . . 773
Merge . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
Materialization . . . . . . . . . . . . . . . . . . . . . . . . 774
Using EXPLAIN to determine when materialization occurs . . . . . . . 776
Using EXPLAIN to determine UNION activity and query rewrite . . . . . 777
Performance of merge versus materialization . . . . . . . . . . . . 778
Estimating a statement’s cost . . . . . . . . . . . . . . . . . . . 779
Creating a statement table . . . . . . . . . . . . . . . . . . . 780
Populating and maintaining a statement table . . . . . . . . . . . . 782
Retrieving rows from a statement table . . . . . . . . . . . . . . 782
Understanding the implications of cost categories . . . . . . . . . . . 782
Chapter 30. Programming for the call attachment facility (CAF) . . . . . 799
Call attachment facility capabilities and restrictions . . . . . . . . . . . 799
Capabilities when using CAF . . . . . . . . . . . . . . . . . . 799
CAF requirements . . . . . . . . . . . . . . . . . . . . . . 800
How to use CAF . . . . . . . . . . . . . . . . . . . . . . . . 802
Summary of connection functions . . . . . . . . . . . . . . . . 804
Accessing the CAF language interface . . . . . . . . . . . . . . . 805
General properties of CAF connections . . . . . . . . . . . . . . 806
CAF function descriptions . . . . . . . . . . . . . . . . . . . 807
CONNECT: Syntax and usage . . . . . . . . . . . . . . . . . . 809
OPEN: Syntax and usage . . . . . . . . . . . . . . . . . . . 813
CLOSE: Syntax and usage . . . . . . . . . . . . . . . . . . . 815
DISCONNECT: Syntax and usage . . . . . . . . . . . . . . . . 816
TRANSLATE: Syntax and usage . . . . . . . . . . . . . . . . . 818
Summary of CAF behavior . . . . . . . . . . . . . . . . . . . 819
Sample scenarios . . . . . . . . . . . . . . . . . . . . . . . 820
A single task with implicit connections . . . . . . . . . . . . . . . 820
A single task with explicit connections . . . . . . . . . . . . . . . 821
Several tasks . . . . . . . . . . . . . . . . . . . . . . . . 821
Exit routines from your application . . . . . . . . . . . . . . . . . 821
| Application-to-application connectivity . . . . . . . . . . . . . . . 882
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . 1037
Programming interface information . . . . . . . . . . . . . . . . . 1038
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . 1039
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . 1041
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . 1075
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-1
Important
In this version of DB2 UDB for z/OS, the DB2 Utilities Suite is available as an
optional product. You must separately order and purchase a license to such
utilities, and discussion of those utility functions in this publication is not
intended to otherwise imply that you have a license to them. See Part 1 of
DB2 Utility Guide and Reference for packaging details.
Visit the following Web site for information about ordering DB2 books and obtaining
other valuable information about DB2 UDB for z/OS:
www.ibm.com/software/data/db2/zos/library.html
When referring to a DB2 product other than DB2 UDB for z/OS, this information
uses the product’s full name to avoid ambiguity.
[Syntax diagram: required items appear on the horizontal line (the main path);
optional items appear below the main path.]
If an optional item appears above the main path, that item has no effect on the
execution of the statement and is used only for readability.
[Syntax diagram: optional_item above the main path of required_item.]
• If you can choose from two or more items, they appear vertically, in a stack.
If you must choose one of the items, one item of the stack appears on the main
path.
[Syntax diagram: required_item followed by a stack containing required_choice1
and required_choice2, with one choice on the main path.]
If choosing one of the items is optional, the entire stack appears below the main
path.
[Syntax diagram: required_item with a stack of optional_choice1 and
optional_choice2 below the main path.]
If one of the items is the default, it appears above the main path and the
remaining choices are shown below.
[Syntax diagram: default_choice above the main path of required_item, with the
optional_choice items below the main path.]
• An arrow returning to the left, above the main line, indicates an item that can be
repeated.
If the repeat arrow contains a comma, you must separate repeated items with a
comma.
[Syntax diagram: required_item followed by repeatable_item under a repeat arrow.]
A repeat arrow above a stack indicates that you can repeat the items in the
stack.
• Keywords appear in uppercase (for example, FROM). They must be spelled exactly
as shown. Variables appear in all lowercase letters (for example, column-name).
They represent user-supplied names or values.
• If punctuation marks, parentheses, arithmetic operators, or other such symbols
are shown, you must enter them as part of the syntax.
Accessibility
Accessibility features help a user who has a physical disability, such as restricted
mobility or limited vision, to use software products. The major accessibility features
in z/OS products, including DB2 UDB for z/OS, enable users to:
• Use assistive technologies such as screen reader and screen magnifier software
• Operate specific or equivalent features by using only a keyboard
• Customize display attributes such as color, contrast, and font size
Assistive technology products, such as screen readers, function with the DB2 UDB
for z/OS user interfaces. Consult the documentation for the assistive technology
products for specific information when you use assistive technology to access these
interfaces.
Online documentation for Version 8 of DB2 UDB for z/OS is available in the DB2
Information Center, which is an accessible format when used with assistive
technologies such as screen reader or screen magnifier software. The DB2
Information Center for z/OS solutions is available at the following Web site:
http://publib.boulder.ibm.com/infocenter/db2zhelp.
www.ibm.com/software/db2zos/library.html
This Web site has a feedback page that you can use to send comments.
For more advanced topics on using SELECT statements, see Chapter 4, “Using
subqueries,” on page 49, and Chapter 20, “Planning to access distributed data,” on
page 423.
Examples of SQL statements illustrate the concepts that this chapter discusses.
Consider developing SQL statements similar to these examples and then running
them dynamically using SPUFI or DB2 Query Management Facility (DB2 QMF).
Result tables
The data retrieved through SQL is always in the form of a table, which is called a
result table. Like the tables from which you retrieve the data, a result table has rows
and columns. A program fetches this data one row at a time.
Example: SELECT statement: The following SELECT statement retrieves the last
name, first name, and phone number of employees in department D11 from the
sample employee table:
SELECT LASTNAME, FIRSTNME, PHONENO
FROM DSN8810.EMP
WHERE WORKDEPT = ’D11’
ORDER BY LASTNAME;
Data types
When you create a DB2 table, you define each column to have a specific data type.
The data type can be a built-in data type or a distinct type. This section discusses
built-in data types. For information about distinct types, see Chapter 16, “Creating
and using distinct types,” on page 349. The data type of a column determines what
you can and cannot do with the column. When you perform operations on columns,
the data must be compatible with the data type of the referenced column. For
example, you cannot insert character data, like a last name, into a column whose
data type is numeric. Similarly, you cannot compare columns containing
incompatible data types.
To better understand the concepts that are presented in this chapter, you must
understand the data types of the columns to which an example refers. As shown in
Figure 1, built-in data types have four general categories: datetime, string, numeric,
and row identifier (ROWID).
For more detailed information about each data type, see Chapter 2 of DB2 SQL
Reference.
Table 1 on page 5 shows whether operands of any two data types are compatible,
Y (Yes), or incompatible, N (No). A number in the table, appearing either as a
superscript of Y or N or as a value in a column, indicates a note at the bottom
of the table.
Example: SELECT *: The following SQL statement selects all columns from the
department table:
SELECT *
FROM DSN8810.DEPT;
Because the example does not specify a WHERE clause, the statement retrieves
data from all rows.
The dashes for MGRNO and LOCATION in the result table indicate null values.
SELECT * is recommended mostly for use with dynamic SQL and view definitions.
You can use SELECT * in static SQL, but this is not recommended; if you add a
column to the table to which SELECT * refers, the program might reference
columns for which you have not defined receiving host variables. For more
information about host variables, see “Accessing data using host variables, variable
arrays, and structures” on page 71.
If you list the column names in a static SELECT statement instead of using an
asterisk, you can avoid the problem created by using SELECT *. You can also see
the relationship between the receiving host variables and the columns in the result
table.
Example: SELECT column-name: The following SQL statement selects only the
MGRNO and DEPTNO columns from the department table:
SELECT MGRNO, DEPTNO
FROM DSN8810.DEPT;
To order the rows in a result table by the values in a derived column, specify a
name for the column by using the AS clause, and specify that name in the ORDER
BY clause. For information about using the ORDER BY clause, see “Putting the
rows in order: ORDER BY” on page 9.
Example: CREATE VIEW with AS clause: You can specify result column names in
the select-clause of a CREATE VIEW statement. You do not need to supply the
column list for the view, because the AS clause names the derived columns.
| For more information about using the CREATE VIEW statement, see “Defining a
| view: CREATE VIEW” on page 25.
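For illustration, a view of this kind might be defined as follows (the view name
EMP_COMP and the derived-column name are assumptions, not names from the
original example):

```sql
CREATE VIEW EMP_COMP AS
  SELECT EMPNO, SALARY + COMM AS TOTAL_COMP
  FROM DSN8810.EMP;
```

Because the AS clause names the derived column, no column list follows the view
name.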
Example: UNION ALL with AS clause: You can use the AS clause to give the
same name to corresponding columns of tables in a union. The third result column
from the union of the two tables has the name TOTAL_VALUE, even though it
contains data derived from columns with different names:
SELECT ’On hand’ AS STATUS, PARTNO, QOH * COST AS TOTAL_VALUE
FROM PART_ON_HAND
UNION ALL
SELECT ’Ordered’ AS STATUS, PARTNO, QORDER * COST AS TOTAL_VALUE
FROM ORDER_PART
ORDER BY PARTNO, TOTAL_VALUE;
The column STATUS and the derived column TOTAL_VALUE have the same name
in the first and second result tables, and are combined in the union of the two result
tables, which is similar to the following partial output:
STATUS PARTNO TOTAL_VALUE
======= ====== ===========
On hand 00557 345.60
Ordered 00557 150.50
...
For information about unions, see “Merging lists of values: UNION” on page 13.
Example: GROUP BY derived column: You can use the AS clause in a FROM
clause to assign a name to a derived column that you want to refer to in a GROUP
BY clause. This SQL statement names HIREYEAR in the nested table expression,
which lets you use the name of that result column in the GROUP BY clause:
SELECT HIREYEAR, AVG(SALARY)
FROM (SELECT YEAR(HIREDATE) AS HIREYEAR, SALARY
FROM DSN8810.EMP) AS NEWEMP
GROUP BY HIREYEAR;
You cannot use GROUP BY with a name that is defined with an AS clause for the
derived column YEAR(HIREDATE) in the outer SELECT, because that name does
not exist when the GROUP BY runs. However, you can use GROUP BY with a
name that is defined with an AS clause in the nested table expression, because the
| nested table expression runs before the GROUP BY that references the name. For
| more information about using the GROUP BY clause, see “Summarizing group
| values: GROUP BY” on page 11.
DB2 evaluates a predicate for each row as true, false, or unknown. Results are
unknown only if an operand is null.
Table 2 lists each type of comparison that you can use in a predicate in a WHERE
clause, the comparison operator for that type, and an example of its use.
Table 2. Comparison operators used in conditions
Type of comparison               Comparison operator  Example
Equal to                         =                    DEPTNO = ’X01’
Not equal to                     <>                   DEPTNO <> ’X01’
Less than                        <                    AVG(SALARY) < 30000
Less than or equal to            <=                   AGE <= 25
Not less than                    >=                   AGE >= 21
Greater than                     >                    SALARY > 2000
Greater than or equal to         >=                   SALARY >= 5000
Not greater than                 <=                   SALARY <= 5000
Equal to null                    IS NULL              PHONENO IS NULL
| Not equal to, or one value     IS DISTINCT FROM     PHONENO IS DISTINCT FROM :PHONEHV
|   is equal to null
Similar to another value         LIKE                 NAME LIKE ’%SMITH%’ or STATUS LIKE ’N_’
At least one of two conditions   OR                   HIREDATE < ’1965-01-01’ OR SALARY < 16000
Both of two conditions           AND                  HIREDATE < ’1965-01-01’ AND SALARY < 16000
Between two values               BETWEEN              SALARY BETWEEN 20000 AND 40000
Equals a value in a set          IN (X, Y, Z)         DEPTNO IN (’B01’, ’C01’, ’D01’)
Note: SALARY BETWEEN 20000 AND 40000 is equivalent to SALARY >= 20000 AND
SALARY <= 40000. For more information about predicates, see Chapter 2 of DB2 SQL
Reference.
You can also search for rows that do not satisfy one of the preceding conditions by
using the NOT keyword before the specified condition.
| You can search for rows that do not satisfy the IS DISTINCT FROM predicate by
| using either of the following predicates:
| • value IS NOT DISTINCT FROM value
| • NOT(value IS DISTINCT FROM value)
| Both of these forms of the predicate evaluate to true when one value is equal to
| the other value or when both values are null.
You can list the rows in ascending or descending order. Null values appear last in
an ascending sort and first in a descending sort.
DB2 sorts strings in the collating sequence associated with the encoding scheme of
the table. DB2 sorts numbers algebraically and sorts datetime values
chronologically.
Example: ORDER BY clause with a column name as the sort key: Retrieve the
employee numbers, last names, and hire dates of employees in department A00 in
ascending order of hire dates:
SELECT EMPNO, LASTNAME, HIREDATE
FROM DSN8810.EMP
WHERE WORKDEPT = ’A00’
ORDER BY HIREDATE ASC;
Example: ORDER BY clause with an expression as the sort key: The following
subselect retrieves the employee numbers, salaries, commissions, and total
compensation (salary plus commission) for employees with a total compensation
greater than 40000. Order the results by total compensation:
SELECT EMPNO, SALARY, COMM, SALARY+COMM AS "TOTAL COMP"
FROM DSN8810.EMP
WHERE SALARY+COMM > 40000
ORDER BY SALARY+COMM;
Example: ORDER BY clause using a derived column name: The following SQL
statement orders the selected information by total salary:
SELECT EMPNO, (SALARY + BONUS + COMM) AS TOTAL_SAL
FROM DSN8810.EMP
ORDER BY TOTAL_SAL;
Except for the columns that are named in the GROUP BY clause, the SELECT
statement must specify any other selected columns as an operand of one of the
aggregate functions.
Example: GROUP BY clause using one column: The following SQL statement
lists, for each department, the lowest and highest education level within that
department:
SELECT WORKDEPT, MIN(EDLEVEL), MAX(EDLEVEL)
FROM DSN8810.EMP
GROUP BY WORKDEPT;
If a column that you specify in the GROUP BY clause contains null values, DB2
considers those null values to be equal. Thus, all nulls form a single group.
When it is used, the GROUP BY clause follows the FROM clause and any WHERE
clause, and precedes the ORDER BY clause.
You can group the rows by the values of more than one column.
Example: GROUP BY clause using more than one column: The following
statement finds the average salary for men and women in departments A00 and
C01:
SELECT WORKDEPT, SEX, AVG(SALARY) AS AVG_SALARY
FROM DSN8810.EMP
WHERE WORKDEPT IN (’A00’, ’C01’)
GROUP BY WORKDEPT, SEX;
DB2 groups the rows first by department number and then (within each department)
by sex before it derives the average SALARY value for each group.
Compare the preceding example with the second example shown in “Summarizing
group values: GROUP BY” on page 11. The clause, HAVING COUNT(*) > 1, ensures
that only departments with more than one member are displayed. In this case,
departments B01 and E01 do not display because the HAVING clause tests a
property of the group.
Example: HAVING clause used with a GROUP BY clause: Use the HAVING
clause to retrieve the average salary and minimum education level of women in
each department for which all female employees have an education level greater
than or equal to 16. Assuming you only want results from departments A00 and
D11, the following SQL statement tests the group property, MIN(EDLEVEL):
SELECT WORKDEPT, AVG(SALARY) AS AVG_SALARY,
MIN(EDLEVEL) AS MIN_EDLEVEL
FROM DSN8810.EMP
WHERE SEX = ’F’ AND WORKDEPT IN (’A00’, ’D11’)
GROUP BY WORKDEPT
HAVING MIN(EDLEVEL) >= 16;
When you specify both GROUP BY and HAVING, the HAVING clause must follow
the GROUP BY clause. A function in a HAVING clause can include DISTINCT if you
have not used DISTINCT anywhere else in the same SELECT statement. You can
also connect multiple predicates in a HAVING clause with AND and OR, and you
can use NOT for any predicate of a search condition.
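As an illustrative sketch (not an example from the original text), a HAVING
clause can combine a DISTINCT aggregate with other predicates:

```sql
SELECT WORKDEPT, COUNT(DISTINCT JOB) AS JOB_COUNT
  FROM DSN8810.EMP
  GROUP BY WORKDEPT
  HAVING COUNT(DISTINCT JOB) > 2 AND NOT WORKDEPT = 'E21';
```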
When you use the UNION statement, the SQLNAME field of the SQLDA contains
the column names of the first operand.
Example: UNION clause: You can obtain a combined list of employee numbers
that includes both of the following:
• People in department D11
• People whose assignments include projects MA2112, MA2113, and AD3111.
The following SQL statement gives a combined result table containing employee
numbers in ascending order with no duplicates listed:
SELECT EMPNO
FROM DSN8810.EMP
WHERE WORKDEPT = ’D11’
UNION
SELECT EMPNO
FROM DSN8810.EMPPROJACT
WHERE PROJNO = ’MA2112’ OR
PROJNO = ’MA2113’ OR
PROJNO = ’AD3111’
ORDER BY EMPNO;
If you have an ORDER BY clause, it must appear after the last SELECT statement
that is part of the union. In this example, the first column of the final result table
determines the final order of the rows.
Example: UNION ALL clause: The following SQL statement gives a combined
result table containing employee numbers in ascending order, and includes
duplicate numbers:
SELECT EMPNO
FROM DSN8810.EMP
WHERE WORKDEPT = ’D11’
UNION ALL
SELECT EMPNO
FROM DSN8810.EMPPROJACT
WHERE PROJNO = ’MA2112’ OR
PROJNO = ’MA2113’ OR
PROJNO = ’AD3111’
ORDER BY EMPNO;
| Each common table expression must have a unique name and be defined only
| once. However, you can reference a common table expression many times in the
| same SQL statement. Unlike regular views or nested table expressions, which
| derive their result tables for each reference, all references to common table
| expressions in a given statement share the same result table.
| You can use a common table expression in a SELECT statement by using the
| WITH clause at the beginning of the statement.
| Example: WITH clause in a SELECT statement: The following statement finds the
| department with the highest total pay. The query involves two levels of aggregation.
| First, you need to determine the total pay for each department by using the SUM
| function and grouping the results by using the GROUP BY clause. You then need
| to find the department with the maximum total pay, based on the total pay for
| each department.
| WITH DTOTAL (deptno, totalpay) AS
| (SELECT deptno, sum(salary+bonus)
| FROM DSN8810.EMP
| GROUP BY deptno)
| SELECT deptno
| FROM DTOTAL
| WHERE totalpay = (SELECT max(totalpay)
| FROM DTOTAL);
| The result table for the common table expression, DTOTAL, contains the
| department number and total pay for each department in the employee table. The
| fullselect in the previous example uses the result table for DTOTAL to find the
| department with the highest total pay. The result table for the entire statement looks
| similar to the following results:
| DEPTNO
| ======
| D11
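A view definition along the following lines would produce the RICH_DEPT result
that the next paragraph describes (this fullselect is a sketch that reuses the
DTOTAL common table expression from the earlier example; it is not the original
statement):

```sql
CREATE VIEW RICH_DEPT (DEPTNO) AS
  WITH DTOTAL (DEPTNO, TOTALPAY) AS
    (SELECT DEPTNO, SUM(SALARY+BONUS)
     FROM DSN8810.EMP
     GROUP BY DEPTNO)
  SELECT DEPTNO
  FROM DTOTAL
  WHERE TOTALPAY > (SELECT AVG(TOTALPAY)
                    FROM DTOTAL);
```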
| The fullselect in the previous example uses the result table for DTOTAL to find the
| departments that have a greater than average total pay. The result table is saved as
| the RICH_DEPT view and looks similar to the following results:
| DEPTNO
| ======
| A00
| D11
| D21
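An INSERT statement along the following lines would match the VITALDEPT
description below (a sketch; the vital_mgr table, the JOB value, and the column
names are assumptions):

```sql
INSERT INTO VITAL_MGR (MGRNO)
  WITH VITALDEPT (DEPTNO, SE_COUNT) AS
    (SELECT WORKDEPT, COUNT(*)
     FROM DSN8810.EMP
     WHERE JOB = 'SENIOR ENGINEER'
     GROUP BY WORKDEPT)
  SELECT D.MGRNO
  FROM DSN8810.DEPT D, VITALDEPT S
  WHERE D.DEPTNO = S.DEPTNO
    AND S.SE_COUNT > (SELECT AVG(SE_COUNT)
                      FROM VITALDEPT);
```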
| The fullselect in the previous example uses the result table for VITALDEPT to find
| the manager’s number for departments that have a greater than average number of
| senior engineers. The manager’s number is then inserted into the vital_mgr table.
| See Appendix E, “Recursive common table expression examples,” on page 997 for
| examples of bill of materials applications that use recursive common table
| expressions.
Avoiding decimal arithmetic errors: For static SQL statements, the simplest way
to avoid a division error is to override DEC31 rules by specifying the precompiler
option DEC(15). In some cases you can avoid a division error by specifying D31.s.
This specification reduces the probability of errors for statements that are
embedded in the program. s is a number between one and nine and represents the
minimum scale to be used for division operations.
If the dynamic SQL statements have bind, define, or invoke behavior and the value
| of the installation option for USE FOR DYNAMICRULES on panel DSNTIP4 is NO,
you can use the precompiler option DEC(15), DEC15, or D15.s to override DEC31
rules.
For a dynamic statement, or for a single static statement, use the scalar function
DECIMAL to specify values of the precision and scale for a result that causes no
errors.
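For example (a sketch; the precision 15 and scale 2 are arbitrary choices):

```sql
SELECT EMPNO, DECIMAL(SALARY / 12, 15, 2) AS MONTHLY_SAL
  FROM DSN8810.EMP;
```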
Before you execute a dynamic statement, set the value of special register
CURRENT PRECISION to DEC15 or D15.s.
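For example (a sketch):

```sql
SET CURRENT PRECISION = 'D15.2';
```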
Even if you use DEC31 rules, multiplication operations can sometimes cause
overflow because the precision of the product is greater than 31. To avoid overflow
from multiplication of large numbers, use the MULTIPLY_ALT built-in function
instead of the multiplication operator.
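For example, the following sketch uses the built-in function in place of
SALARY * BONUS (an illustration, not an example from the original text):

```sql
SELECT EMPNO, MULTIPLY_ALT(SALARY, BONUS) AS SAL_BONUS_PRODUCT
  FROM DSN8810.EMP;
```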
The contents of the DB2 system catalog tables can be a useful reference tool when
you begin to develop an SQL statement or an application program.
If your DB2 subsystem uses an exit routine for access control authorization, you
cannot rely on catalog queries to tell you the tables you can access. When such an
exit routine is installed, both RACF and DB2 control table access.
If you display column information about a table that includes LOB or ROWID
columns, the LENGTH field for those columns contains the number of bytes that
those columns occupy in the base table, rather than the length of the LOB or
ROWID data.
To determine the maximum length of data for a LOB or ROWID column, include the
LENGTH2 column in your query, as in the following example:
SELECT NAME, COLTYPE, LENGTH, LENGTH2
FROM SYSIBM.SYSCOLUMNS
WHERE TBNAME = ’EMP_PHOTO_RESUME’
AND TBCREATOR = ’DSN8810’;
You must separate each column description from the next with a comma, and
enclose the entire list of column descriptions in parentheses.
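For example, a minimal sketch (the table and columns are illustrative, not from
the original text):

```sql
CREATE TABLE YCONTACTS
  (CONTACTNO CHAR(6) NOT NULL,
   LASTNAME VARCHAR(15) NOT NULL,
   PHONENO CHAR(4));
```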
Identifying defaults
If you want to constrain the input or identify the default of a column, you can use
the following values:
• NOT NULL, when the column cannot contain null values.
Each example shown in this chapter assumes that you logged on using your own
authorization ID. The authorization ID qualifies the name of each object you create.
For example, if your authorization ID is SMITH, and you create table YDEPT, the
name of the table is SMITH.YDEPT. If you want to access table DSN8810.DEPT,
you must refer to it by its complete name. If you want to access your own table
YDEPT, you need only to refer to it as YDEPT.
If you want DEPTNO to be a primary key, as in the sample table, explicitly define
the key. Use an ALTER TABLE statement, as in the following example:
ALTER TABLE YDEPT
PRIMARY KEY(DEPTNO);
You can use an INSERT statement to copy the rows of the result table of a
fullselect from one table to another. The following statement copies all of the rows
from DSN8810.DEPT to your own YDEPT work table.
INSERT INTO YDEPT
SELECT *
FROM DSN8810.DEPT;
This statement also creates a referential constraint between the foreign key in
YEMP (WORKDEPT) and the primary key in YDEPT (DEPTNO). It also restricts all
phone numbers to unique numbers.
If you want to change a table definition after you create it, use the statement ALTER
TABLE. If you want to change a table name after you create it, use the statement
RENAME TABLE.
| You can change a table definition by using the ALTER TABLE statement only in
| certain ways. For example, you can add and drop constraints on columns in a table.
| You can also change the data type of a column within character data types, within
| numeric data types, and within graphic data types. You can add a column to a
| table. However, you cannot drop a column from a table.
| For more information about changing a table definition by using ALTER TABLE, see
| Part 2 (Volume 1) of DB2 Administration Guide. For other details about the ALTER
| TABLE and RENAME TABLE statements, see Chapter 5 of DB2 SQL Reference.
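The kinds of changes that ALTER TABLE allows can be sketched with an ADD COLUMN example. This illustration uses Python's sqlite3 module rather than DB2 (an assumption made for portability); SQLite similarly lets you add a column to an existing table, and existing rows take a null or default value in the new column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YDEPT (DEPTNO CHAR(3) NOT NULL, DEPTNAME VARCHAR(36))")
conn.execute("INSERT INTO YDEPT VALUES ('E31', 'DOCUMENTATION')")

# Adding a column to an existing table is allowed; the row that was
# already present gets a null value in the new column.
conn.execute("ALTER TABLE YDEPT ADD COLUMN LOCATION CHAR(16)")

row = conn.execute("SELECT DEPTNO, LOCATION FROM YDEPT").fetchone()
print(row)  # ('E31', None)
```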
Temporary tables are especially useful when you need to sort or query intermediate
result tables that contain a large number of rows, but you want to store only a small
subset of those rows permanently.
Example: You can also create this same definition by copying the definition of a
base table using the LIKE clause:
CREATE GLOBAL TEMPORARY TABLE TEMPPROD LIKE PROD;
The SQL statements in the previous examples create identical definitions, even
though table PROD contains two columns, DESCRIPTION and CURDATE, that are
defined as NOT NULL WITH DEFAULT. Unlike the PROD sample table, the
DESCRIPTION and CURDATE columns in the TEMPPROD table are defined as
NOT NULL and do not have defaults, because created temporary tables do not
support non-null default values.
After you run one of the two CREATE statements, the definition of TEMPPROD
exists, but no instances of the table exist. To drop the definition of TEMPPROD, you
must run the following statement:
DROP TABLE TEMPPROD;
An instance of a created temporary table exists at the current server until one of the
following actions occurs:
v The application process ends.
v The remote server connection through which the instance was created
terminates.
v The unit of work in which the instance was created completes.
When you run a ROLLBACK statement, DB2 deletes the instance of the created
temporary table. When you run a COMMIT statement, DB2 deletes the instance
of the created temporary table unless a cursor for accessing the created
temporary table is defined WITH HOLD and is open.
When you run the INSERT statement, DB2 creates an instance of TEMPPROD and
populates that instance with rows from table PROD. When the COMMIT statement
is run, DB2 deletes all rows from TEMPPROD. However, assume that you change
the declaration of cursor C1 to the following declaration:
EXEC SQL DECLARE C1 CURSOR WITH HOLD
FOR SELECT * FROM TEMPPROD;
In this case, DB2 does not delete the contents of TEMPPROD until the application
ends because C1, a cursor defined WITH HOLD, is open when the COMMIT
statement is run. In either case, DB2 drops the instance of TEMPPROD when the
application ends.
Before you can define declared temporary tables, you must create a special
database and table spaces for them. You do that by running the CREATE
DATABASE statement with the AS TEMP clause, and then creating segmented
table spaces in that database. A DB2 subsystem can have only one database for
declared temporary tables, but that database can contain more than one table
| space. There must be at least one table space with an 8-KB page size in the TEMP
| database before you can declare a temporary table.
Example: The following statements create a database and table space for declared
temporary tables:
CREATE DATABASE DTTDB AS TEMP;
CREATE TABLESPACE DTTTS IN DTTDB
SEGSIZE 4;
You can define a declared temporary table in any of the following ways:
v Specify all the columns in the table.
Unlike columns of created temporary tables, columns of declared temporary
tables can include the WITH DEFAULT clause.
v Use a LIKE clause to copy the definition of a base table, created temporary
table, or view.
If the base table or created temporary table that you copy has identity columns,
you can specify that the corresponding columns in the declared temporary table
are also identity columns. Do that by specifying the INCLUDING IDENTITY
COLUMN ATTRIBUTES clause when you define the declared temporary table.
v Use a fullselect to choose specific columns from a base table, created temporary
table, or view.
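Two of the ways to define a temporary table, by listing its columns or from a query, can be roughly illustrated with Python's sqlite3 module. SQLite's CREATE TEMP TABLE is only an analogy: it has no LIKE clause or INCLUDING IDENTITY COLUMN ATTRIBUTES option, and unlike DB2's DEFINITION ONLY form, its AS SELECT form also copies the rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PROD (PRODNO CHAR(6), DESCRIPTION VARCHAR(30))")
conn.execute("INSERT INTO PROD VALUES ('100100', 'GENERATOR')")

# Way 1: specify all of the columns of the temporary table yourself.
conn.execute(
    "CREATE TEMP TABLE TEMPPROD1 (PRODNO CHAR(6), DESCRIPTION VARCHAR(30))")

# Way 2: derive the definition from a query against a base table. This
# stands in for DB2's fullselect form; note that SQLite also copies the
# rows here, whereas DB2's DEFINITION ONLY clause copies none.
conn.execute("CREATE TEMP TABLE TEMPPROD2 AS SELECT PRODNO, DESCRIPTION FROM PROD")

empty = conn.execute("SELECT COUNT(*) FROM TEMPPROD1").fetchone()[0]
copied = conn.execute("SELECT COUNT(*) FROM TEMPPROD2").fetchone()[0]
print(empty, copied)  # 0 1
```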
After you run a DECLARE GLOBAL TEMPORARY TABLE statement, the definition
of the declared temporary table exists as long as the application process runs. If
you need to delete the definition before the application process completes, you can
do that with the DROP TABLE statement. For example, to drop the definition of
TEMPPROD, run the following statement:
DROP TABLE SESSION.TEMPPROD;
DB2 creates an empty instance of a declared temporary table when it runs the
DECLARE GLOBAL TEMPORARY TABLE statement. You can populate the
declared temporary table using INSERT statements, modify the table using
searched or positioned UPDATE or DELETE statements, and query the table using
SELECT statements. You can also create indexes on the declared temporary table.
The ON COMMIT clause that you specify in the DECLARE GLOBAL TEMPORARY
TABLE statement determines whether DB2 keeps or deletes all the rows from the
table when you run a COMMIT statement in an application with a declared
temporary table. ON COMMIT DELETE ROWS, which is the default, causes all
rows to be deleted from the table at a commit, unless a cursor that is defined
WITH HOLD is open on the table.
Example: Suppose that you run the following statement in an application program:
EXEC SQL DECLARE GLOBAL TEMPORARY TABLE TEMPPROD
AS (SELECT * FROM BASEPROD)
DEFINITION ONLY
INCLUDING IDENTITY COLUMN ATTRIBUTES
INCLUDING COLUMN DEFAULTS
ON COMMIT PRESERVE ROWS;
EXEC SQL INSERT INTO SESSION.TEMPPROD SELECT * FROM BASEPROD;
...
EXEC SQL COMMIT;
...
Use the DROP TABLE statement with care: Dropping a table is NOT equivalent
to deleting all its rows. When you drop a table, you lose more than its data and its
definition. You lose all synonyms, views, indexes, and referential and check
constraints associated with that table. You also lose all authorities granted on the
table.
For more information about the DROP statement, see Chapter 5 of DB2 SQL
Reference.
Use the CREATE VIEW statement to define a view and give the view a name, just
as you do for a table. The view created with the following statement shows each
department manager’s name with the department data in the DSN8810.DEPT table.
CREATE VIEW VDEPTM AS
SELECT DEPTNO, MGRNO, LASTNAME, ADMRDEPT
FROM DSN8810.DEPT, DSN8810.EMP
WHERE DSN8810.EMP.EMPNO = DSN8810.DEPT.MGRNO;
When you create a view, you can reference the USER and CURRENT SQLID
special registers in the CREATE VIEW statement. When referencing the view, DB2
uses the value of the USER or CURRENT SQLID that belongs to the user of the
SQL statement (SELECT, UPDATE, INSERT, or DELETE) rather than the creator of
the view. In other words, a reference to a special register in a view definition refers
to its run-time value.
You can use views to limit access to certain kinds of data, such as salary
information. You can also use views for the following actions:
v Make a subset of a table’s data available to an application. For example, a view
based on the employee table might contain rows only for a particular department.
v Combine columns from two or more tables and make the combined data
available to an application. By using a SELECT statement that matches values in
one table with those in another table, you can create a view that presents data
from both tables. However, you can only select data from this type of view. You
cannot update, delete, or insert data using a view that joins two or more
tables.
v Combine rows from two or more tables and make the combined data available to
an application. By using two or more subselects that are connected by UNION or
UNION ALL operators, you can create a view that presents data from several
tables. However, you can only select data from this type of view. You cannot
update, delete, or insert data using a view that contains UNION operations.
v Present computed data, and make the resulting data available to an application.
You can compute such data using any function or operation that you can use in a
SELECT statement.
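Two of the uses in the list above, combining columns from two tables and presenting computed data, can be sketched as follows. Python's sqlite3 module is used for a portable illustration (an assumption); the tables are abbreviated versions of the sample tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE DEPT (DEPTNO CHAR(3), DEPTNAME VARCHAR(36), MGRNO CHAR(6));
    CREATE TABLE EMP  (EMPNO CHAR(6), LASTNAME VARCHAR(15), SALARY REAL);
    INSERT INTO DEPT VALUES ('A00', 'SPIFFY COMPUTER SERVICE', '000010');
    INSERT INTO EMP  VALUES ('000010', 'HAAS', 52750.00);
""")

# Combine columns from two tables, like the VDEPTM view in the text.
conn.execute("""
    CREATE VIEW VDEPTM AS
      SELECT DEPTNO, MGRNO, LASTNAME
      FROM DEPT, EMP
      WHERE EMP.EMPNO = DEPT.MGRNO
""")

# Present computed data, naming the result column with the AS clause.
conn.execute("""
    CREATE VIEW VSAL AS
      SELECT EMPNO, SALARY * 1.1 AS RAISED_SALARY FROM EMP
""")

manager = conn.execute("SELECT LASTNAME FROM VDEPTM").fetchone()[0]
print(manager)  # HAAS
```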
In each case, for every row you insert, you must provide a value for any column
that does not have a default value. For a column that meets one of the following
conditions, you can specify DEFAULT to tell DB2 to insert the default value for that
column:
v Is nullable.
v Is defined with a default value.
v Has data type ROWID. ROWID columns always have default values.
v Is an identity column. Identity columns always have default values.
The values that you can insert into a ROWID column or an identity column depend
on whether the column is defined with GENERATED ALWAYS or GENERATED BY
DEFAULT. See “Inserting data into a ROWID column” on page 30 and “Inserting
data into an identity column” on page 30 for more information.
Recommendation: For static INSERT statements, name all of the columns for
which you are providing values, for the following reasons:
v Your INSERT statement is independent of the table format. (For example, you do
not need to change the statement when a column is added to the table.)
v You can verify that you are giving the values in the correct order.
v Your source statements are more self-descriptive.
If you do not name the columns in a static INSERT statement, and a column is
later added to the table, an error occurs after any rebind of the INSERT
statement unless you change the statement to include a value for the new
column. This is true even if the new column has a default value.
When you list the column names, you must specify their corresponding values in
the same order as in the list of column names.
Example: The following statement inserts information about a new department into
the YDEPT table.
INSERT INTO YDEPT (DEPTNO, DEPTNAME, MGRNO, ADMRDEPT, LOCATION)
VALUES (’E31’, ’DOCUMENTATION’, ’000010’, ’E01’, ’ ’);
After inserting a new department row into your YDEPT table, you can use a
SELECT statement to see what you have loaded into the table. The following SQL
statement shows you all the new department rows that you have inserted:
SELECT *
FROM YDEPT
WHERE DEPTNO LIKE ’E%’
ORDER BY DEPTNO;
Example: The following statement inserts information about a new employee into
the YEMP table. Because YEMP has a foreign key, WORKDEPT, referencing the
primary key, DEPTNO, in YDEPT, the value inserted for WORKDEPT (E31) must be
a value of DEPTNO in YDEPT or null.
INSERT INTO YEMP
VALUES (’000400’, ’RUTHERFORD’, ’B’, ’HAYES’, ’E31’, ’5678’, ’1983-01-01’,
’MANAGER’, 16, ’M’, ’1943-07-10’, 24000, 500, 1900);
Example: The following statement also inserts a row into the YEMP table. Because
the unspecified columns allow nulls, DB2 inserts null values into the columns that
you do not specify. Because YEMP has a foreign key, WORKDEPT, referencing the
primary key, DEPTNO, in YDEPT, the value inserted for WORKDEPT (D11) must be
a value of DEPTNO in YDEPT or null.
The following statement copies data from DSN8810.EMP into the newly created
table:
INSERT INTO TELE
SELECT LASTNAME, FIRSTNME, PHONENO
FROM DSN8810.EMP
WHERE WORKDEPT = ’D21’;
The two previous statements create and fill a table, TELE, that looks similar to the
following table:
NAME2 NAME1 PHONE
=============== ============ =====
PULASKI EVA 7831
JEFFERSON JAMES 2094
MARINO SALVATORE 3780
SMITH DANIEL 0961
JOHNSON SYBIL 8953
PEREZ MARIA 9001
MONTEVERDE ROBERT 3780
The CREATE TABLE statement example creates a table which, at first, is empty.
The table has columns for last names, first names, and phone numbers, but does
not have any rows.
The INSERT statement fills the newly created table with data selected from the
DSN8810.EMP table: the names and phone numbers of employees in department
D21.
Before you insert data into a ROWID column, you must know how the ROWID
column is defined. ROWID columns can be defined as GENERATED ALWAYS or
GENERATED BY DEFAULT. GENERATED ALWAYS means that DB2 generates a
value for the column, and you cannot insert data into that column. If the column is
defined as GENERATED BY DEFAULT, you can insert a value, and DB2 provides a
default value if you do not supply one.
Example: Suppose that tables T1 and T2 have two columns: an integer column and
a ROWID column. For the following statement to run successfully, ROWIDCOL2
must be defined as GENERATED BY DEFAULT.
INSERT INTO T2 (INTCOL2,ROWIDCOL2)
SELECT * FROM T1;
Before you insert data into an identity column, you must know how the column is
defined. Identity columns are defined with the GENERATED ALWAYS or
GENERATED BY DEFAULT clause. GENERATED ALWAYS means that DB2
generates a value for the column, and you cannot insert data into that column. If
Example: Suppose that tables T1 and T2 have two columns: a character column
and an integer column that is defined as an identity column. For the following
statement to run successfully, IDENTCOL2 must be defined as GENERATED BY
DEFAULT.
INSERT INTO T2 (CHARCOL2,IDENTCOL2)
SELECT * FROM T1;
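The GENERATED BY DEFAULT behavior, where the database generates a value if you do not supply one, can be sketched in Python's sqlite3 module, where an INTEGER PRIMARY KEY column behaves similarly (an analogy, not DB2 syntax; SQLite has no direct equivalent of GENERATED ALWAYS):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# An INTEGER PRIMARY KEY column in SQLite behaves like a DB2 identity
# column defined GENERATED BY DEFAULT: you may supply a value, or omit
# it and let the database generate one.
conn.execute("CREATE TABLE T2 (IDENTCOL2 INTEGER PRIMARY KEY, CHARCOL2 CHAR(1))")

conn.execute("INSERT INTO T2 (CHARCOL2) VALUES ('A')")                # generated
conn.execute("INSERT INTO T2 (IDENTCOL2, CHARCOL2) VALUES (10, 'B')")  # supplied

ids = conn.execute("SELECT IDENTCOL2 FROM T2 ORDER BY IDENTCOL2").fetchall()
print(ids)  # [(1,), (10,)]
```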
| Example: In addition to examples that use the DB2 sample tables, the examples in
| this section use an EMPSAMP table that has the following definition:
| CREATE TABLE EMPSAMP
| (EMPNO INTEGER GENERATED ALWAYS AS IDENTITY,
| NAME CHAR(30),
| SALARY DECIMAL(10,2),
| DEPTNO SMALLINT,
| LEVEL CHAR(30),
| HIRETYPE VARCHAR(30) NOT NULL WITH DEFAULT ’New Hire’,
| HIREDATE DATE NOT NULL WITH DEFAULT);
| Assume that you need to insert a row for a new employee into the EMPSAMP
| table. To find out the values for the generated EMPNO, HIRETYPE, and HIREDATE
| columns, use the following SELECT from INSERT statement:
| SELECT EMPNO, HIRETYPE, HIREDATE
| FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, DEPTNO, LEVEL)
| VALUES(’Mary Smith’, 35000.00, 11, ’Associate’));
| The SELECT statement returns the DB2-generated identity value for the EMPNO
| column, the default value ’New Hire’ for the HIRETYPE column, and the value of
| the CURRENT DATE special register for the HIREDATE column.
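DB2's SELECT from INSERT syntax has no direct SQLite equivalent, but the idea of retrieving generated and defaulted values after an insert can be sketched with Python's sqlite3 module (an illustration under that assumption). Here lastrowid stands in for the DB2-generated identity value, and the defaulted column is re-read from the inserted row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE EMPSAMP
      (EMPNO    INTEGER PRIMARY KEY,   -- stands in for the identity column
       NAME     CHAR(30),
       HIRETYPE VARCHAR(30) NOT NULL DEFAULT 'New Hire')
""")

cur = conn.execute("INSERT INTO EMPSAMP (NAME) VALUES ('Mary Smith')")

# The generated key is available through lastrowid; the defaulted column
# is obtained by re-reading the row that was just inserted.
empno = cur.lastrowid
row = conn.execute(
    "SELECT NAME, HIRETYPE FROM EMPSAMP WHERE EMPNO = ?", (empno,)).fetchone()
print(empno, row)  # 1 ('Mary Smith', 'New Hire')
```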
| The INSERT statement in the FROM clause of the following SELECT statement
| inserts a new employee into the EMPSAMP table:
| SELECT NAME, SALARY
| FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, LEVEL)
| VALUES(’Mary Smith’, 35000.00, ’Associate’));
| The SELECT statement returns a salary of 40000.00 for Mary Smith instead of the
| initial salary of 35000.00 that was explicitly specified in the INSERT statement.
| Example: You can retrieve all the values for a row that is inserted into a structure:
| EXEC SQL SELECT * INTO :empstruct
| FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, DEPTNO, LEVEL)
| VALUES(’Mary Smith’, 35000.00, 11, ’Associate’));
| For this example, :empstruct is a host variable structure that is declared with
| variables for each of the columns in the EMPSAMP table.
| The value 12 satisfies the search condition of the view definition, and the result
| table consists of the value for C1 in the inserted row.
| If you use a value that does not satisfy the search condition of the view definition,
| the insert operation fails, and DB2 returns an error.
| Example: Inserting rows with ROWID values: To see the values of the ROWID
| columns that are inserted into the employee photo and resume table, you can
| declare the following cursor:
| EXEC SQL DECLARE CS1 CURSOR FOR
| SELECT EMP_ROWID
| FROM FINAL TABLE (INSERT INTO DSN8810.EMP_PHOTO_RESUME (EMPNO)
| SELECT EMPNO FROM DSN8810.EMP);
| Example: Using the FETCH FIRST clause: To see only the first five rows that are
| inserted into the employee photo and resume table, use the FETCH FIRST clause:
| EXEC SQL DECLARE CS2 CURSOR FOR
| SELECT EMP_ROWID
| FROM FINAL TABLE (INSERT INTO DSN8810.EMP_PHOTO_RESUME (EMPNO)
| SELECT EMPNO FROM DSN8810.EMP)
| FETCH FIRST 5 ROWS ONLY;
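The FETCH FIRST n ROWS ONLY clause corresponds to LIMIT in many other SQL dialects; a small sketch using Python's sqlite3 module (an assumption; DB2 uses the FETCH FIRST syntax shown above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPNO INTEGER)")
conn.executemany("INSERT INTO EMP VALUES (?)", [(i,) for i in range(1, 11)])

# LIMIT 5 returns only the first five rows of the ordered result,
# like FETCH FIRST 5 ROWS ONLY in DB2.
rows = conn.execute("SELECT EMPNO FROM EMP ORDER BY EMPNO LIMIT 5").fetchall()
print(rows)  # [(1,), (2,), (3,), (4,), (5,)]
```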
| Example: Using the INPUT SEQUENCE clause: To retrieve rows in the order in
| which they are inserted, use the INPUT SEQUENCE clause:
| EXEC SQL DECLARE CS3 CURSOR FOR
| SELECT EMP_ROWID
| FROM FINAL TABLE (INSERT INTO DSN8810.EMP_PHOTO_RESUME (EMPNO)
| VALUES(:hva_empno)
| FOR 5 ROWS)
| ORDER BY INPUT SEQUENCE;
| Effect on cursor sensitivity: When you declare a scrollable cursor, the cursor
| must be declared with the INSENSITIVE keyword if an INSERT statement is in the
| FROM clause of the cursor specification. The result table is generated during OPEN
| cursor processing and does not reflect any future changes. You cannot declare the
| cursor with the SENSITIVE DYNAMIC or SENSITIVE STATIC keywords. For
| information about cursor sensitivity, see “Using a scrollable cursor” on page 104.
| Example: Assume that your application declares a cursor, opens the cursor,
| performs a fetch, updates the table, and then fetches additional rows:
| EXEC SQL DECLARE CS1 CURSOR FOR
| SELECT SALARY
| FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, LEVEL)
| SELECT NAME, INCOME, BAND FROM OLD_EMPLOYEE);
| EXEC SQL OPEN CS1;
| EXEC SQL FETCH CS1 INTO :hv_salary;
| /* print fetch result */
| ...
| EXEC SQL UPDATE EMPSAMP SET SALARY = SALARY + 500;
| while (SQLCODE == 0) {
| EXEC SQL FETCH CS1 INTO :hv_salary;
| /* print fetch result */
| ...
| }
| The fetches that occur after the update processing return the rows that were
| generated during OPEN cursor processing. However, if you use a simple SELECT
| (with no INSERT statement in the FROM clause), the fetches might return the
| updated values, depending on the access path that DB2 uses.
| Effect of WITH HOLD: When you declare a cursor with the WITH HOLD option,
| and open the cursor, all of the rows are inserted into the target table. The WITH
| HOLD option has no effect on the SELECT from INSERT statement of the cursor
| definition. After your application performs a commit, you can continue to retrieve all
| of the inserted rows. For information about held cursors, see “Held and non-held
| cursors” on page 112.
| Example: Assume that the employee table in the DB2 sample application has five
| rows. Your application declares a WITH HOLD cursor, opens the cursor, fetches two
| rows, performs a commit, and then fetches the third row successfully:
| Example: Assume that your application declares a cursor, sets a savepoint, opens
| the cursor, sets another savepoint, rolls back to the second savepoint, and then
| rolls back to the first savepoint:
| EXEC SQL DECLARE CS3 CURSOR FOR
| SELECT EMP_ROWID
| FROM FINAL TABLE (INSERT INTO DSN8810.EMP_PHOTO_RESUME (EMPNO)
| SELECT EMPNO FROM DSN8810.EMP);
| EXEC SQL SAVEPOINT A ON ROLLBACK RETAIN CURSORS; /* Sets 1st savepoint */
| EXEC SQL OPEN CS3;
| EXEC SQL SAVEPOINT B ON ROLLBACK RETAIN CURSORS; /* Sets 2nd savepoint */
| ...
| EXEC SQL ROLLBACK TO SAVEPOINT B; /* Rows still in DSN8810.EMP_PHOTO_RESUME */
| ...
| EXEC SQL ROLLBACK TO SAVEPOINT A; /* All inserted rows are undone */
| Example: Assume that the employee table of the DB2 sample application has one
| row, and that the SALARY column has a value of 9,999,000.00.
| EXEC SQL SELECT EMPNO INTO :hv_empno
| FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY)
| SELECT FIRSTNAME || MIDINIT || LASTNAME,
| SALARY + 10000.00
| FROM DSN8810.EMP)
| The addition of 10000.00 causes a decimal overflow to occur, and no rows are
| inserted into the EMPSAMP table.
| During OPEN cursor processing: If the insertion of any row fails during the
| OPEN cursor processing, all previously successful insertions are undone. The result
| table of the INSERT is empty.
| During FETCH processing: If the FETCH statement fails while retrieving rows
| from the result table of the insert operation, a negative SQLCODE is returned to the
| application, but the result table still contains the original number of rows that was
| determined during the OPEN cursor processing. At this point, you can undo all of
| the inserts.
You cannot update rows in a created temporary table, but you can update rows in a
declared temporary table.
The SET clause names the columns that you want to update and provides the
values you want to assign to those columns. You can replace a column value in the
SET clause with any of the following items:
v A null value
The column to which you assign the null value must not be defined as NOT
NULL.
v An expression
An expression can be any of the following items:
– A column
– A constant
– A fullselect that returns a scalar
– A host variable
– A special register
In addition, you can replace one or more column values in the SET clause with the
column values in a row that is returned by a fullselect.
If you omit the WHERE clause, DB2 updates every row in the table or view with
the values you supply.
Example: The following statement supplies a missing middle initial and changes the
job for employee 000200.
UPDATE YEMP
SET MIDINIT = ’H’, JOB = ’FIELDREP’
WHERE EMPNO = ’000200’;
The following statement gives everyone in department D11 a raise of 400.00. The
statement can update several rows.
UPDATE YEMP
SET SALARY = SALARY + 400.00
WHERE WORKDEPT = ’D11’;
The following statement sets the salary and bonus for employee 000190 to the
average salary and minimum bonus for all employees.
UPDATE YEMP
SET (SALARY, BONUS) =
(SELECT AVG(SALARY), MIN(BONUS)
FROM EMP)
WHERE EMPNO = ’000190’;
You can use DELETE to remove all rows from a created temporary table or
declared temporary table. However, you can use DELETE with a WHERE clause to
remove only selected rows from a declared temporary table.
This DELETE statement deletes each row in the YEMP table that has the
employee number 000060.
DELETE FROM YEMP
WHERE EMPNO = ’000060’;
When this statement executes, DB2 deletes any row from the YEMP table that
meets the search condition.
If DB2 finds an error while executing your DELETE statement, it stops deleting data
and returns error codes in the SQLCODE and SQLSTATE host variables or related
fields in the SQLCA. The data in the table does not change.
If the statement executes, the table continues to exist (that is, you can insert rows
into it), but it is empty. All existing views and authorizations on the table remain
intact when using DELETE. By comparison, using DROP TABLE drops all views
and authorizations, which can invalidate plans and packages. For information about
the DROP statement, see “Dropping tables: DROP TABLE” on page 25.
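The difference between emptying a table with DELETE and removing it with DROP TABLE can be sketched as follows. Python's sqlite3 module is used for illustration (an assumption; in DB2, DROP TABLE additionally drops synonyms, indexes, constraints, and authorities, as described above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YEMP (EMPNO CHAR(6))")
conn.execute("CREATE VIEW VEMP AS SELECT EMPNO FROM YEMP")
conn.execute("INSERT INTO YEMP VALUES ('000060')")

# DELETE without a WHERE clause empties the table, but the table and a
# view that depends on it remain usable, and you can insert new rows.
conn.execute("DELETE FROM YEMP")
assert conn.execute("SELECT COUNT(*) FROM VEMP").fetchone()[0] == 0
conn.execute("INSERT INTO YEMP VALUES ('000070')")  # the table still exists

# DROP TABLE removes the table itself; afterward, queries against it fail.
conn.execute("DROP TABLE YEMP")
try:
    conn.execute("SELECT * FROM YEMP")
    dropped = False
except sqlite3.OperationalError:
    dropped = True
print(dropped)  # True
```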
DB2 supports the following types of joins: inner join, left outer join, right outer join,
and full outer join. You can specify joins in the FROM clause of a query.
The examples in this section use the following two tables to show various types of
joins:
The PARTS table
PART    PROD# SUPPLIER
======= ===== ============
WIRE    10    ACWF
OIL     160   WESTERN_CHEM
MAGNETS 10    BATEMAN
PLASTIC 30    PLASTIK_CORP
BLADES  205   ACE_STEEL

The PRODUCTS table
PROD# PRODUCT     PRICE
===== =========== =====
505   SCREWDRIVER  3.70
30    RELAY        7.55
205   SAW         18.90
10    GENERATOR   45.75
Figure 2 illustrates how these two tables can be combined using the three outer join
functions.
Figure 2. Three outer joins from the PARTS and PRODUCTS tables
The result table contains data joined from all of the tables, for rows that satisfy the
search conditions.
The result columns of a join have names if the outermost SELECT list refers to
base columns. But, if you use a function (such as COALESCE or VALUE) to build a
column of the result, that column does not have a name unless you use the AS
clause in the SELECT list.
Example: You can join the PARTS and PRODUCTS tables on the PROD# column
to get a table of parts with their suppliers and the products that use the parts.
To do this, you can use either one of the following SELECT statements:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS, PRODUCTS
WHERE PARTS.PROD# = PRODUCTS.PROD#;
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS INNER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#;
In either case, the result table contains one row for each pair of rows from the
two tables that satisfies the join condition; rows that have no matching product
number in the other table do not appear in the result.
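You can verify that the two forms of the inner join produce identical results. The following sketch builds the PARTS and PRODUCTS tables in Python's sqlite3 module (an illustration only; the PROD# column is renamed PROD because # is not a portable identifier character) and runs both statements:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE PARTS    (PART TEXT, PROD TEXT, SUPPLIER TEXT);
    CREATE TABLE PRODUCTS (PROD TEXT, PRODUCT TEXT, PRICE REAL);
    INSERT INTO PARTS VALUES
      ('WIRE','10','ACWF'), ('OIL','160','WESTERN_CHEM'),
      ('MAGNETS','10','BATEMAN'), ('PLASTIC','30','PLASTIK_CORP'),
      ('BLADES','205','ACE_STEEL');
    INSERT INTO PRODUCTS VALUES
      ('505','SCREWDRIVER',3.70), ('30','RELAY',7.55),
      ('205','SAW',18.90), ('10','GENERATOR',45.75);
""")

# Implicit inner join: comma in FROM, join condition in the WHERE clause.
r1 = conn.execute("""
    SELECT PART, SUPPLIER, PARTS.PROD, PRODUCT
    FROM PARTS, PRODUCTS
    WHERE PARTS.PROD = PRODUCTS.PROD
    ORDER BY PART
""").fetchall()

# Explicit inner join: INNER JOIN keywords, condition in the ON clause.
r2 = conn.execute("""
    SELECT PART, SUPPLIER, PARTS.PROD, PRODUCT
    FROM PARTS INNER JOIN PRODUCTS
    ON PARTS.PROD = PRODUCTS.PROD
    ORDER BY PART
""").fetchall()

print(r1 == r2, len(r1))  # True 4: OIL has no matching product number
```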
You can specify more complicated join conditions to obtain different sets of results.
For example, to eliminate the suppliers that begin with the letter A from the table of
parts, suppliers, product numbers and products, write a query like the following
query:
The result of the query is all rows that do not have a supplier that begins with A.
The result table looks like the following output:
PART SUPPLIER PROD# PRODUCT
======= ============ ===== ==========
MAGNETS BATEMAN 10 GENERATOR
PLASTIC PLASTIK_CORP 30 RELAY
The following SQL statement joins table DSN8810.PROJ to itself and returns the
number and name of each major project followed by the number and name of the
project that is part of it:
SELECT A.PROJNO, A.PROJNAME, B.PROJNO, B.PROJNAME
FROM DSN8810.PROJ A, DSN8810.PROJ B
WHERE A.PROJNO = B.MAJPROJ;
In this example, the comma in the FROM clause implicitly specifies an inner join,
and it acts the same as if the INNER JOIN keywords had been used. When you
use the comma for an inner join, you must specify the join condition on the WHERE
clause. When you use the INNER JOIN keywords, you must specify the join
condition on the ON clause.
The join condition for a full outer join must be a simple search condition that
compares two columns or an invocation of a cast function that has a column name
as its argument.
Example: The following query performs a full outer join of the PARTS and
PRODUCTS tables:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS FULL OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#;
The result table from the query looks similar to the following output:
The product number in the result of the example for “Full outer join” on page 41 is
null for SCREWDRIVER, even though the PRODUCTS table contains a product
number for SCREWDRIVER. If you select PRODUCTS.PROD# instead, PROD# is
null for OIL. If you select both PRODUCTS.PROD# and PARTS.PROD#, the result
contains two columns, both of which contain some null values. You can merge data
from both columns into a single column, eliminating the null values, by using the
COALESCE function.
With the same PARTS and PRODUCTS tables, the following example merges the
non-null data from the PROD# columns:
SELECT PART, SUPPLIER,
COALESCE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM, PRODUCT
FROM PARTS FULL OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#;
The AS clause (AS PRODNUM) provides a name for the result of the COALESCE
function.
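The COALESCE technique can be sketched in Python's sqlite3 module (an illustration only; PROD# is renamed PROD). Because some SQLite releases lack FULL OUTER JOIN, the sketch emulates it with a LEFT OUTER JOIN plus a UNION ALL of the unmatched PRODUCTS rows; COALESCE then yields a PRODNUM column with no null values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE PARTS    (PART TEXT, PROD TEXT, SUPPLIER TEXT);
    CREATE TABLE PRODUCTS (PROD TEXT, PRODUCT TEXT, PRICE REAL);
    INSERT INTO PARTS VALUES
      ('WIRE','10','ACWF'), ('OIL','160','WESTERN_CHEM'),
      ('MAGNETS','10','BATEMAN'), ('PLASTIC','30','PLASTIK_CORP'),
      ('BLADES','205','ACE_STEEL');
    INSERT INTO PRODUCTS VALUES
      ('505','SCREWDRIVER',3.70), ('30','RELAY',7.55),
      ('205','SAW',18.90), ('10','GENERATOR',45.75);
""")

# Full outer join emulated as LEFT OUTER JOIN plus the unmatched
# PRODUCTS rows; COALESCE merges the two PROD columns into PRODNUM.
rows = conn.execute("""
    SELECT PART, SUPPLIER,
           COALESCE(PARTS.PROD, PRODUCTS.PROD) AS PRODNUM, PRODUCT
    FROM PARTS LEFT OUTER JOIN PRODUCTS
      ON PARTS.PROD = PRODUCTS.PROD
    UNION ALL
    SELECT NULL, NULL, PROD, PRODUCT
    FROM PRODUCTS
    WHERE PROD NOT IN (SELECT PROD FROM PARTS)
""").fetchall()

prodnums = [r[2] for r in rows]
print(len(rows), all(p is not None for p in prodnums))  # 6 True
```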
As in an inner join, the join condition can be any simple or compound search
condition that does not contain a subquery reference.
Example: To include rows from the PARTS table that have no matching values in
the PRODUCTS table, and to include prices that exceed $10.00, run the following
query:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT, PRICE
FROM PARTS LEFT OUTER JOIN PRODUCTS
ON PARTS.PROD#=PRODUCTS.PROD#
AND PRODUCTS.PRICE>10.00;
A row from the PRODUCTS table is in the result table only if its product number
matches the product number of a row in the PARTS table and the price is greater
than $10.00 for that row. Rows in which the PRICE value does not exceed $10.00
are included in the result of the join, but the PRICE value is set to null.
In this result table, the row for PROD# 30 has null values on the right two columns
because the price of PROD# 30 is less than $10.00. PROD# 160 has null values on
the right two columns because PROD# 160 does not match another product
number.
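The effect of placing the price predicate in the ON clause rather than in a WHERE clause can be verified with a small sketch (Python's sqlite3 module; PROD# renamed PROD; an illustration only). With the predicate in the ON clause, every PARTS row survives, some with null PRODUCT and PRICE values; moved to a WHERE clause, the same predicate removes those rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE PARTS    (PART TEXT, PROD TEXT, SUPPLIER TEXT);
    CREATE TABLE PRODUCTS (PROD TEXT, PRODUCT TEXT, PRICE REAL);
    INSERT INTO PARTS VALUES
      ('WIRE','10','ACWF'), ('OIL','160','WESTERN_CHEM'),
      ('MAGNETS','10','BATEMAN'), ('PLASTIC','30','PLASTIK_CORP'),
      ('BLADES','205','ACE_STEEL');
    INSERT INTO PRODUCTS VALUES
      ('505','SCREWDRIVER',3.70), ('30','RELAY',7.55),
      ('205','SAW',18.90), ('10','GENERATOR',45.75);
""")

# Predicate in the ON clause: all PARTS rows survive; matches that fail
# the price test appear with null PRODUCT and PRICE values.
on_rows = conn.execute("""
    SELECT PART, PRODUCT, PRICE
    FROM PARTS LEFT OUTER JOIN PRODUCTS
      ON PARTS.PROD = PRODUCTS.PROD AND PRODUCTS.PRICE > 10.00
""").fetchall()

# The same predicate in a WHERE clause is applied after the join and
# removes the rows whose PRICE value is null or too low.
where_rows = conn.execute("""
    SELECT PART, PRODUCT, PRICE
    FROM PARTS LEFT OUTER JOIN PRODUCTS
      ON PARTS.PROD = PRODUCTS.PROD
    WHERE PRODUCTS.PRICE > 10.00
""").fetchall()

print(len(on_rows), len(where_rows))  # 5 3
```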
As in an inner join, the join condition can be any simple or compound search
condition that does not contain a subquery reference.
Example: To include rows from the PRODUCTS table that have no corresponding
rows in the PARTS table, execute this query:
SELECT PART, SUPPLIER, PRODUCTS.PROD#, PRODUCT, PRICE
FROM PARTS RIGHT OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#
AND PRODUCTS.PRICE>10.00;
A row from the PARTS table is in the result table only if its product number matches
the product number of a row in the PRODUCTS table and the price is greater than
10.00 for that row.
Because the PRODUCTS table can have rows with nonmatching product numbers
in the result table, and the PRICE column is in the PRODUCTS table, rows in which
PRICE is less than or equal to 10.00 are included in the result. The PARTS
columns contain null values for these rows in the result table.
A join operation is part of a FROM clause; therefore, for the purpose of predicting
which rows will be returned from a SELECT statement containing a join operation,
assume that the join operation is performed first.
Example: Suppose that you want to obtain a list of part names, supplier names,
product numbers, and product names from the PARTS and PRODUCTS tables. You
want to include rows from either table where the PROD# value does not match a
PROD# value in the other table, which means that you need to do a full outer join.
You also want to exclude rows for product number 10. Consider the following
SELECT statement:
SELECT PART, SUPPLIER,
VALUE(PARTS.PROD#,PRODUCTS.PROD#) AS PRODNUM, PRODUCT
FROM PARTS FULL OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#
WHERE PARTS.PROD# <> ’10’ AND PRODUCTS.PROD# <> ’10’;
DB2 performs the join operation first. The result of the join operation includes rows
from one table that do not have corresponding rows from the other table. However,
the WHERE clause then excludes the rows from both tables that have null values
for the PROD# column.
| To exclude the rows for product number 10 but keep the unmatched rows from
| both tables, apply the predicate to each table separately, before the join, in
| nested table expressions in the FROM clause. DB2 then performs the full outer
| join operation, which includes rows in one table that do not have a
| corresponding row in the other table. The final result includes rows with the
| null value for the PROD# column and looks similar to the following output:
PART SUPPLIER PRODNUM PRODUCT
======= ============ ======= ===========
OIL WESTERN_CHEM 160 -----------
BLADES ACE_STEEL 205 SAW
PLASTIC PLASTIK_CORP 30 RELAY
------- ------------ 505 SCREWDRIVER
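Filtering each table before the join can be sketched by applying the predicate inside nested table expressions, as the text describes. The sketch below uses Python's sqlite3 module (an illustration only; PROD# renamed PROD) and emulates the full outer join with a LEFT OUTER JOIN plus a UNION ALL of the unmatched PRODUCTS rows; its result matches the four-row output shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE PARTS    (PART TEXT, PROD TEXT, SUPPLIER TEXT);
    CREATE TABLE PRODUCTS (PROD TEXT, PRODUCT TEXT, PRICE REAL);
    INSERT INTO PARTS VALUES
      ('WIRE','10','ACWF'), ('OIL','160','WESTERN_CHEM'),
      ('MAGNETS','10','BATEMAN'), ('PLASTIC','30','PLASTIK_CORP'),
      ('BLADES','205','ACE_STEEL');
    INSERT INTO PRODUCTS VALUES
      ('505','SCREWDRIVER',3.70), ('30','RELAY',7.55),
      ('205','SAW',18.90), ('10','GENERATOR',45.75);
""")

# Each table is filtered in a nested table expression before the join,
# so unmatched rows (other than those for product number 10) survive.
rows = conn.execute("""
    SELECT PART, SUPPLIER,
           COALESCE(PARTX.PROD, PRODX.PROD) AS PRODNUM, PRODUCT
    FROM (SELECT PART, SUPPLIER, PROD FROM PARTS WHERE PROD <> '10') AS PARTX
         LEFT OUTER JOIN
         (SELECT PROD, PRODUCT FROM PRODUCTS WHERE PROD <> '10') AS PRODX
      ON PARTX.PROD = PRODX.PROD
    UNION ALL
    SELECT NULL, NULL, PROD, PRODUCT
    FROM PRODUCTS
    WHERE PROD <> '10'
      AND PROD NOT IN (SELECT PROD FROM PARTS WHERE PROD <> '10')
""").fetchall()

print(sorted(r[2] for r in rows))  # ['160', '205', '30', '505']
```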
DB2 determines the intermediate and final results of the previous query by
performing the following logical steps:
1. Join the employee and project tables on the employee number, dropping the
rows with no matching employee number in the project table.
2. Join the intermediate result table with the department table on matching
department numbers.
3. Process the select list in the final result table, leaving only four columns.
Using more than one join type: You can use more than one join type in the
FROM clause. Suppose that you want a result table that shows employees whose
last name begins with ’S’ or a letter after ’S’, their department names, and the
projects that they are responsible for, if any. You can use the following SELECT
statement:
SELECT EMPNO, LASTNAME, DEPTNAME, PROJNO
FROM DSN8810.EMP INNER JOIN DSN8810.DEPT
ON WORKDEPT = DSN8810.DEPT.DEPTNO
LEFT OUTER JOIN DSN8810.PROJ
ON EMPNO = RESPEMP
WHERE LASTNAME > ’S’;
DB2 determines the intermediate and final results of the previous query by
performing the following logical steps:
1. Join the employee and department tables on matching department numbers,
dropping the rows where the last name begins with a letter before ’S’.
2. Join the intermediate result table with the project table on the employee number,
keeping the rows with no matching employee number in the project table.
3. Process the select list in the final result table, leaving only four columns.
The correlated references are valid because they do not occur in the table
expression where CHEAP_PARTS is defined. The correlated references are from a
table specification at a higher level in the hierarchy of subqueries.
Example of using a nested table expression as the left operand of a join: The
following query contains a fullselect as the left operand of a left outer join with the
PRODUCTS table. The correlation name is PARTX.
SELECT PART, SUPPLIER, PRODNUM, PRODUCT
FROM (SELECT PART, PROD# AS PRODNUM, SUPPLIER
FROM PARTS
WHERE PROD# < ’200’) AS PARTX
LEFT OUTER JOIN PRODUCTS
ON PRODNUM = PROD#;
Example: Using a table function as an operand of a join: You can join the
results of a user-defined table function with a table, just as you can join two tables.
For example, suppose CVTPRICE is a table function that converts the prices in the
PRODUCTS table to the currency you specify and returns the PRODUCTS table
with the prices in those units. You can obtain a table of parts, suppliers, and product
prices with the prices in your choice of currency by executing a query similar to the
following query:
SELECT PART, SUPPLIER, PARTS.PROD#, Z.PRODUCT, Z.PRICE
FROM PARTS, TABLE(CVTPRICE(:CURRENCY)) AS Z
WHERE PARTS.PROD# = Z.PROD#;
Example: In this example, the correlated reference T.C2 is valid because the table
specification, to which it refers, T, is to its left.
SELECT T.C1, Z.C5
FROM T, TABLE(TF3(T.C2)) AS Z
WHERE T.C3 = Z.C4;
If you specify the join in the opposite order, with T following TABLE(TF3(T.C2)),
then T.C2 is invalid.
Example: In this example, the correlated reference D.DEPTNO is valid because the
nested table expression within which it appears is preceded by TABLE and the table
specification D appears to the left of the nested table expression in the FROM
clause.
SELECT D.DEPTNO, D.DEPTNAME,
EMPINFO.AVGSAL, EMPINFO.EMPCOUNT
FROM DEPT D,
TABLE(SELECT AVG(E.SALARY) AS AVGSAL,
COUNT(*) AS EMPCOUNT
FROM EMP E
WHERE E.WORKDEPT=D.DEPTNO) AS EMPINFO;
Conceptual overview
Suppose that you want a list of the employee numbers, names, and commissions of
all employees working on a particular project, whose project number is MA2111.
The first part of the SELECT statement is easy to write:
SELECT EMPNO, LASTNAME, COMM
FROM DSN8810.EMP
WHERE EMPNO
.
.
.
But you cannot proceed because the DSN8810.EMP table does not include project
number data. You do not know which employees are working on project MA2111
without issuing another SELECT statement against the DSN8810.EMPPROJACT
table.
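You can combine the two statements into one by placing the second SELECT in the WHERE clause of the first, as a subquery:

```sql
SELECT EMPNO, LASTNAME, COMM
  FROM DSN8810.EMP
  WHERE EMPNO IN
    (SELECT EMPNO
       FROM DSN8810.EMPPROJACT
       WHERE PROJNO = 'MA2111');
```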
To better understand the results of this SQL statement, imagine that DB2 goes
through the following process:
1. DB2 evaluates the subquery to obtain a list of EMPNO values:
(SELECT EMPNO
FROM DSN8810.EMPPROJACT
WHERE PROJNO = ’MA2111’);
The result is in an interim result table, similar to the one shown in the following
output:
EMPNO
======
000200
000200
000220
2. The interim result table then serves as a list in the search condition of the outer
SELECT. Effectively, DB2 executes this statement:
SELECT EMPNO, LASTNAME, COMM
FROM DSN8810.EMP
WHERE EMPNO IN
(’000200’, ’000220’);
This kind of subquery is uncorrelated. In the previous query, for example, the
content of the subquery is the same for every row of the table DSN8810.EMP.
Subqueries that vary in content from row to row or group to group are correlated
subqueries. For information about correlated subqueries, see “Using correlated
subqueries” on page 53. All of the following information that precedes the section
about correlated subqueries applies to both correlated and uncorrelated subqueries.
Subqueries can also appear in the predicates of other subqueries. Such subqueries
are nested subqueries at some level of nesting. For example, a subquery within a
subquery within an outer SELECT has a nesting level of 2. DB2 allows nesting
down to a level of 15, but few queries require a nesting level greater than 1.
The relationship of a subquery to its outer SELECT is the same as the relationship
of a nested subquery to a subquery, and the same rules apply, except where
otherwise noted.
Basic predicate
You can use a subquery immediately after any of the comparison operators. If you
do, the subquery can return at most one value. DB2 compares that value with the
value to the left of the comparison operator.
Example: The following SQL statement returns the employee numbers, names, and
salaries for employees whose education level is higher than the average
company-wide education level.
SELECT EMPNO, LASTNAME, SALARY
FROM DSN8810.EMP
WHERE EDLEVEL >
(SELECT AVG(EDLEVEL)
FROM DSN8810.EMP);
If a subquery that returns one or more null values gives you unexpected results,
see the description of quantified predicates in Chapter 2 of DB2 SQL Reference.
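ALL keyword
Suppose that you use the > operator with ALL in a WHERE clause like this:

```sql
WHERE expression > ALL (subquery)
```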
To satisfy this WHERE clause, the column value must be greater than all of the
values that the subquery returns. A subquery that returns an empty result table
satisfies the predicate.
Now suppose that you use the <> operator with ALL in a WHERE clause like this:
WHERE (column1, column2, ... columnn) <> ALL (subquery)
To satisfy this WHERE clause, each column value must be unequal to all of the
values in the corresponding column of the result table that the subquery returns. A
subquery that returns an empty result table satisfies the predicate.
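ANY or SOME keyword
Suppose that you use the > operator with ANY in a WHERE clause like this:

```sql
WHERE expression > ANY (subquery)
```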
To satisfy this WHERE clause, the value in the expression must be greater than at
least one of the values (that is, greater than the lowest value) that the subquery
returns. A subquery that returns an empty result table does not satisfy the predicate.
Now suppose that you use the = operator with SOME in a WHERE clause like this:
WHERE (column1, column2, ... columnn) = SOME (subquery)
To satisfy this WHERE clause, each column value must be equal to at least one of
the values in the corresponding column of the result table that the subquery returns.
A subquery that returns an empty result table does not satisfy the predicate.
IN keyword
You can use IN to say that the value or values on the left side of the IN operator
must be among the values that are returned by the subquery. Using IN is equivalent
to using = ANY or = SOME.
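For example, the following sketch selects employees who work in departments that report to department A00 (the value A00 is illustrative):

```sql
SELECT EMPNO, LASTNAME
  FROM DSN8810.EMP
  WHERE WORKDEPT IN
    (SELECT DEPTNO
       FROM DSN8810.DEPT
       WHERE ADMRDEPT = 'A00');
```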
EXISTS keyword
In the subqueries presented thus far, DB2 evaluates the subquery and uses the
result as part of the WHERE clause of the outer SELECT. In contrast, when you
use the keyword EXISTS, DB2 simply checks whether the subquery returns one or
more rows. Returning one or more rows satisfies the condition; returning no rows
does not satisfy the condition.
Example: The search condition in the following query is satisfied if any project that
is represented in the project table has an estimated start date that is later than 1
January 2005:
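The following sketch expresses this search condition; PRSTDATE is assumed to be the estimated start date column of the sample project table:

```sql
SELECT DEPTNO
  FROM DSN8810.DEPT
  WHERE EXISTS
    (SELECT *
       FROM DSN8810.PROJ
       WHERE PRSTDATE > '2005-01-01');
```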
The result of the subquery is always the same for every row that is examined for
the outer SELECT. Therefore, either every row appears in the result of the outer
SELECT or none appears. A correlated subquery is more powerful than the
uncorrelated subquery that is used in this example because the result of a
correlated subquery is evaluated for each row of the outer SELECT.
As shown in the example, you do not need to specify column names in the
subquery of an EXISTS clause. Instead, you can code SELECT *. You can also use
the EXISTS keyword with the NOT keyword in order to select rows when the data
or condition you specify does not exist; that is, you can code the following clause:
WHERE NOT EXISTS (SELECT ...);
For this example, you need to use a correlated subquery, which differs from an
uncorrelated subquery. An uncorrelated subquery compares the employee’s
education level to the average of the entire company, which requires looking at the
entire table. A correlated subquery evaluates only the department that corresponds
to the particular employee.
In the subquery, you tell DB2 to compute the average education level for the
department number in the current row. A query that does this follows:
SELECT EMPNO, LASTNAME, WORKDEPT, EDLEVEL
FROM DSN8810.EMP X
WHERE EDLEVEL >
(SELECT AVG(EDLEVEL)
FROM DSN8810.EMP
WHERE WORKDEPT = X.WORKDEPT);
Consider what happens when the subquery executes for a given row of
DSN8810.EMP. Before it executes, X.WORKDEPT receives the value of the
WORKDEPT column for that row. Suppose, for example, that the row is for
Christine Haas. Her work department is A00, which is the value of WORKDEPT for
that row. Therefore, the following is the subquery that is executed for that row:
(SELECT AVG(EDLEVEL)
FROM DSN8810.EMP
WHERE WORKDEPT = ’A00’);
The subquery produces the average education level of Christine’s department. The
outer SELECT then compares this average to Christine’s own education level. For
some other row for which WORKDEPT has a different value, that value appears in
the subquery in place of A00. For example, in the row for Michael L Thompson, this
value is B01, and the subquery for his row delivers the average education level for
department B01.
The result table produced by the query is similar to the following output:
EMPNO LASTNAME WORKDEPT EDLEVEL
====== ========= ======== =======
000010 HAAS A00 18
000030 KWAN C01 20
000070 PULASKI D21 16
000090 HENDERSON E11 16
When you use a correlated reference in a subquery, the correlation name can be
defined in the outer SELECT or in any of the subqueries that contain the reference.
Suppose, for example, that a query contains subqueries A, B, and C, and that A
contains B and B contains C. The subquery C can use a correlation reference that
is defined in B, A, or the outer SELECT.
You can define a correlation name for each table name in a FROM clause. Specify
the correlation name after its table name. Leave one or more blanks between a
table name and its correlation name. You can include the word AS between the
table name and the correlation name to increase the readability of the SQL
statement.
The following example demonstrates the use of a correlated reference in the select
list of a subquery:
UPDATE BP1TBL T1
SET (KEY1, CHAR1, VCHAR1) =
(SELECT VALUE(T2.KEY1,T1.KEY1), VALUE(T2.CHAR1,T1.CHAR1),
VALUE(T2.VCHAR1,T1.VCHAR1)
FROM BP2TBL T2
WHERE (T2.KEY1 = T1.KEY1))
WHERE KEY1 IN
(SELECT KEY1
FROM BP2TBL T3
WHERE KEY2 > 0);
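Example: Suppose that you want to delete every project whose combined staffing is less than 0.5. A statement similar to the following sketch can do this; EMPTIME is assumed to be the DSN8810.EMPPROJACT column that records each employee's time on a project:

```sql
DELETE FROM DSN8810.PROJ X
  WHERE .5 >
    (SELECT SUM(EMPTIME)
       FROM DSN8810.EMPPROJACT
       WHERE PROJNO = X.PROJNO);
```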
To process this statement, DB2 determines for each project (represented by a row
in the DSN8810.PROJ table) whether or not the combined staffing for that project is
less than 0.5. If it is, DB2 deletes that row from the DSN8810.PROJ table.
To continue this example, suppose DB2 deletes a row in the DSN8810.PROJ table.
You must also delete rows related to the deleted project in the DSN8810.PROJACT
table. To do this, use:
DELETE FROM DSN8810.PROJACT X
WHERE NOT EXISTS
(SELECT *
FROM DSN8810.PROJ
WHERE PROJNO = X.PROJNO);
DB2 determines, for each row in the DSN8810.PROJACT table, whether a row with
the same project number exists in the DSN8810.PROJ table. If not, DB2 deletes the
row in DSN8810.PROJACT.
This example uses a copy of the employee table for the subquery.
To use SPUFI, select SPUFI from the DB2I Primary Option Menu as shown in
Figure 149 on page 496
From then on, when the SPUFI panel displays, the data entry fields on the panel
contain the values that you previously entered. You can specify data set names and
processing options each time the SPUFI panel displays, as needed. Values you do
not change remain in effect.
Enter the output data set name: (Must be a sequential data set)
4 DATA SET NAME..... ===> RESULT
| Fill out the SPUFI panel. You can access descriptions for each of the fields in the
| panel in the DB2I help system. See “DB2I help” on page 495 for more information
| about the DB2I help system.
If you want to change the current default values, specify new values in the fields of
| the panel. All fields must contain a value. The DB2I help system contains detailed
| descriptions of each of the fields of the CURRENT SPUFI DEFAULTS panel.
When you have entered your SPUFI options, press the ENTER key to continue.
SPUFI then processes the next processing option for which you specified YES. If all
other processing options are NO, SPUFI displays the SPUFI panel.
If you press the END key, you return to the SPUFI panel, but you lose all the
changes you made on the SPUFI Defaults panel. If you press ENTER, SPUFI
saves your changes.
On the panel, use the ISPF EDIT program to enter SQL statements that you want
to execute, as shown in Figure 5 on page 61.
Move the cursor to the first input line and enter the first part of an SQL statement.
You can enter the rest of the SQL statement on subsequent lines, as shown in
Figure 5 on page 61.
You can put more than one SQL statement in the input data set. You can put an
SQL statement on one line of the input data set or on more than one line. DB2
executes the statements in the order you placed them in the data set. Do not put
more than one SQL statement on a single line. The first one executes, but DB2
ignores the other SQL statements on the same line.
In your SPUFI input data set, end each SQL statement with the statement
terminator that you specified in the CURRENT SPUFI DEFAULTS panel.
When you have entered your SQL statements, press the END PF key to save the
file and to execute the SQL statements.
Pressing the END PF key saves the data set. You can save the data set and
continue editing it by entering the SAVE command. Saving the data set after every
10 minutes or so of editing is recommended.
Figure 5 shows what the panel looks like if you enter the sample SQL statement,
followed by a SAVE command.
You can bypass the editing step by resetting the EDIT INPUT processing option:
EDIT INPUT ... ===> NO
| However, you can use the CHAR function to explicitly request the result as
| character data. Instead of using G1 as an item in the select list, use CHAR(G1):
| SELECT CHAR(G1) FROM T1;
| The result of the CHAR function is a UTF-8 string (CCSID 1208) that is then
| converted to EBCDIC when the value is returned to SPUFI. The CCSID of the
| converted data depends on the value of the application-encoding BIND option for
| the SPUFI package. In most cases, the SPUFI result is the EBCDIC system
| CCSID.
Use a character other than a semicolon if you plan to execute a statement that
contains embedded semicolons. For example, suppose you choose the character #
as the statement terminator. Then a CREATE TRIGGER statement with embedded
semicolons looks like this:
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
Be careful to choose a character for the SQL terminator that is not used within the
statement.
You can bypass the DB2 processing step by resetting the EXECUTE processing
option:
EXECUTE ..... ===> NO
Your SQL statement might take a long time to execute, depending on how large a
table DB2 must search, or on how many rows DB2 must process. To interrupt
DB2’s processing, press the PA1 key and respond to the prompting message that
asks whether you want to stop processing.
What happens to the output data set? This depends on how much of the input data
set DB2 was able to process before you interrupted its processing. DB2 might not
have opened the output data set yet, or the output data set might contain all or part
of the results that were produced before the interruption.
At the end of the data set are summary statistics that describe the processing of the
input data set as a whole.
For SELECT statements executed with SPUFI, the message “SQLCODE IS 100”
indicates an error-free result. If that message is the only result, DB2 was unable
to find any rows that satisfy the condition specified in the statement.
For all other types of SQL statements executed with SPUFI, the message
“SQLCODE IS 0” indicates an error-free result.
Other messages that you could receive from the processing of SQL statements
include:
v The number of rows that DB2 processed, that either:
– Your SELECT statement retrieved
– Your UPDATE statement modified
– Your INSERT statement added to a table
– Your DELETE statement deleted from a table
v Which columns display truncated data because the data was too wide
In addition to these basic requirements, you should also consider the following
special topics:
v Cursors — Chapter 7, “Using a cursor to retrieve a set of rows,” on page 93
discusses how to use a cursor in your application program to select a set of rows
and then process the set either one row at a time or one rowset at a time.
© Copyright IBM Corp. 1983, 2004 69
v DCLGEN — Chapter 8, “Generating declarations for your tables using DCLGEN,”
on page 121 discusses how to use DB2’s declarations generator, DCLGEN, to
obtain accurate SQL DECLARE statements for tables and views.
This section includes information about using SQL in application programs written in
assembler, C, C++, COBOL, Fortran, PL/I, and REXX.
Some of the examples vary from these conventions. Exceptions are noted where
they occur.
For REXX, precede the statement with EXECSQL. If the statement is in a literal string,
enclose it in single or double quotation marks.
Example: Use EXEC SQL and END-EXEC. to delimit an SQL statement in a COBOL
program:
EXEC SQL
an SQL statement
END-EXEC.
You do not need to declare tables or views, but doing so offers advantages. One
advantage is documentation. For example, the DECLARE statement specifies the
structure of the table or view you are working with, and the data type of each
column. You can refer to the DECLARE statement for the column names and data
types in the table or view. Another advantage is that the DB2 precompiler uses your
declarations to make sure that you have used correct column names and data
types in your SQL statements. The DB2 precompiler issues a warning message
when the column names and data types do not correspond to the SQL DECLARE
statements in your program.
For example, the DECLARE TABLE statement for the DSN8810.DEPT table looks
like the following DECLARE statement in COBOL:
EXEC SQL
DECLARE DSN8810.DEPT TABLE
(DEPTNO CHAR(3) NOT NULL,
DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6) ,
ADMRDEPT CHAR(3) NOT NULL,
LOCATION CHAR(16) )
END-EXEC.
When you declare a table or view that contains a column with a distinct type,
declare that column with the source type of the distinct type, rather than with the
distinct type itself. When you declare the column with the source type, DB2 can
check embedded SQL statements that reference that column at precompile time.
A host variable is a data item that is declared in the host language for use within an
SQL statement. Using host variables, you can:
v Retrieve data into the host variable for your application program’s use
v Place data into the host variable to insert into a table or to change the contents
of a row
v Use the data in the host variable when evaluating a WHERE or HAVING clause
v Assign the value that is in the host variable to a special register, such as
CURRENT SQLID and CURRENT DEGREE
| A host variable array is a data array that is declared in the host language for use
| within an SQL statement. Using host variable arrays, you can:
| v Retrieve data into host variable arrays for your application program’s use
| v Place data into host variable arrays to insert rows into a table
To optimize performance, make sure that the host language declaration maps as
closely as possible to the data type of the associated data in the database. For
more performance suggestions, see Part 6, “Additional programming techniques,”
on page 525.
You can use a host variable to represent a data value, but you cannot use it to
represent a table, view, or column name. (You can specify table, view, or column
names at run time using dynamic SQL. See Chapter 24, “Coding dynamic SQL in
application programs,” on page 535 for more information.)
Host variables follow the naming conventions of the host language. A colon (:) must
precede host variables that are used in SQL statements so DB2 can distinguish a
variable name from a column name. A colon must not precede host variables
outside of SQL statements.
For more information about declaring host variables, see the appropriate language
section:
v Assembler: “Declaring host variables” on page 133
v C and C++: “Declaring host variables” on page 147
v COBOL: “Declaring host variables” on page 176
v Fortran: “Declaring host variables” on page 207
v PL/I: “Declaring host variables” on page 217
v REXX: “Using REXX host variables and data types” on page 237.
If you do not know how many rows DB2 will return, or if you expect more than one
row to be returned, you must use an alternative to the SELECT ... INTO statement. The
DB2 cursor enables an application to return a set of rows and fetch either one row
at a time or one rowset at a time from the result table. For information about using
cursors, see Chapter 7, “Using a cursor to retrieve a set of rows,” on page 93.
Example: Retrieving a single row: Suppose you are retrieving the LASTNAME
and WORKDEPT column values from the DSN8810.EMP table for a particular
employee. You can define a host variable in your program to hold each column and
then name the host variables with an INTO clause, as in the following COBOL
example:
MOVE ’000110’ TO CBLEMPNO.
EXEC SQL
SELECT LASTNAME, WORKDEPT
INTO :CBLNAME, :CBLDEPT
FROM DSN8810.EMP
WHERE EMPNO = :CBLEMPNO
END-EXEC.
Note that the host variable CBLEMPNO is preceded by a colon (:) in the SQL
statement, but it is not preceded by a colon in the COBOL MOVE statement. In the
DATA DIVISION section of a COBOL program, you must declare the host variables
CBLEMPNO, CBLNAME, and CBLDEPT to be compatible with the data types in the
columns EMPNO, LASTNAME, and WORKDEPT of the DSN8810.EMP table.
You can use a host variable to specify a value in a search condition. For this
example, you have defined a host variable CBLEMPNO for the employee number,
so that you can retrieve the name and the work department of the employee whose
number is the same as the value of the host variable, CBLEMPNO; in this case,
000110.
If the SELECT ... INTO statement returns more than one row, an error occurs, and
any data that is returned is undefined and unpredictable.
To prevent undefined and unpredictable data from being returned, you can use the
FETCH FIRST 1 ROW ONLY clause to ensure that only one row is returned. For
example:
EXEC SQL
SELECT LASTNAME, WORKDEPT
INTO :CBLNAME, :CBLDEPT
FROM DSN8810.EMP
FETCH FIRST 1 ROW ONLY
END-EXEC.
| When you specify both the ORDER BY clause and the FETCH FIRST clause,
| ordering is performed on the entire result table before the first row is returned.
| The ORDER BY clause therefore determines which row is returned.
The following results have column headings that represent the names of the host
variables:
EMP-NUM PERSON-NAME EMP-SAL EMP-RAISE EMP-TTL
======= =========== ======= ========= =======
000220 LUTZ 29840 4476 34316
Example: Specifying summary values in the SELECT clause: You can request
| summary values to be returned from aggregate functions. For example:
MOVE ’D11’ TO DEPTID.
EXEC SQL
SELECT WORKDEPT, AVG(SALARY)
INTO :WORK-DEPT, :AVG-SALARY
FROM DSN8810.EMP
WHERE WORKDEPT = :DEPTID
END-EXEC.
| To insert multiple rows, you can use the form of the INSERT statement that selects
| values from another table or view. You can also use a form of the INSERT
| statement that inserts multiple rows from values that are provided in host variable
| arrays. For more information, see “Inserting multiple rows of data from host variable
| arrays” on page 79.
| Example: The following example inserts a single row into the activity table:
| EXEC SQL
| INSERT INTO DSN8810.ACT
| VALUES (:HV-ACTNO, :HV-ACTKWD, :HV-ACTDESC)
| END-EXEC.
Retrieving data and testing the indicator variable: When DB2 retrieves the value
of a column into a host variable, you can test the indicator variable that is
associated with that host variable:
v If the value of the indicator variable is less than zero, the column value is null.
The value of the host variable does not change from its previous value. If it is
null because of a numeric or character conversion error, or an arithmetic
expression error, DB2 sets the indicator variable to -2. See “Handling arithmetic
or conversion errors” on page 84 for more information.
v If the indicator variable contains a positive integer, the retrieved value is
truncated, and the integer is the original length of the string.
v If the value of the indicator variable is zero, the column value is nonnull. If the
column value is a character string, the retrieved value is not truncated.
An error occurs if you do not use an indicator variable and DB2 retrieves a null
value.
You can specify an indicator variable, preceded by a colon, immediately after the
host variable. Optionally, you can use the word INDICATOR between the host
variable and its indicator variable.
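For example, the following sketch retrieves a phone number into the host variable CBLPHONE with the indicator variable INDNULL; the host variable EMPID is assumed:

```sql
EXEC SQL
  SELECT PHONENO
  INTO :CBLPHONE:INDNULL
  FROM DSN8810.EMP
  WHERE EMPNO = :EMPID
END-EXEC.
```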
You can then test INDNULL for a negative value. If it is negative, the corresponding
value of PHONENO is null, and you can disregard the contents of CBLPHONE.
When you use a cursor to fetch a column value, you can use the same technique to
determine whether the column value is null.
Inserting null values into columns by using host variable indicators: You can
use an indicator variable to insert a null value from a host variable into a column.
When DB2 processes INSERT and UPDATE statements, it checks the indicator
variable (if one exists). If the indicator variable is negative, the column value is null.
If the indicator variable is greater than -1, the associated host variable contains a
value for the column.
For example, suppose your program reads an employee ID and a new phone
number, and must update the employee table with the new number. The new
number could be missing if the old number is incorrect, but a new number is not yet
available. If the new value for column PHONENO might be null, you can use an
indicator variable in the UPDATE statement. For example:
EXEC SQL
UPDATE DSN8810.EMP
SET PHONENO = :NEWPHONE:PHONEIND
WHERE EMPNO = :EMPID
END-EXEC.
Testing for a null column value: You cannot determine whether a column value is
null by comparing it to a host variable with an indicator variable that is set to -1. To
| test whether a column has a null value, use the IS NULL predicate or the IS
| DISTINCT FROM predicate. For example, the following code does not select the
employees who have no phone number:
MOVE -1 TO PHONE-IND.
EXEC SQL
SELECT LASTNAME
INTO :PGM-LASTNAME
FROM DSN8810.EMP
WHERE PHONENO = :PHONE-HV:PHONE-IND
END-EXEC.
You can use the IS NULL predicate to select employees who have no phone
number, as in the following statement:
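```sql
EXEC SQL
  SELECT LASTNAME
  INTO :PGM-LASTNAME
  FROM DSN8810.EMP
  WHERE PHONENO IS NULL
END-EXEC.
```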
| To select employees whose phone numbers are equal to the value of :PHONE-HV
| and employees who have no phone number (as in the second example), you would
| need to code two predicates, one to handle the non-null values and another to
| handle the null values, as in the following statement:
| EXEC SQL
| SELECT LASTNAME
| INTO :PGM-LASTNAME
| FROM DSN8810.EMP
| WHERE (PHONENO = :PHONE-HV AND PHONENO IS NOT NULL AND :PHONE-HV IS NOT NULL)
| OR
| (PHONENO IS NULL AND :PHONE-HV:PHONE-IND IS NULL)
| END-EXEC.
| You can simplify the preceding example by coding the statement using the NOT
| form of the IS DISTINCT FROM predicate, as in the following statement:
| EXEC SQL
| SELECT LASTNAME
| INTO :PGM-LASTNAME
| FROM DSN8810.EMP
| WHERE PHONENO IS NOT DISTINCT FROM :PHONE-HV:PHONE-IND
| END-EXEC.
When you use a DECLARE VARIABLE statement in a program, put the DECLARE
VARIABLE statement after the corresponding host variable declaration and before
your first reference to that host variable.
Because the application encoding scheme for the subsystem is EBCDIC, the
retrieved data is EBCDIC. To make the retrieved data Unicode, use DECLARE
VARIABLE statements to specify that the data that is retrieved from these columns
is encoded in the default Unicode CCSIDs for the subsystem. Suppose that you
want to retrieve the character data in Unicode CCSID 1208 and the graphic data in
Unicode CCSID 1200. Use DECLARE VARIABLE statements like these:
EXEC SQL BEGIN DECLARE SECTION;
char hvpartnum[11];
EXEC SQL DECLARE :hvpartnum VARIABLE CCSID 1208;
| sqldbchar hvjpnname[11];
EXEC SQL DECLARE :hvjpnname VARIABLE CCSID 1200;
struct {
short len;
char d[30];
} hvengname;
EXEC SQL DECLARE :hvengname VARIABLE CCSID 1208;
EXEC SQL END DECLARE SECTION;
The BEGIN DECLARE SECTION and END DECLARE SECTION statements mark
the beginning and end of a host variable declare section.
| For more information about declaring host variable arrays, see the appropriate
| language section:
| v C or C++: “Declaring host variable arrays” on page 153
| v COBOL: “Declaring host variable arrays” on page 183
| v PL/I: “Declaring host variable arrays” on page 220
| This section describes the following ways to use host variable arrays:
| v “Retrieving multiple rows of data into host variable arrays”
| v “Inserting multiple rows of data from host variable arrays” on page 79
| v “Using indicator variable arrays with host variable arrays” on page 79
| Example: You can insert the number of rows that are specified in the host variable
| NUM-ROWS by using the following INSERT statement:
| EXEC SQL
| INSERT INTO DSN8810.ACT
| (ACTNO, ACTKWD, ACTDESC)
| VALUES (:HVA1, :HVA2, :HVA3)
| FOR :NUM-ROWS ROWS
| END-EXEC.
| Assume that the host variable arrays HVA1, HVA2, and HVA3 have been declared
| and populated with the values that are to be inserted into the ACTNO, ACTKWD,
| and ACTDESC columns. The NUM-ROWS host variable specifies the number of
| rows that are to be inserted, which must be less than or equal to the dimension of
| each host variable array.
| Retrieving data and using indicator arrays: When you retrieve data into a host
| variable array, if a value in its indicator array is negative, you can disregard the
| contents of the corresponding element in the host variable array. If a value in an
| indicator array is:
| -1 The corresponding row in the column that is being retrieved is null.
| -2 DB2 returns a null value because an error occurred in numeric conversion
| or in an arithmetic expression in the corresponding row.
| -3 DB2 returns a null value because a hole was detected for the
| corresponding row during a multiple-row FETCH operation.
| For information about the multiple-row FETCH operation, see “Step 4: Execute SQL
| statements with a rowset cursor” on page 99. For information about holes in the
| result table of a cursor, see “Holes in the result table of a scrollable cursor” on page
| 109.
| Example: Suppose that you declare a scrollable rowset cursor by using the
| following statement:
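This sketch assumes a cursor named C1 that retrieves the PHONENO column; the sensitivity options shown are illustrative:

```sql
EXEC SQL
  DECLARE C1 SENSITIVE STATIC SCROLL CURSOR
  WITH ROWSET POSITIONING FOR
  SELECT PHONENO
  FROM DSN8810.EMP
END-EXEC.
```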
| For information about using rowset cursors, see “Accessing data by using a
| rowset-positioned cursor” on page 98.
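For example, the following sketch fetches a rowset of 10 rows into the host variable array CBLPHONE and the indicator array INDNULL; the cursor name C1 is an assumption:

```sql
EXEC SQL
  FETCH NEXT ROWSET FROM C1
  FOR 10 ROWS
  INTO :CBLPHONE :INDNULL
END-EXEC.
```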
| After the multiple-row FETCH statement, you can test each element of the
| INDNULL array for a negative value. If an element is negative, you can disregard
| the contents of the corresponding element in the CBLPHONE host variable array.
| Inserting null values by using indicator arrays: You can use a negative value in
| an indicator array to insert a null value into a column.
| Example: Assume that host variable arrays hva1 and hva2 have been populated
| with values that are to be inserted into the ACTNO and ACTKWD columns. Assume
| the ACTDESC column allows nulls. To set the ACTDESC column to null, assign -1
| to the elements in its indicator array:
| /* Initialize each indicator array */
| for (i=0; i<10; i++) {
| ind1[i] = 0;
| ind2[i] = 0;
| ind3[i] = -1;
| }
|
| EXEC SQL
| INSERT INTO DSN8810.ACT
| (ACTNO, ACTKWD, ACTDESC)
| VALUES (:hva1:ind1, :hva2:ind2, :hva3:ind3)
| FOR 10 ROWS;
| DB2 ignores the values in the hva3 array and assigns null values to the
| ACTDESC column for the 10 rows that are inserted.
You can declare a host structure yourself, or you can use DCLGEN to generate a
COBOL record description, PL/I structure declaration, or C structure declaration that
corresponds to the columns of a table. For more detailed information about coding
a host structure in your program, see Chapter 9, “Embedding SQL statements in
host languages,” on page 129. For more information about using DCLGEN and the
restrictions that apply to the C language, see Chapter 8, “Generating declarations
for your tables using DCLGEN,” on page 121.
In this example, EMP-IND is an array containing six values, which you can test for
negative values. If, for example, EMP-IND(6) contains a negative value, the
corresponding host variable in the host structure (EMP-BIRTHDATE) contains a null
value.
Because this example selects rows from the table DSN8810.EMP, some of the
values in EMP-IND are always zero. The first four columns of each row are defined
NOT NULL. In the preceding example, DB2 selects the values for a row of data into
a host structure. You must use a corresponding structure for the indicator variables
to determine which (if any) selected column values are null. For information on
using the IS NULL keyword phrase in WHERE clauses, see “Selecting rows using
search conditions: WHERE” on page 8.
See Appendix C of DB2 SQL Reference for a description of all the fields in the
SQLCA.
The meaning of SQLCODEs other than 0 and 100 varies with the particular product
implementing SQL.
You can declare SQLCODE and SQLSTATE (SQLCOD and SQLSTA in Fortran) as
stand-alone host variables. If you specify the STDSQL(YES) precompiler option,
these host variables receive the return codes, and you should not include an
SQLCA in your program.
The WHENEVER statement is not supported for REXX. For information on REXX
error handling, see “Embedding SQL statements in a REXX procedure” on page
235.
The WHENEVER statement must precede the first SQL statement it is to affect.
However, if your program checks SQLCODE directly, you must check SQLCODE
after each SQL statement.
For rows in which a conversion or arithmetic expression error does occur, the
indicator variable indicates that one or more selected items have no meaningful
value. The indicator variable flags this error with a -2 for the affected host variable
| and an SQLCODE of +802 (SQLSTATE '01519') in the SQLCA.
| Use the GET DIAGNOSTICS statement to handle multiple SQL errors that might
| result from the execution of a single SQL statement. First, check SQLSTATE (or
| SQLCODE) to determine whether diagnostic information should be retrieved by
| using the GET DIAGNOSTICS statement.
| Even if you use only the GET DIAGNOSTICS statement in your application program
| to check for conditions, you must either include the instructions required to use the
| SQLCA or you must declare SQLSTATE (or SQLCODE) separately in your program.
| To retrieve condition information, you must first retrieve the number of condition
| items (that is, the number of errors and warnings that DB2 detected during the
| execution of the last SQL statement). The number of condition items is at least one.
| If the last SQL statement returned SQLSTATE '00000' (or SQLCODE 0), the
| number of condition items is one.
| In Figure 7 on page 86, the first GET DIAGNOSTICS statement returns the number
| of rows inserted and the number of conditions returned. The second GET
| DIAGNOSTICS statement returns the following items for each condition:
| SQLCODE, SQLSTATE, and the number of the row (in the rowset that was being
| inserted) for which the condition occurred.
|
Figure 7. Using GET DIAGNOSTICS to return the number of rows and conditions returned
and condition information
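The two GET DIAGNOSTICS statements that this figure illustrates might look like the following sketch (the host variable names are assumptions; Figure 7 shows the actual program):

```sql
EXEC SQL
  GET DIAGNOSTICS :num_rows = ROW_COUNT, :num_cond = NUMBER;

-- Then, for each condition i from 1 to :num_cond:
EXEC SQL
  GET DIAGNOSTICS CONDITION :i
    :ret_sqlcode  = DB2_RETURNED_SQLCODE,
    :ret_sqlstate = RETURNED_SQLSTATE,
    :row_num      = DB2_ROW_NUMBER;
```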
| In the activity table, the ACTNO column is defined as SMALLINT. Suppose that you
| declare the host variable array hva1 as an array with data type long, and you
| populate the array so that the value for the fourth element is 32768.
| If you check the SQLCA values after the INSERT statement, the value of
| SQLCODE is equal to 0, the value of SQLSTATE is '00000', and the value of
| SQLERRD(3) is 9 for the number of rows that were inserted. However, the INSERT
| statement specified that 10 rows were to be inserted.
| The GET DIAGNOSTICS statement provides you with the information that you need
| to correct the data for the row that was not inserted. The printed output from your
| program looks like this:
| Number of rows inserted = 9
| SQLCODE = -302, SQLSTATE = 22003, ROW NUMBER = 4
| The value 32768 for the input variable is too large for the target column ACTNO.
| You can print the MESSAGE_TEXT condition item, or see DB2 Messages and
| Codes for information about SQLCODE -302.
| For a complete description of the GET DIAGNOSTICS items, see Chapter 5 of DB2
| SQL Reference.
You can find the programming language-specific syntax and details for calling
DSNTIAR on the following pages:
For Assembler programs, see page 142
For C programs, see page 169
For COBOL programs, see page 201
For Fortran programs, see page 212
For PL/I programs, see page 230
DSNTIAR takes data from the SQLCA, formats it into a message, and places the
result in a message output area that you provide in your application program. Each
time you use DSNTIAR, it overwrites any previous messages in the message output
area. You should move or print the messages before using DSNTIAR again, and
before the contents of the SQLCA change, to get an accurate view of the SQLCA.
Figure 8 shows the format of the message output area, where length is the 2-byte
total length field, and the length of each line matches the logical record length (lrecl)
you specify to DSNTIAR.
When you call DSNTIAR, you must name an SQLCA and an output message area
in the DSNTIAR parameters. You must also provide the logical record length (lrecl)
as a value between 72 and 240 bytes. DSNTIAR assumes the message area
contains fixed-length records of length lrecl.
When loading DSNTIAR from another program, be careful how you branch to
DSNTIAR. For example, if the calling program is in 24-bit addressing mode and
DSNTIAR is loaded above the 16-MB line, you cannot use the assembler BALR
instruction or CALL macro to call DSNTIAR, because they assume that DSNTIAR is
in 24-bit mode. Instead, you must use an instruction that is capable of branching
into 31-bit mode, such as BASSM.
You can dynamically link (load) and call DSNTIAR directly from a language that
does not handle 31-bit addressing (OS/VS COBOL, for example). To do this, link a
second version of DSNTIAR with the attributes AMODE(24) and RMODE(24) into
another load module library. Alternatively, you can write an intermediate assembler
language program that calls DSNTIAR in 31-bit mode and then call that
intermediate program in 24-bit mode from your application.
For more information on the allowed and default AMODE and RMODE settings for a
particular language, see the application programming guide for that language. For
details on how the attributes AMODE and RMODE of an application are determined,
see the linkage editor and loader user’s guide for the language in which you have
written the application.
In your error routine, you write a section that checks for SQLCODE -911 or -913.
You can receive either of these SQLCODEs when a deadlock or timeout occurs.
When one of these errors occurs, the error routine closes your cursors by issuing
the statement:
EXEC SQL CLOSE cursor-name
An SQLCODE of 0 or -501 resulting from that statement indicates that the close
was successful.
To use DSNTIAR to generate the error message text, first follow these steps:
1. Choose a logical record length (lrecl) of the output lines. For this example,
assume lrecl is 72 (to fit on a terminal screen) and is stored in the variable
named ERROR-TEXT-LEN.
2. Define a message area in your COBOL application. Assuming you want an area
for up to 10 lines of length 72, you should define an area of 720 bytes, plus a
2-byte area that specifies the total length of the message output area.
01 ERROR-MESSAGE.
02 ERROR-LEN PIC S9(4) COMP VALUE +720.
02 ERROR-TEXT PIC X(72) OCCURS 10 TIMES
INDEXED BY ERROR-INDEX.
77 ERROR-TEXT-LEN PIC S9(9) COMP VALUE +72.
To display the contents of the SQLCA when SQLCODE is 0 or -501, call DSNTIAR
after the SQL statement that produces SQLCODE 0 or -501:
CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN.
You can then print the message output area just as you would any other variable.
Your message might look like this:
DSNT408I SQLCODE = -501, ERROR: THE CURSOR IDENTIFIED IN A FETCH OR
CLOSE STATEMENT IS NOT OPEN
DSNT418I SQLSTATE = 24501 SQLSTATE RETURN CODE
DSNT415I SQLERRP = DSNXERT SQL PROCEDURE DETECTING ERROR
DSNT416I SQLERRD = -315 0 0 -1 0 0 SQL DIAGNOSTIC INFORMATION
DSNT416I SQLERRD = X'FFFFFEC5' X'00000000' X'00000000'
X'FFFFFFFF' X'00000000' X'00000000' SQL DIAGNOSTIC
INFORMATION
| When you execute a SELECT statement, you retrieve a set of rows. That set of
| rows is called the result table of the SELECT statement. In an application program,
| you can use either of the following types of cursors to retrieve rows from a result
| table:
| v A row-positioned cursor retrieves at most a single row at a time from the result
| table into host variables. At any point in time, the cursor is positioned on at most
| a single row. For information about how to use a row-positioned cursor, see
| “Accessing data by using a row-positioned cursor.”
| v A rowset-positioned cursor retrieves zero, one, or more rows at a time, as a
| rowset, from the result table into host variable arrays. At any point in time, the
| cursor can be positioned on a rowset. You can reference all of the rows in the
| rowset, or only one row in the rowset, when you use a positioned DELETE or
| positioned UPDATE statement. For information about how to use a
| rowset-positioned cursor, see “Accessing data by using a rowset-positioned
| cursor” on page 98.
Your program can have several cursors, each of which performs the previous steps.
The following example shows a simple form of the DECLARE CURSOR statement:
You can use this cursor to list selected information about employees.
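The simple cursor mentioned above might be declared like this sketch (the cursor name is an assumption; the column list matches the more complicated example that follows):

```sql
EXEC SQL
  DECLARE C1 CURSOR FOR
    SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
      FROM DSN8810.EMP;
```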
More complicated cursors might include WHERE clauses or joins of several tables.
For example, suppose that you want to use a cursor to list employees who work on
a certain project. Declare a cursor like this to identify those employees:
EXEC SQL
DECLARE C2 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8810.EMP X
WHERE EXISTS
(SELECT *
FROM DSN8810.PROJ Y
WHERE X.EMPNO=Y.RESPEMP
AND Y.PROJNO=:GOODPROJ);
| Declaring cursors for tables that use multilevel security: You can declare a
| cursor that retrieves rows from a table that uses multilevel security with row-level
| granularity. However, the result table for the cursor contains only those rows that
| have a security label value that is equivalent to or dominated by the security label
| value of your ID. Refer to Part 3 (Volume 1) of DB2 Administration Guide for a
| discussion of multilevel security with row-level granularity.
Updating a column: You can update columns in the rows that you retrieve.
Updating a row after you use a cursor to retrieve it is called a positioned update. If
you intend to perform any positioned updates on the identified table, include the
FOR UPDATE clause. The FOR UPDATE clause has two forms:
v The first form is FOR UPDATE OF column-list. Use this form when you know in
advance which columns you need to update.
v The second form is FOR UPDATE, with no column list. Use this form when you
might use the cursor to update any of the columns of the table.
For example, you can use this cursor to update only the SALARY column of the
employee table:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8810.EMP X
WHERE EXISTS
(SELECT *
FROM DSN8810.PROJ Y
WHERE X.EMPNO=Y.RESPEMP
AND Y.PROJNO=:GOODPROJ)
FOR UPDATE OF SALARY;
If you might use the cursor to update any column of the employee table, define the
cursor like this:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8810.EMP X
WHERE EXISTS
(SELECT *
FROM DSN8810.PROJ Y
WHERE X.EMPNO=Y.RESPEMP
AND Y.PROJNO=:GOODPROJ)
FOR UPDATE;
DB2 must do more processing when you use the FOR UPDATE clause without a
column list than when you use the FOR UPDATE clause with a column list.
Therefore, if you intend to update only a few columns of a table, your program can
run more efficiently if you include a column list.
The precompiler options NOFOR and STDSQL affect the use of the FOR UPDATE
clause in static SQL statements. For information about these options, see Table 63
on page 462. If you do not specify the FOR UPDATE clause in a DECLARE
CURSOR statement, and you do not specify the STDSQL(YES) option or the
NOFOR precompiler options, you receive an error if you execute a positioned
UPDATE statement.
You can update a column of the identified table even though it is not part of the
result table. In this case, you do not need to name the column in the SELECT
statement. When the cursor retrieves a row (using FETCH) that contains a column
value you want to update, you can use UPDATE ... WHERE CURRENT OF to
identify the row that is to be updated.
Read-only result table: Some result tables, such as the result of joining two or
more tables, cannot be updated. The defining characteristics of a read-only result
table are described in greater detail in the discussion of DECLARE CURSOR in
Chapter 5 of DB2 SQL Reference.
Two factors that influence the amount of time that DB2 requires to process the
OPEN statement are:
v Whether DB2 must perform any sorts before it can retrieve rows
v Whether DB2 uses parallelism to process the SELECT statement of the cursor
For more information, see “The effect of sorts on OPEN CURSOR” on page 772.
Your program must anticipate and handle an end-of-data condition whenever you
use a cursor to fetch a row. For more information about the WHENEVER NOT FOUND
statement, see “Checking the execution of SQL statements” on page 82.
The SELECT statement within the DECLARE CURSOR statement identifies the result
table from which you fetch rows, but DB2 does not retrieve any data until your
application program executes a FETCH statement.
When your program executes the FETCH statement, DB2 positions the cursor on a
row in the result table. That row is called the current row. DB2 then copies the
current row contents into the program host variables that you specify on the INTO
clause of FETCH. This sequence repeats each time you issue FETCH, until you
process all rows in the result table.
The row that DB2 points to when you execute a FETCH statement depends on
whether the cursor is declared as scrollable or non-scrollable. See “Scrollable and
non-scrollable cursors” on page 103 for more information.
When you query a remote subsystem with FETCH, consider using block fetch for
better performance. For more information see “Use block fetch” on page 440. Block
fetch processes rows ahead of the current row. You cannot use a block fetch when
you perform a positioned update or delete operation.
A positioned UPDATE statement updates the row on which the cursor is positioned.
A positioned DELETE statement deletes the row on which the cursor is positioned.
When you finish processing the rows of the result table, and the cursor is no longer
needed, you can let DB2 automatically close the cursor when the current
transaction terminates or when your program terminates.
Recommendation: To free the resources that are held by the cursor, close the
cursor explicitly by issuing the CLOSE statement.
| Your program can have several cursors, each of which performs the previous steps.
| To determine the number of retrieved rows, use either of the following values:
| v The contents of the SQLERRD(3) field in the SQLCA
| v The contents of the ROW_COUNT item of GET DIAGNOSTICS
| For information about GET DIAGNOSTICS, see “Using the GET DIAGNOSTICS
| statement” on page 84.
| If you declare the cursor as dynamic scrollable, and SQLCODE has the value 100,
| you can continue with a FETCH statement until no more rows are retrieved.
| Additional fetches might retrieve more rows because a dynamic scrollable cursor is
| sensitive to updates by other application processes. For information about dynamic
| cursors, see “Types of cursors” on page 103.
| You must use the WITH ROWSET POSITIONING clause of the DECLARE
| CURSOR statement if you plan to use a rowset-positioned FETCH statement.
| When your program executes a FETCH statement with the ROWSET keyword, the
| cursor is positioned on a rowset in the result table. That rowset is called the current
| rowset. The dimension of each of the host variable arrays must be greater than or
| equal to the number of rows to be retrieved.
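A minimal sketch of such a declaration and rowset fetch (the cursor name, column list, and host-variable-array names are assumptions; each array here must be dimensioned for at least 20 elements):

```sql
EXEC SQL
  DECLARE C1 CURSOR WITH ROWSET POSITIONING FOR
    SELECT EMPNO, LASTNAME, SALARY
      FROM DSN8810.EMP;

EXEC SQL OPEN C1;

EXEC SQL
  FETCH NEXT ROWSET FROM C1
    FOR 20 ROWS
    INTO :hva_empno, :hva_lastname, :hva_salary;
```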
| Declare the SQLDA: You must first declare the SQLDA structure. The following
| SQL INCLUDE statement requests a standard SQLDA declaration:
| EXEC SQL INCLUDE SQLDA;
| Your program must also declare variables that reference the SQLDA structure, the
| SQLVAR structure within the SQLDA, and the DECLEN structure for the precision
| and scale if you are retrieving a DECIMAL column. For C programs, the code looks
| like this:
| struct sqlda *sqldaptr;
| struct sqlvar *varptr;
| struct DECLEN {
| unsigned char precision;
| unsigned char scale;
| };
| Allocate the SQLDA: Before you can set the fields in the SQLDA for the column
| values to be retrieved, you must dynamically allocate storage for the SQLDA
| structure. For C programs, the code looks like this:
| sqldaptr = (struct sqlda *) malloc (3 * 44 + 16);
| The size of the SQLDA is SQLN * 44 + 16, where the value of the SQLN field is the
| number of output columns.
| Set the fields in the SQLDA: You must set the fields in the SQLDA structure for
| your FETCH statement. Suppose you want to retrieve the columns EMPNO,
| LASTNAME, and SALARY. The C code to set the SQLDA fields for these columns
| looks like this:
| strcpy(sqldaptr->sqldaid,"SQLDA");
| sqldaptr->sqldabc = 148; /* number bytes of storage allocated for the SQLDA */
| sqldaptr->sqln = 3; /* number of SQLVAR occurrences */
| sqldaptr->sqld = 3;
| varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0])); /* Point to first SQLVAR */
| varptr->sqltype = 452; /* data type CHAR(6) */
| varptr->sqllen = 6;
| varptr->sqldata = (char *) hva1;
| varptr->sqlind = (short *) inda1;
| varptr->sqlname.length = 8;
| varptr->sqlname.data = X'0000000000000014'; /* bytes 5-8 array size */
| varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]) + 1); /* Point to next SQLVAR */
| varptr->sqltype = 448; /* data type VARCHAR(15) */
| varptr->sqllen = 15;
| varptr->sqldata = (char *) hva2;
| varptr->sqlind = (short *) inda2;
| varptr->sqlname.length = 8;
| varptr->sqlname.data = X'0000000000000014'; /* bytes 5-8 array size */
| varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]) + 2); /* Point to next SQLVAR */
| varptr->sqltype = 485; /* data type DECIMAL(9,2) */
| ((struct DECLEN *) &(varptr->sqllen))->precision = 9;
| ((struct DECLEN *) &(varptr->sqllen))->scale = 2;
| varptr->sqldata = (char *) hva3;
| varptr->sqlind = (short *) inda3;
| varptr->sqlname.length = 8;
| varptr->sqlname.data = X'0000000000000014'; /* bytes 5-8 array size */
| For information about using the SQLDA in dynamic SQL, see Chapter 24, “Coding
| dynamic SQL in application programs,” on page 535. For a complete layout of the
| SQLDA and the descriptions given by the INCLUDE statement, see Appendix C of
| DB2 SQL Reference.
| Open the cursor: You can open the cursor only after all of the fields have been
| set in the output SQLDA:
| EXEC SQL OPEN C1;
| Fetch the rows: After the OPEN statement, the program fetches the next rowset:
| EXEC SQL
| FETCH NEXT ROWSET FROM C1
| FOR 20 ROWS
| USING DESCRIPTOR :*sqldaptr;
| The USING clause of the FETCH statement names the SQLDA that describes the
| columns that are to be retrieved.
| When the UPDATE statement is executed, the cursor must be positioned on a row
| or rowset of the result table. If the cursor is positioned on a row, that row is
| updated. If the cursor is positioned on a rowset, all of the rows in the rowset are
| updated.
| When the DELETE statement is executed, the cursor must be positioned on a row
| or rowset of the result table. If the cursor is positioned on a row, that row is deleted,
| and the cursor is positioned before the next row of its result table. If the cursor is
| positioned on a rowset, all of the rows in the rowset are deleted, and the cursor is
| positioned before the next rowset of its result table.
| When you finish processing the rows of the result table, and you no longer need the
| cursor, you can let DB2 automatically close the cursor when the current transaction
| terminates or when your program terminates.
| Recommendation: To free the resources held by the cursor, close the cursor
| explicitly by issuing the CLOSE statement.
Types of cursors
| You can declare cursors, both row-positioned and rowset-positioned, as scrollable
or not scrollable, held or not held, and returnable or not returnable. The following
sections discuss these characteristics:
v “Scrollable and non-scrollable cursors”
v “Held and non-held cursors” on page 112
A non-scrollable cursor always moves sequentially forward in the result table. When
the application opens the cursor, the cursor is positioned before the first row (or first
rowset) in the result table. When the application executes the first FETCH, the
cursor is positioned on the first row (or first rowset). When the application executes
subsequent FETCH statements, the cursor moves one row ahead (or one rowset
ahead) for each FETCH. After each FETCH statement, the cursor is positioned on
the row (or rowset) that was fetched.
If you want to order the rows of the cursor’s result set, and you also want the cursor
to be updatable, you need to declare the cursor as scrollable, even if you use it
only to retrieve rows (or rowsets) sequentially. You can use the ORDER BY clause
in the declaration of an updatable cursor only if you declare the cursor as scrollable.
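Such a cursor might be declared like this sketch (the cursor name, columns, and the SENSITIVE STATIC sensitivity are assumptions):

```sql
EXEC SQL
  DECLARE C9 SENSITIVE STATIC SCROLL CURSOR FOR
    SELECT EMPNO, SALARY
      FROM DSN8810.EMP
      ORDER BY SALARY
      FOR UPDATE OF SALARY;
```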
Declaring a scrollable cursor with the INSENSITIVE keyword has the following
effects:
v The size, the order of the rows, and the values for each row of the result table do
not change after the application opens the cursor.
v The result table is read-only. Therefore, you cannot declare the cursor with the
FOR UPDATE clause, and you cannot use the cursor for positioned update or
delete operations.
Static scrollable cursor: Both the INSENSITIVE cursor and the SENSITIVE
STATIC cursor follow the static cursor model:
v The size of the result table does not grow after the application opens the cursor.
Rows that are inserted into the underlying table are not added to the result table.
v The order of the rows does not change after the application opens the cursor.
If the cursor declaration contains an ORDER BY clause, and the columns that
are in the ORDER BY clause are updated after the cursor is opened, the order of
the rows in the result table does not change.
| Dynamic scrollable cursor: When you declare a cursor as SENSITIVE, you can
| declare it either STATIC or DYNAMIC. The SENSITIVE DYNAMIC cursor follows
| the dynamic cursor model:
| v The size and contents of the result table can change with every fetch.
| The base table can change while the cursor is scrolling on it. If another
| application process changes the data, the cursor sees the newly changed data
| when it is committed. If the application process of the cursor changes the data,
| the cursor sees the newly changed data immediately.
| v The order of the rows can change after the application opens the cursor.
If the OPEN statement executes with no errors or warnings, DB2 does not set
SQLWARN0 when it sets SQLWARN1, SQLWARN4, or SQLWARN5. See Appendix
| C of DB2 SQL Reference for specific information about fields in the SQLCA.
| For more information about the GET DIAGNOSTICS statement, see “Using the
| GET DIAGNOSTICS statement” on page 84.
Notes:
| 1. The cursor position applies to both row position and rowset position, for example, before
| the first row or before the first rowset.
2. ABSOLUTE and RELATIVE are described in greater detail in the discussion of FETCH in
Chapter 5 of DB2 SQL Reference.
Example: To use the cursor that is declared in Figure 9 on page 104 to fetch the
fifth row of the result table, use a FETCH statement like this:
EXEC SQL FETCH ABSOLUTE +5 C1 INTO :HVDEPTNO, :DEPTNAME, :MGRNO;
To fetch the fifth row from the end of the result table, use this FETCH statement:
EXEC SQL FETCH ABSOLUTE -5 C1 INTO :HVDEPTNO, :DEPTNAME, :MGRNO;
Determining the number of rows in the result table for a static scrollable
cursor: You can determine how many rows are in the result table of an
INSENSITIVE or SENSITIVE STATIC scrollable cursor. To do that, execute a
FETCH statement, such as FETCH AFTER, that positions the cursor after the last
row. You can then examine the fields SQLERRD(1) and SQLERRD(2) in the
SQLCA (fields sqlerrd[0] and sqlerrd[1] for C and C++) for the number of rows in
| the result table. Alternatively, you can use the GET DIAGNOSTICS statement to
| retrieve the number of rows in the ROW_COUNT statement item.
| FETCH statement interaction between row and rowset positioning: When you
| declare a cursor with the WITH ROWSET POSITIONING clause, you can intermix
| row-positioned FETCH statements with rowset-positioned FETCH statements. For
| information about using a multiple-row FETCH statement, see “Using a multiple-row
| FETCH statement with host variable arrays” on page 99.
| Table 9 summarizes the sensitivity values and their effects on the result table of a
| scrollable cursor.
| Table 9. How sensitivity affects the result table for a scrollable cursor
| DECLARE sensitivity  FETCH INSENSITIVE                 FETCH SENSITIVE
|
| INSENSITIVE          No changes to the underlying      Not valid.
|                      table are visible in the result
|                      table. Positioned UPDATE and
|                      DELETE statements using the
|                      cursor are not allowed.
|
| SENSITIVE STATIC     Only positioned updates and       All updates and deletes are
|                      deletes that are made by the      visible in the result table.
|                      cursor are visible in the         Inserts made by other
|                      result table.                     processes are not visible in
|                                                        the result table.
|
| SENSITIVE DYNAMIC    Not valid.                        All committed changes are
|                                                        visible in the result table,
|                                                        including updates, deletes,
|                                                        inserts, and changes in the
|                                                        order of the rows.
|
The following examples demonstrate how delete and update holes can occur when
you use a SENSITIVE STATIC scrollable cursor.
Creating a delete hole with a static scrollable cursor: Suppose that table A
consists of one integer column, COL1, which has the values shown in Figure 12 on
page 110.
Now suppose that you declare the following SENSITIVE STATIC scrollable cursor,
which you use to delete rows from A:
EXEC SQL DECLARE C3 SENSITIVE STATIC SCROLL CURSOR FOR
SELECT COL1
FROM A
FOR UPDATE OF COL1;
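The positioned DELETE statement that creates the hole might look like this sketch (it assumes that the cursor has already been opened and fetched forward to the row that is to be deleted):

```sql
EXEC SQL
  DELETE FROM A
    WHERE CURRENT OF C3;
```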
The positioned delete statement creates a delete hole, as shown in Figure 13.
After you execute the positioned delete statement, the third row is deleted from the
result table, but the result table does not shrink to fill the space that the deleted row
creates.
Creating an update hole with a static scrollable cursor: Suppose that you
declare the following SENSITIVE STATIC scrollable cursor, which you use to update
rows in A:
EXEC SQL DECLARE C4 SENSITIVE STATIC SCROLL CURSOR FOR
SELECT COL1
FROM A
WHERE COL1<6;
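The searched UPDATE statement that disqualifies a row might look like this sketch (the specific values are assumptions; any update that makes a previously fetched row fail the COL1<6 predicate creates an update hole):

```sql
EXEC SQL
  UPDATE A
    SET COL1 = COL1 + 10
    WHERE COL1 = 5;
```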
After you execute the searched UPDATE statement, the last row no longer qualifies
for the result table, but the result table does not shrink to fill the space that the
disqualified row creates.
Removing a delete hole or an update hole: You can remove a delete hole or an
update hole in specific situations.
If you try to fetch from a delete hole, DB2 issues an SQL warning. If you try to
update or delete a delete hole, DB2 issues an SQL error. You can remove a delete
hole only by opening the scrollable cursor, setting a savepoint, executing a
positioned DELETE statement with the scrollable cursor, and rolling back to the
savepoint.
If you try to fetch from an update hole, DB2 issues an SQL warning. If you try to
delete an update hole, DB2 issues an SQL error. However, you can convert an
update hole back to a result table row by updating the row in the base table, as
shown in Figure 15 on page 112. You can update the base table with a searched
UPDATE statement in the same application process, or a searched or positioned
UPDATE statement in another application process. After you update the base table,
if the row qualifies for the result table, the update hole disappears.
If the scrollable cursor creates the hole, the hole is visible when you execute a
FETCH statement for the row that contains the hole. The FETCH statement can be
FETCH INSENSITIVE or FETCH SENSITIVE.
If an update or delete operation outside the scrollable cursor creates the hole, the
hole is visible at the following times:
v If you execute a FETCH SENSITIVE statement for the row that contains the hole,
the hole is visible when you execute the FETCH statement.
v If you execute a FETCH INSENSITIVE statement, the hole is not visible when
you execute the FETCH statement. DB2 returns the row as it was before the
update or delete operation occurred. However, if you follow the FETCH
INSENSITIVE statement with a positioned UPDATE or DELETE statement, the
hole becomes visible.
| After a commit operation, the position of a held cursor depends on its type:
| v A non-scrollable cursor that is held is positioned after the last retrieved row and
| before the next logical row. The next row can be returned from the result table
| with a FETCH NEXT statement.
If the program abnormally terminates, the cursor position is lost. To prepare for
restart, your program must reposition the cursor.
The following restrictions apply to cursors that are declared WITH HOLD:
v Do not use DECLARE CURSOR WITH HOLD with the new user signon from a
DB2 attachment facility, because all open cursors are closed.
v Do not declare a WITH HOLD cursor in a thread that might become inactive. If
you do, its locks are held indefinitely.
IMS
You cannot use DECLARE CURSOR...WITH HOLD in message processing
programs (MPP) or message-driven batch message processing (BMP) programs.
Each message is a new user for DB2; whether or not you declare cursors using
WITH HOLD, no cursors continue for new users. You can use WITH HOLD in
non-message-driven BMP and DL/I batch programs.
CICS
In CICS applications, you can use DECLARE CURSOR...WITH HOLD to
indicate that a cursor should not close at a commit or sync point. However,
SYNCPOINT ROLLBACK closes all cursors, and end-of-task (EOT) closes all
cursors before DB2 reuses or terminates the thread. Because
pseudo-conversational transactions usually have multiple EXEC CICS
RETURN statements and thus span multiple EOTs, the scope of a held cursor
is limited. Across EOTs, you must reopen and reposition a cursor declared
WITH HOLD, as if you had not specified WITH HOLD.
You should always close cursors that you no longer need. If you let DB2 close
a CICS attachment cursor, the cursor might not close until the CICS
attachment facility reuses or terminates the thread.
The following cursor declaration causes the cursor to maintain its position in the
DSN8810.EMP table after a commit point:
EXEC SQL
DECLARE EMPLUPDT CURSOR WITH HOLD FOR
SELECT EMPNO, LASTNAME, PHONENO, JOB, SALARY, WORKDEPT
FROM DSN8810.EMP;
Figure 17 on page 116 shows how to retrieve data backward with a cursor.
Figure 17. Performing cursor operations with a SENSITIVE STATIC scrollable cursor
Figure 18 on page 117 shows how to update an entire rowset with a cursor.
|
| Figure 18. Performing positioned update with a rowset cursor
|
Figure 19 on page 118 shows how to update specific rows with a rowset cursor.
|
| Figure 19. Performing positioned update and delete with a sensitive rowset cursor (Part 1 of
| 2)
|
Figure 19. Performing positioned update and delete with a sensitive rowset cursor (Part 2 of
| 2)
You must use DCLGEN, supplying the table or view name, before you precompile
your program. To use the
declarations generated by DCLGEN in your program, use the SQL INCLUDE
statement. For more information about the INCLUDE statement, see Chapter 5 of
DB2 SQL Reference.
DB2 must be active before you can use DCLGEN. You can start DCLGEN in
several different ways:
v From ISPF through DB2I. Select the DCLGEN option on the DB2I Primary Option
Menu panel.
v Directly from TSO. To do this, sign on to TSO, issue the TSO command DSN,
and then issue the subcommand DCLGEN.
v From a CLIST, running in TSO foreground or background, that issues DSN and
then DCLGEN.
v With JCL. Supply the required information, using JCL, and run DCLGEN in batch.
If you want to start DCLGEN in the foreground, and your table names include
DBCS characters, you must be able to enter and display double-byte characters. If
your terminal does not display DBCS characters, you can enter them using the
hex mode of ISPF edit.
| The DB2I help system contains detailed descriptions of the fields of the DCLGEN
| panel. For more information about the DB2I help system, see “DB2I help” on page
| 495.
If you are using an SQL reserved word as an identifier, you must edit the DCLGEN
output in order to add the appropriate SQL delimiters.
DCLGEN produces output that is intended to meet the needs of most users, but
occasionally, you will need to edit the DCLGEN output so that it works in your
specific case.
For example, DCLGEN is unable to determine whether a column that is defined as
NOT NULL also contains the DEFAULT clause, so you must edit the DCLGEN
output to add the DEFAULT clause to the appropriate column definitions.
Notes:
1. For a distinct type, DCLGEN generates the host language equivalent of the source data type.
2. If your C compiler does not support the decimal data type, edit your DCLGEN output, and replace the decimal
data declarations with declarations of type double.
3. For a BLOB, CLOB, or DBCLOB data type, DCLGEN generates a LOB locator.
4. DCLGEN chooses the format based on the character you specify as the DBCS symbol on the COBOL Defaults
panel.
5. This declaration is used unless a date installation exit routine exists for formatting dates, in which case the length
is that specified for the LOCAL DATE LENGTH installation option.
6. This declaration is used unless a time installation exit routine exists for formatting times, in which case the length
is that specified for the LOCAL TIME LENGTH installation option.
For more details about the DCLGEN subcommand, see Part 3 of DB2 Command
Reference.
The COBOL Defaults panel is then displayed, as shown in Figure 22. Fill in the
COBOL Defaults panel as necessary. Press Enter to save the new defaults, if any,
and return to the DB2I Primary Option menu.
Figure 22. The COBOL defaults panel. Shown only if the field APPLICATION LANGUAGE on
the DB2I Defaults panel is IBMCOB.
Fill in the fields as shown in Figure 23 on page 126, and then press Enter.
Figure 23. DCLGEN panel—selecting source table and destination data set
DB2 again displays the DCLGEN screen, as shown in Figure 25. Press Enter to
return to the DB2I Primary Option menu.
For each language, this chapter provides unique instructions or details about:
v Defining the SQL communications area
v Defining SQL descriptor areas
v Embedding SQL statements
v Using host variables
v Declaring host variables
v Declaring host variable arrays for C or C++, COBOL, and PL/I
v Determining equivalent SQL data types
v Determining if SQL and host language data types are compatible
v Using indicator variables or host structures, depending on the language
v Handling SQL error return codes
For information about reading the syntax diagrams in this chapter, see “How to read
the syntax diagrams” on page xx.
For information about writing embedded SQL application programs in Java, see
DB2 Application Programming Guide and Reference for Java.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these values to determine whether the last SQL
statement was successful. All SQL statements in the program must be within the
scope of the declaration of the SQLCODE and SQLSTATE variables.
If your program is reentrant, you must include the SQLCA within a unique data area
that is acquired for your task (a DSECT). For example, at the beginning of your
program, specify:
PROGAREA DSECT
EXEC SQL INCLUDE SQLCA
As an alternative, you can create a separate storage area for the SQLCA and
provide addressability to that area.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLCA fields.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor, unless you use the precompiler option TWOPASS. See Chapter
5 of DB2 SQL Reference for more information about the INCLUDE statement and
Appendix C of DB2 SQL Reference for a complete description of SQLDA fields.
Each SQL statement in an assembler program must begin with EXEC SQL. The
EXEC and SQL keywords must appear on one line, but the remainder of the
statement can appear on subsequent lines.
| Multiple-row FETCH statements: You can use only the FETCH ... USING
| DESCRIPTOR form of the multiple-row FETCH statement in an assembler program.
| The DB2 precompiler does not recognize declarations of host variable arrays for an
| assembler program.
Continuation for SQL statements: The line continuation rules for SQL statements
are the same as those for assembler statements, except that you must specify
EXEC SQL within one line. Any part of the statement that does not fit on one line
can appear on subsequent lines, beginning at the continuation margin (column 16,
the default). Every line of the statement, except the last, must have a continuation
character (a non-blank character) immediately after the right margin in column 72.
Declaring tables and views: Your assembler program should include a DECLARE
statement to describe each table and view the program accesses.
Margins: The precompiler option MARGINS allows you to set a left margin, a right
margin, and a continuation margin. The default values for these margins are
columns 1, 71, and 16, respectively. If EXEC SQL starts before the specified left
margin, the DB2 precompiler does not recognize the SQL statement. If you use the
default margins, you can place an SQL statement anywhere between columns 2
and 71.
Names: You can use any valid assembler name for a host variable. However, do
not use external entry names or access plan names that begin with ’DSN’ or host
variable names that begin with ’SQL’. These names are reserved for DB2.
The first character of a host variable that is used in embedded SQL cannot be an
underscore. However, you can use an underscore as the first character in a symbol
that is not used in embedded SQL.
Statement labels: You can prefix an SQL statement with a label. The first line of an
SQL statement can use a label beginning in the left margin (column 1). If you do
not use a label, leave column 1 blank.
WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER
statement must be a label in the assembler source code and must be within the
scope of the SQL statements that WHENEVER affects.
CICS
An example of code to support reentrant programs, running under CICS,
follows:
DFHEISTG DSECT
DFHEISTG
EXEC SQL INCLUDE SQLCA
*
DS 0F
SQDWSREG EQU R7
SQDWSTOR DS (SQLDLEN)C RESERVE STORAGE TO BE USED FOR SQLDSECT
.
.
.
TSO
The sample program in prefix.SDSNSAMP(DSNTIAD) contains an example
of how to acquire storage for the SQLDSECT in a program that runs in a
TSO environment.
CICS
A CICS application program uses the DFHEIENT macro to generate the
entry point code. When using this macro, consider the following:
– If you use the default DATAREG in the DFHEIENT macro, register 13
points to the save area.
– If you use any other DATAREG in the DFHEIENT macro, you must
provide addressability to a save area.
For example, to use SAVED, you can code instructions to save, load,
and restore register 13 around each SQL statement as in the following
example.
ST 13,SAVER13 SAVE REGISTER 13
LA 13,SAVED POINT TO SAVE AREA
EXEC SQL . . .
L 13,SAVER13 RESTORE REGISTER 13
You can precede the assembler statements that define host variables with the
statement BEGIN DECLARE SECTION, and follow the assembler statements with
the statement END DECLARE SECTION. You must use the statements BEGIN
DECLARE SECTION and END DECLARE SECTION when you use the precompiler
option STDSQL(YES).
You can declare host variables in normal assembler style (DC or DS), depending on
the data type and the limitations on that data type. You can specify a value on DC
or DS declarations (for example, DC H’5’). The DB2 precompiler examines the
value only in packed decimal declarations.
An SQL statement that uses a host variable must be within the scope of the
statement that declares the variable.
Numeric host variables: Figure 27 on page 134 shows the syntax for declarations
of numeric host variables. In a packed decimal declaration, value specifies the
scale of the variable; if value does not include a decimal point, the scale is 0.
Figure 27. Declarations of numeric host variables (syntax diagram not
reproducible here: variable-name, then DC or DS, then one of H or HL2 for a
halfword, F or FL4 for a fullword, PLn’value’ for packed decimal, E, EH, or EB
with L4 for short floating point, or D, DH, or DB with L8 for long floating
point)
For floating-point data types (E, EH, EB, D, DH, and DB), DB2 uses the FLOAT
| precompiler option to determine whether the host variable is in IEEE binary
| floating-point or System/390® hexadecimal floating-point format. If the precompiler
option is FLOAT(S390), you need to define your floating-point host variables as E,
EH, D, or DH. If the precompiler option is FLOAT(IEEE), you need to define your
floating-point host variables as EB or DB. DB2 converts all floating-point input data
| to System/390 hexadecimal floating-point before storing it.
Character host variables: The three valid forms for character host variables are:
v Fixed-length strings
v Varying-length strings
v CLOBs
The following figures show the syntax for forms other than CLOBs. See Figure 34
on page 136 for the syntax of CLOBs.
Figure 28. Declarations of fixed-length character host variables (syntax
diagram not reproducible here: variable-name, then DC or DS, then C or CLn)
Figure 29. Declarations of varying-length character host variables (syntax
diagram not reproducible here: variable-name, then DC or DS, then a halfword
length field H or HL2 followed by CLn character data)
Graphic host variables: The three valid forms for graphic host variables are:
v Fixed-length strings
v Varying-length strings
v DBCLOBs
The following figures show the syntax for forms other than DBCLOBs. See
Figure 34 on page 136 for the syntax of DBCLOBs. In the syntax diagrams, value
denotes one or more DBCS characters, and the symbols < and > represent shift-out
and shift-in characters.
Figure 30. Declarations of fixed-length graphic host variables (syntax diagram
not reproducible here: variable-name, then DC or DS, then G or GLn, optionally
followed by ’<value>’)
Figure 31. Declarations of varying-length graphic host variables (syntax
diagram not reproducible here: variable-name, then DS or DC, then a halfword
length field H or HL2’m’ followed by GLn’<value>’ graphic data)
Result set locators: Figure 32 shows the syntax for declarations of result set
locators. See Chapter 25, “Using stored procedures for client/server processing,” on
page 569 for a discussion of how to use these host variables.
Figure 32. Declarations of result set locators (syntax diagram not reproducible
here: variable-name, then DC or DS, then F or FL4, a fullword)
Table Locators: Figure 33 shows the syntax for declarations of table locators. See
“Accessing transition tables in a user-defined function or stored procedure” on page
328 for a discussion of how to use these host variables.
LOB variables and locators: Figure 34 on page 136 shows the syntax for
declarations of BLOB, CLOB, and DBCLOB host variables and locators. See
Chapter 14, “Programming for large objects (LOBs),” on page 281 for a discussion
of how to use these host variables.
If you specify the length of the LOB in terms of KB, MB, or GB, you must leave no
spaces between the length and K, M, or G.
ROWIDs: Figure 35 shows the syntax for declarations of ROWID host variables.
See Chapter 14, “Programming for large objects (LOBs),” on page 281 for a
discussion of how to use these host variables.
Table 11. SQL data types the precompiler uses for assembler declarations (continued)
Assembler data type    SQLTYPE of host variable    SQLLEN of host variable    SQL data type
Notes:
1. m is the number of bytes.
2. n is the number of double-byte characters.
3. This data type cannot be used as a column type.
Table 12 on page 138 helps you define host variables that receive output from the
database. You can use Table 12 on page 138 to determine the assembler data type
that is equivalent to a given SQL data type. For example, if you retrieve
TIMESTAMP data, you can use the table to define a suitable host variable in the
program that receives the data value.
Table 12 on page 138 shows direct conversions between DB2 data types and host
data types. However, a number of DB2 data types are compatible. When you do
assignments or comparisons of data that have compatible data types, DB2 does
conversions between those compatible data types. See Table 1 on page 5 for
information about compatible data types.
Table 12. SQL data types mapped to typical assembler declarations (continued)

SQL data type    Assembler equivalent          Notes
Table locator    SQL TYPE IS TABLE LIKE        Use this data type only in a user-defined
                 table-name AS LOCATOR         function or stored procedure to receive
                                               rows of a transition table. Do not use this
                                               data type as a column type.
BLOB locator     SQL TYPE IS BLOB_LOCATOR      Use this data type only to manipulate data
                                               in BLOB columns. Do not use this data
                                               type as a column type.
CLOB locator     SQL TYPE IS CLOB_LOCATOR      Use this data type only to manipulate data
                                               in CLOB columns. Do not use this data
                                               type as a column type.
DBCLOB locator   SQL TYPE IS DBCLOB_LOCATOR    Use this data type only to manipulate data
                                               in DBCLOB columns. Do not use this data
                                               type as a column type.
BLOB(n)          SQL TYPE IS BLOB(n)           1≤n≤2147483647
CLOB(n)          SQL TYPE IS CLOB(n)           1≤n≤2147483647
DBCLOB(n)        SQL TYPE IS DBCLOB(n)         n is the number of double-byte characters.
                                               1≤n≤1073741823
ROWID            SQL TYPE IS ROWID
Notes:
1. IEEE floating-point host variables are not supported in user-defined functions and stored
procedures.
Host graphic data type: You can use the assembler data type “host graphic” in
SQL statements when the precompiler option GRAPHIC is in effect. However, you
cannot use assembler DBCS literals in SQL statements, even when GRAPHIC is in
effect.
Special purpose assembler data types: The locator data types are assembler
language data types and SQL data types. You cannot use locators as column types.
For information about how to use these data types, see the following sections:
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to the
column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. For
more information about indicator variables, see “Using indicator variables with host
variables” on page 75 or Chapter 2 of DB2 SQL Reference.
Example: The following example shows a FETCH statement with the declarations
of the host variables that are needed for the FETCH statement:
EXEC SQL FETCH CLS_CURSOR INTO :CLSCD, X
:DAY :DAYIND, X
:BGN :BGNIND, X
:END :ENDIND
Figure 36. Declarations of indicator variables (syntax diagram not reproducible
here: variable-name, then DC or DS, then H or HL2, a halfword)
| You can also use the MESSAGE_TEXT condition item field of the GET
| DIAGNOSTICS statement to convert an SQL return code into a text message.
| Programs that require long token message support should code the GET
| DIAGNOSTICS statement instead of DSNTIAR. For more information about GET
| DIAGNOSTICS, see “Using the GET DIAGNOSTICS statement” on page 84.
DSNTIAR syntax
CALL DSNTIAR,(sqlca, message, lrecl),MF=(E,PARM)
In this call, sqlca is the SQL communication area, message is the name of the
message output area (which consists of LINES lines of LRECL bytes each), and
lrecl is a fullword that contains the logical record length of output messages,
between 72 and 240.
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC,(eib,commarea,sqlca,msg,lrecl),MF=(E,PARM)
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
eib EXEC interface block
commarea communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see member DSN8FRDO in the data set
prefix.SDSNSAMP.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are also in the data set
prefix.SDSNSAMP.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these values to determine whether the last SQL
statement was successful. All SQL statements in the program must be within the
scope of the declaration of the SQLCODE and SQLSTATE variables.
A standard declaration includes both a structure definition and a static data area
named ’sqlca’. See Chapter 5 of DB2 SQL Reference for more information about
the INCLUDE statement and Appendix C of DB2 SQL Reference for a complete
description of SQLCA fields.
Unlike the SQLCA, more than one SQLDA can exist in a program, and an SQLDA
can have any valid name. You can code an SQLDA in a C program, either directly
or by using the SQL INCLUDE statement. The SQL INCLUDE statement requests a
standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA;
A standard declaration includes only a structure definition with the name ’sqlda’.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLDA fields.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor, unless you use the precompiler option TWOPASS. You can
place an SQLDA declaration wherever C allows a structure definition. Normal C
scoping rules apply.
Each SQL statement in a C program must begin with EXEC SQL and end with a
semicolon (;). The EXEC and SQL keywords must appear on one line, but the
remainder of the statement can appear on subsequent lines.
In general, because C is case sensitive, use uppercase letters to enter all SQL
keywords. However, if you use the FOLD precompiler suboption, DB2 folds
lowercase letters in SBCS SQL ordinary identifiers to uppercase. For information
about host language precompiler options, see Table 63 on page 462.
You must keep the case of host variable names consistent throughout the program.
For example, if a host variable name is lowercase in its declaration, it must be
lowercase in all SQL statements. You might code an UPDATE statement in a C
program as follows:
EXEC SQL
UPDATE DSN8810.DEPT
SET MGRNO = :mgr_num
WHERE DEPTNO = :int_dept;
Comments: You can include C comments (/* ... */) within SQL statements wherever
you can use a blank, except between the keywords EXEC and SQL. You can use
single-line comments (starting with //) in C language statements, but not in
embedded SQL. You cannot nest comments.
Declaring tables and views: Your C program should use the DECLARE TABLE
statement to describe each table and view the program accesses. You can use the
DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. For more information, see Chapter 8, “Generating declarations for your
tables using DCLGEN,” on page 121.
You cannot nest SQL INCLUDE statements. Do not use C #include statements to
include SQL statements or C host variable declarations.
Margins: Code SQL statements in columns 1 through 72, unless you specify other
margins to the DB2 precompiler. If EXEC SQL is not within the specified margins,
the DB2 precompiler does not recognize the SQL statement.
Names: You can use any valid C name for a host variable, subject to the following
restrictions:
Nulls and NULs: C and SQL differ in the way they use the word null. The C
language has a null character (NUL), a null pointer (NULL), and a null statement
(just a semicolon). The C NUL is a single character that compares equal to 0. The
C NULL is a special reserved pointer value that does not point to any valid data
object. The SQL null value is a special value that is distinct from all non-null values
and denotes the absence of a (nonnull) value. In this chapter, NUL is the null
character in C and NULL is the SQL null value.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers.
Trigraph characters: Some characters from the C character set are not available
on all keyboards. You can enter these characters into a C source program using a
sequence of three characters called a trigraph. The trigraph characters that DB2
supports are the same as those that the C compiler supports.
WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER
statement must be within the scope of the SQL statements that the WHENEVER
statement affects.
Special C considerations:
v Using the C/370™ multi-tasking facility, in which multiple tasks execute SQL
statements, causes unpredictable results.
v You must run the DB2 precompiler before running the C preprocessor.
v The DB2 precompiler does not support C preprocessor directives.
v If you use conditional compiler directives that contain C code, either place them
after the first C token in your application program, or include them in the C
program using the #include preprocessor directive.
Refer to the appropriate C documentation for more information about C
preprocessor directives.
| Precede C statements that define the host variables and host variable arrays with
| the BEGIN DECLARE SECTION statement, and follow the C statements with the
| END DECLARE SECTION statement. You can have more than one host variable
| declaration section in your program.
| A colon (:) must precede all host variables and all host variable arrays in an SQL
| statement.
| The names of host variables and host variable arrays must be unique within the
| program, even if the variables and variable arrays are in different blocks, classes, or
| procedures. You can qualify the names with a structure name to make them unique.
| An SQL statement that uses a host variable or host variable array must be within
| the scope of the statement that declares that variable or array. You define host
| variable arrays for use with multiple-row FETCH and INSERT statements.
Numeric host variables: Figure 37 shows the syntax for declarations of numeric
host variables.
Figure 37. Declarations of numeric host variables (syntax diagram not
reproducible here: an optional storage class of auto, extern, or static and an
optional const or volatile qualifier, then one of float, double, int, short
[int], sqlint32, long [int], long long, or decimal(integer[,integer]), then a
comma-separated list of variable names or *pointer-names, each with an optional
=expression initializer)
| Notes:
| 1. The SQL statement coprocessor is required if you use a pointer as a host
| variable.
Character host variables: The four valid forms for character host variables are:
v Single-character form
v NUL-terminated character form
v VARCHAR structured form
v CLOBs
The following figures show the syntax for forms other than CLOBs. See Figure 46
on page 153 for the syntax of CLOBs.
Figure 38 on page 148 shows the syntax for declarations of single-character host
variables.
Figure 38. Declarations of single-character host variables (syntax diagram not
reproducible here: an optional storage class and qualifier, then optionally
unsigned, then char, then a comma-separated list of variable names or
*pointer-names, each with an optional =expression initializer)
| Notes:
| 1. The SQL statement coprocessor is required if you use a pointer as a host
| variable.
Figure 39. Declarations of NUL-terminated character host variables (syntax
diagram not reproducible here: an optional storage class and qualifier, then
optionally unsigned, then char, then variable-name[length] or *pointer-name,
each with an optional =expression initializer)
Notes:
1. On input, the string contained by the variable must be NUL-terminated.
2. On output, the string is NUL-terminated.
3. A NUL-terminated character host variable maps to a varying-length character
string (except for the NUL).
| 4. The SQL statement coprocessor is required if you use a pointer as a host
| variable.
Figure 40. Declarations of character host variables that use the VARCHAR
structured form (syntax diagram not reproducible here: an optional storage
class and qualifier, then struct with an optional tag, containing a short (int)
field var-1 for the length and a char array var-2 for the data, then a
comma-separated list of variable names or *pointer-names, each with an optional
={expression, expression} initializer)
Notes:
1. var-1 and var-2 must be simple variable references. You cannot use them as
host variables.
2. You can use the struct tag to define other data areas that you cannot use as
host variables.
| 3. The SQL statement coprocessor is required if you use a pointer as a host
| variable.
Example: The following examples show valid and invalid declarations of the
VARCHAR structured form:
EXEC SQL BEGIN DECLARE SECTION;
Graphic host variables: The four valid forms for graphic host variables are:
v Single-graphic form
v NUL-terminated graphic form
v VARGRAPHIC structured form.
v DBCLOBs
| You can use the C data type sqldbchar to define a host variable for inserting,
updating, deleting, and selecting data in GRAPHIC or VARGRAPHIC columns.
The following figures show the syntax for forms other than DBCLOBs. See
Figure 46 on page 153 for the syntax of DBCLOBs.
Figure 41 on page 150 shows the syntax for declarations of single-graphic host
variables.
Figure 41. Declarations of single-graphic host variables (syntax diagram not
reproducible here: an optional storage class of auto, extern, or static and an
optional const or volatile qualifier, then sqldbchar, then a comma-separated
list of variable names or *pointer-names, each with an optional =expression
initializer)
| Notes:
| 1. The SQL statement coprocessor is required if you use a pointer as a host
| variable.
Figure 42. Declarations of NUL-terminated graphic host variables (syntax
diagram not reproducible here: an optional storage class and qualifier, then
sqldbchar, then variable-name[length] or *pointer-name, each with an optional
=expression initializer)
Notes:
1. length must be a decimal integer constant greater than 1 and not greater than
16352.
2. On input, the string in variable-name must be NUL-terminated.
3. On output, the string is NUL-terminated.
4. The NUL-terminated graphic form does not accept single-byte characters into
variable-name.
| 5. The SQL statement coprocessor is required if you use a pointer as a host
| variable.
Figure 43 on page 151 shows the syntax for declarations of graphic host variables
that use the VARGRAPHIC structured form.
Figure 43. Declarations of graphic host variables that use the VARGRAPHIC
structured form (syntax diagram not reproducible here: an optional storage
class and qualifier, then struct with an optional tag, containing a short (int)
field var-1 for the length and a sqldbchar array var-2[length] for the data,
then a comma-separated list of variable names or *pointer-names with optional
initializers)
Notes:
1. length must be a decimal integer constant greater than 1 and not greater than
16352.
2. var-1 must be less than or equal to length.
3. var-1 and var-2 must be simple variable references. You cannot use them as
host variables.
4. You can use the struct tag to define other data areas that you cannot use as
host variables.
| 5. The SQL statement coprocessor is required if you use a pointer as a host
| variable.
Example: The following examples show valid and invalid declarations of graphic
host variables that use the VARGRAPHIC structured form:
EXEC SQL BEGIN DECLARE SECTION;
Result set locators: Figure 44 on page 152 shows the syntax for declarations of
result set locators. See Chapter 25, “Using stored procedures for client/server
processing,” on page 569 for a discussion of how to use these host variables.
Figure 44. Declarations of result set locators (syntax diagram not reproducible
here: an optional storage class, then SQL TYPE IS RESULT_SET_LOCATOR VARYING,
then a comma-separated list of variable names or *pointer-names, each with an
optional init-value initializer)
Table Locators: Figure 45 shows the syntax for declarations of table locators. See
“Accessing transition tables in a user-defined function or stored procedure” on page
328 for a discussion of how to use these host variables.
Figure 45. Declarations of table locators (syntax diagram not reproducible
here: an optional storage class, then SQL TYPE IS TABLE LIKE table-name AS
LOCATOR, then a comma-separated list of variable names or *pointer-names, each
with an optional init-value initializer)
LOB Variables and Locators: Figure 46 on page 153 shows the syntax for
declarations of BLOB, CLOB, and DBCLOB host variables and locators. See
Chapter 14, “Programming for large objects (LOBs),” on page 281 for a discussion
of how to use these host variables.
Figure 46. Declarations of LOB host variables and locators (syntax diagram not
reproducible here: an optional storage class of auto, extern, static, or
register and an optional const or volatile qualifier, then SQL TYPE IS followed
by BLOB(length), CLOB(length), or DBCLOB(length) for a variable, or
BLOB_LOCATOR, CLOB_LOCATOR, or DBCLOB_LOCATOR for a locator, then a
comma-separated list of variable names)
ROWIDs: Figure 47 shows the syntax for declarations of ROWID host variables.
See Chapter 14, “Programming for large objects (LOBs),” on page 281 for a
discussion of how to use these host variables.
| Numeric host variable arrays: Figure 48 on page 154 shows the syntax for
| declarations of numeric host variable arrays.
Figure 48. Declarations of numeric host variable arrays (syntax diagram not
reproducible here: an optional storage class and qualifier, then optionally
unsigned, then one of float, double, int, short [int], long [int], long long,
or decimal(integer[,integer]), then variable-name[dimension], with an optional
={expression, ...} initializer list)
| Note:
| 1. dimension must be an integer constant between 1 and 32767.
| Character host variable arrays: The three valid forms for character host variable
| arrays are:
| v NUL-terminated character form
| v VARCHAR structured form
| v CLOBs
| The following figures show the syntax for forms other than CLOBs. See Figure 53
| on page 158 for the syntax of CLOBs.
Figure 49. Declarations of NUL-terminated character host variable arrays
(syntax diagram not reproducible here: an optional storage class and qualifier,
then optionally unsigned, then char, then variable-name[dimension][length],
with an optional ={expression, ...} initializer list)
| Notes:
| 1. On input, the strings contained in the variable arrays must be NUL-terminated.
| 2. On output, the strings are NUL-terminated.
| 3. The strings in a NUL-terminated character host variable array map to
| varying-length character strings (except for the NUL).
| 4. dimension must be an integer constant between 1 and 32767.
Figure 50. Declarations of character host variable arrays that use the VARCHAR
structured form (syntax diagram not reproducible here: an optional storage
class and qualifier, then struct containing a short (int) field var-1 for the
length and a char array var-2[length] for the data, then
variable-name[dimension], with an optional ={expression, ...} initializer list)
| Notes:
| 1. var-1 must be a simple variable reference, and var-2 must be a variable array
| reference.
| 2. You can use the struct tag to define other data areas, which you cannot use as
| host variable arrays.
| 3. dimension must be an integer constant between 1 and 32767.
| Example: The following examples show valid and invalid declarations of VARCHAR
| host variable arrays:
| EXEC SQL BEGIN DECLARE SECTION;
| /* valid declaration of VARCHAR host variable array */
| struct VARCHAR {
| short len;
| char s[18];
| } name[10];
|
| /* invalid declaration of VARCHAR host variable array */
| struct VARCHAR name[10];
| Graphic host variable arrays: The two valid forms for graphic host variable arrays
| are:
| v NUL-terminated graphic form
| v VARGRAPHIC structured form.
| You can use the C data type sqldbchar to define a host variable array for
| inserting, updating, deleting, and selecting data in GRAPHIC or VARGRAPHIC
| columns.
| Figure 51 shows the syntax for declarations of NUL-terminated graphic host variable
| arrays.
Figure 51. Declarations of NUL-terminated graphic host variable arrays (syntax
diagram not reproducible here: an optional storage class and qualifier, then
optionally unsigned, then sqldbchar, then variable-name[dimension][length],
with an optional ={expression, ...} initializer list)
| Notes:
| 1. length must be a decimal integer constant greater than 1 and not greater than
| 16352.
| 2. On input, the strings contained in the variable arrays must be NUL-terminated.
| 3. On output, the strings are NUL-terminated.
| 4. The NUL-terminated graphic form does not accept single-byte characters into
| the variable array.
| 5. dimension must be an integer constant between 1 and 32767.
| Figure 52 on page 157 shows the syntax for declarations of graphic host variable
| arrays that use the VARGRAPHIC structured form.
Figure 52. Declarations of graphic host variable arrays that use the
VARGRAPHIC structured form (syntax diagram not reproducible here: an optional
storage class and qualifier, then struct containing a short (int) field var-1
for the length and a sqldbchar array var-2[length] for the data, then
variable-name[dimension], with an optional ={expression, ...} initializer list)
| Notes:
| 1. length must be a decimal integer constant greater than 1 and not greater than
| 16352.
| 2. var-1 must be a simple variable reference, and var-2 must be a variable array
| reference.
| 3. You can use the struct tag to define other data areas, which you cannot use as
| host variable arrays.
| 4. dimension must be an integer constant between 1 and 32767.
| Example: The following examples show valid and invalid declarations of graphic
| host variable arrays that use the VARGRAPHIC structured form:
| EXEC SQL BEGIN DECLARE SECTION;
| /* valid declaration of host variable array vgraph */
| struct VARGRAPH {
| short len;
| sqldbchar d[10];
| } vgraph[20];
|
| /* invalid declaration of host variable array vgraph */
| struct VARGRAPH vgraph[20];
| LOB variable arrays and locators: Figure 53 on page 158 shows the syntax for
| declarations of BLOB, CLOB, and DBCLOB host variable arrays and locators. See
| Chapter 14, “Programming for large objects (LOBs),” on page 281 for a discussion
| of how to use LOB variables.
Figure 53. Declarations of LOB host variable arrays and locators (syntax
diagram not reproducible here: an optional storage class of auto, extern,
static, or register and an optional const or volatile qualifier, then SQL TYPE
IS followed by a LOB data type or LOB locator type, then
variable-name[dimension], with an optional ={expression, ...} initializer list)
| Note:
| 1. dimension must be an integer constant between 1 and 32767.
| ROWIDs: Figure 54 shows the syntax for declarations of ROWID variable arrays.
| See Chapter 14, “Programming for large objects (LOBs),” on page 281 for a
| discussion of how to use these host variable arrays.
| Note:
| 1. dimension must be an integer constant between 1 and 32767.
In this example, target is the name of a host structure consisting of the c1, c2, and
c3 fields. c1 and c3 are character arrays, and c2 is the host variable equivalent to
the SQL VARCHAR data type. The target host structure can be part of another host
structure but must be the deepest level of the nested structure.
Figure 55. Declarations of host structures (syntax diagram not reproducible
here: an optional storage class, an optional const or volatile qualifier, an
optional packed qualifier, then struct with an optional tag, whose members can
be numeric variables var-1 of type float, double, int, short, sqlint32, long
[int], long long, or decimal(integer[,integer]); char arrays var-2[length],
optionally unsigned; sqldbchar arrays var-5[length]; VARCHAR structures;
VARGRAPHIC structures; SQL TYPE IS ROWID; or a LOB data type; followed by the
structure variable-name and an optional =expression initializer)
Figure 56 on page 160 shows the syntax for VARCHAR structures that are used
within declarations of host structures.
Figure 56. VARCHAR structures within host structure declarations (syntax
diagram not reproducible here: struct with an optional tag, containing a
signed short (int) field var-3 for the length and a char array for the data)
Figure 57 shows the syntax for VARGRAPHIC structures that are used within
declarations of host structures.
Figure 57. VARGRAPHIC structures within host structure declarations (syntax
diagram not reproducible here: struct with an optional tag, containing a
signed short (int) field var-6 for the length and a sqldbchar array
var-7[length] for the data)
Figure 58 shows the syntax for LOB data types that are used within declarations of
host structures.
Table 13. SQL data types the precompiler uses for C declarations (continued)

C data type                      SQLTYPE of      SQLLEN of       SQL data type
                                 host variable   host variable
float                            480             4               FLOAT (single precision)
double                           480             8               FLOAT (double precision)
Single-character form            452             1               CHAR(1)
NUL-terminated character form    460             n               VARCHAR(n-1)
VARCHAR structured form,         448             n               VARCHAR(n)
  1<=n<=255
VARCHAR structured form,         456             n               VARCHAR(n)
  n>255
Single-graphic form              468             1               GRAPHIC(1)
NUL-terminated graphic form      400             n               VARGRAPHIC(n-1)
  (sqldbchar)
VARGRAPHIC structured form,      464             n               VARGRAPHIC(n)
  1<=n<128
VARGRAPHIC structured form,      472             n               VARGRAPHIC(n)
  n>127
SQL TYPE IS                      972             4               Result set locator (note 2)
  RESULT_SET_LOCATOR
Notes:
1. p is the precision; in SQL terminology, this is the total number of digits. In C, this is
called the size.
s is the scale; in SQL terminology, this is the number of digits to the right of the decimal
point. In C, this is called the precision.
| C++ does not support the decimal data type.
2. Do not use this data type as a column type.
3. n is the number of double-byte characters.
4. No exact equivalent. Use DECIMAL(19,0).
Table 14 helps you define host variables that receive output from the database. You
can use the table to determine the C data type that is equivalent to a given SQL
data type. For example, if you retrieve TIMESTAMP data, you can use the table to
define a suitable host variable in the program that receives the data value.
Table 14 shows direct conversions between DB2 data types and host data types.
However, a number of DB2 data types are compatible. When you do assignments
or comparisons of data that have compatible data types, DB2 does conversions
between those compatible data types. See Table 1 on page 5 for information about
compatible data types.
Table 14. SQL data types mapped to typical C declarations

SQL data type            C data type                      Notes
SMALLINT                 short int
INTEGER                  long int
DECIMAL(p,s) or          decimal                          You can use the double data type if your C
NUMERIC(p,s)                                              compiler does not have a decimal data type;
                                                          however, double is not an exact equivalent.
REAL or FLOAT(n)         float                            1<=n<=21
DOUBLE PRECISION or      double                           22<=n<=53
FLOAT(n)
CHAR(1)                  single-character form
CHAR(n)                  no exact equivalent              If n>1, use NUL-terminated character form.
VARCHAR(n)               NUL-terminated character form    If data can contain character NULs (\0), use
                                                          the VARCHAR structured form. Allow at least
                                                          n+1 to accommodate the NUL-terminator.
                         VARCHAR structured form
GRAPHIC(1)               single-graphic form
GRAPHIC(n)               no exact equivalent              If n>1, use NUL-terminated graphic form. n
                                                          is the number of double-byte characters.
C data types with no SQL equivalent: C supports some data types and storage
| classes with no SQL equivalents, for example, the register storage class, typedef,
| long long, and pointers.
SQL data types with no C equivalent: If your C compiler does not have a decimal
data type, no exact equivalent exists for the SQL DECIMAL data type. In this case,
to hold the value of such a variable, you can use:
v An integer or floating-point variable, which converts the value. If you choose
integer, you will lose the fractional part of the number. If the decimal number can
exceed the maximum value for an integer, or if you want to preserve a fractional
value, you can use floating-point numbers. Floating-point numbers are
approximations of real numbers. Therefore, when you assign a decimal number
to a floating-point variable, the result might be different from the original number.
v A character-string host variable. Use the CHAR function to get a string
representation of a decimal number.
v The DECIMAL function to explicitly convert a value to a decimal data type, as in
this example:
long duration=10100;      /* 1 year and 1 month */
char result_dt[11];
EXEC SQL SELECT START_DATE + DECIMAL(:duration,8,0)
  INTO :result_dt FROM TABLE1;
Special purpose C data types: The locator data types are both C data types and
SQL data types. You cannot use locators as column types. For information about how to
use these data types, see the following sections:
PREPARE or DESCRIBE statements: You cannot use a host variable that is of the
| NUL-terminated form in either a PREPARE or DESCRIBE statement when you use
| the DB2 precompiler. However, if you use the SQL statement coprocessor for either
| C or C++, you can use host variables of the NUL-terminated form in PREPARE,
| DESCRIBE, and EXECUTE IMMEDIATE statements.
Truncation: Be careful of truncation. Ensure that the host variable you declare can
contain the data and a NUL terminator, if needed. Retrieving a floating-point or
decimal column value into a long integer host variable removes any fractional part
of the value.
In SQL, you can use double quotes to delimit identifiers and apostrophes to delimit
string constants. The following examples illustrate the use of apostrophes and
quotes in SQL.
Quotes
SELECT "COL#1" FROM TBL1;
Apostrophes
SELECT COL1 FROM TBL1 WHERE COL2 = ’BELL’;
Character data in SQL is distinct from integer data. Character data in C is a subtype
of integer data.
Varying-length strings: For varying-length BIT data, use the VARCHAR structured
form. Some C string manipulation functions process NUL-terminated strings and
other functions process strings that are not NUL-terminated. The C string
manipulation functions that process NUL-terminated strings cannot handle bit data
because these functions might misinterpret a NUL character to be a NUL-terminator.
Using indicator variables: If you provide an indicator variable for the variable X,
when DB2 retrieves a null value for X, it puts a negative value in the indicator
variable and does not update X. Your program should check the indicator variable
before using X. If the indicator variable is negative, you know that X is null and any
value you find in X is irrelevant.
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to the
column and ignores any value in X. For more information about indicator variables,
see “Using indicator variables with host variables” on page 75.
| Using indicator variable arrays: When you retrieve data into a host variable array,
| if a value in its indicator array is negative, you can disregard the contents of the
| corresponding element in the host variable array. For more information about
| indicator variable arrays, see “Using indicator variable arrays with host variable
| arrays” on page 79.
Declaring indicator variables: You declare indicator variables in the same way as
host variables. You can mix the declarations of the two types of variables in any
way that seems appropriate.
Example: The following example shows a FETCH statement with the declarations
of the host variables that are needed for the FETCH statement:
EXEC SQL FETCH CLS_CURSOR INTO :ClsCd,
:Day :DayInd,
:Bgn :BgnInd,
:End :EndInd;
[auto | extern | static] [const | volatile]
   [signed] short [int] variable-name [ = expression ]
   [ , variable-name [ = expression ] ] ... ;
Declaring indicator variable arrays: Figure 60 shows the syntax for declarations
| of an indicator array or a host structure indicator array.

[auto | extern | static] [const | volatile]
   [signed] short [int] variable-name [ dimension ] [ = expression ] ;
| You can also use the MESSAGE_TEXT condition item field of the GET
| DIAGNOSTICS statement to convert an SQL return code into a text message.
| Programs that require long token message support should code the GET
| DIAGNOSTICS statement instead of DSNTIAR. For more information about GET
| DIAGNOSTICS, see “Using the GET DIAGNOSTICS statement” on page 84.
DSNTIAR syntax
rc = dsntiar(&sqlca, &message, &lrecl);
where error_message is the name of the message output area (an output area, in
VARCHAR format, that you pass as &message), data_dim is the number of lines in
the message output area, and data_len is the length of each line.
&lrecl
A fullword containing the logical record length of output messages, between 72
and 240.
For C, include:
#pragma linkage (dsntiar,OS)
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
rc = DSNTIAC(&eib, &commarea, &sqlca, &message, &lrecl);
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
&eib EXEC interface block
&commarea
communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
| Declaring host variable arrays: For both C and C++, you cannot specify the
| _packed attribute on the structure declarations for varying-length character arrays,
| varying-length graphic arrays, or LOB arrays that are to be used in multiple-row
| INSERT and FETCH statements. In addition, the #pragma pack(1) directive cannot
| be in effect if you plan to use these arrays in multiple-row statements.
Except where noted otherwise, this information pertains to all COBOL compilers
supported by DB2 UDB for z/OS.
v An SQLCODE variable declared as PIC S9(9) BINARY, PIC S9(9) COMP-4, PIC
S9(9) COMP-5, or PICTURE S9(9) COMP
v An SQLSTATE variable declared as PICTURE X(5)
Alternatively, you can include an SQLCA, which contains the SQLCODE and
SQLSTATE variables.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these values to determine whether the last SQL
statement was successful. All SQL statements in the program must be within the
scope of the declaration of the SQLCODE and SQLSTATE variables.
When you use the precompiler option STDSQL(YES), you must declare an
SQLCODE variable. DB2 declares an SQLCA area for you in the
WORKING-STORAGE SECTION. DB2 controls the structure and location of the
SQLCA.
You can specify INCLUDE SQLCA or a declaration for SQLCODE wherever you
can specify a 77 level or a record description entry in the WORKING-STORAGE
SECTION. You can declare a stand-alone SQLCODE variable in either the
WORKING-STORAGE SECTION or LINKAGE SECTION.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLCA fields.
Unlike the SQLCA, a program can have more than one SQLDA, and an SQLDA
can have any valid name. The SQL INCLUDE statement does not provide an
SQLDA mapping for COBOL. You can define the SQLDA using one of the following
two methods:
v For COBOL programs compiled with any compiler except the OS/VS COBOL
compiler, you can code the SQLDA declarations in your program. For more
information, see “Using dynamic SQL in COBOL” on page 568. You must place
SQLDA declarations in the WORKING-STORAGE SECTION or LINKAGE
SECTION of your program, wherever you can specify a record description entry
in that section.
v For COBOL programs compiled with any compiler, you can call a subroutine
(written in C, PL/I, or assembler language) that uses the INCLUDE SQLDA
statement to define the SQLDA. The subroutine can also include SQL statements
for any dynamic SQL functions you need. For more information on using dynamic
SQL, see Chapter 24, “Coding dynamic SQL in application programs,” on page
535.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor. An SQL statement that uses a host variable must be within the
scope of the statement that declares the variable.
Notes:
1. When including host variable declarations, the INCLUDE statement must be in the
WORKING-STORAGE SECTION or the LINKAGE SECTION.
Each SQL statement in a COBOL program must begin with EXEC SQL and end
with END-EXEC. If the SQL statement appears between two COBOL statements,
the period is optional and might not be appropriate. If the statement appears in an
IF...THEN set of COBOL statements, omit the ending period to avoid inadvertently
ending the IF statement. The EXEC and SQL keywords must appear on one line,
but the remainder of the statement can appear on subsequent lines.
EXEC SQL
UPDATE DSN8810.DEPT
SET MGRNO = :MGR-NUM
WHERE DEPTNO = :INT-DEPT
END-EXEC.
| In addition, you can include SQL comments in any embedded SQL statement.
Continuation for SQL statements: The rules for continuing a character string
constant from one line to the next in an SQL statement embedded in a COBOL
program are the same as those for continuing a non-numeric literal in COBOL.
However, you can use either a quote or an apostrophe as the first nonblank
character in area B of the continuation line. The same rule applies for the
continuation of delimited identifiers and does not depend on the string delimiter
option.
Declaring tables and views: Your COBOL program should include the statement
DECLARE TABLE to describe each table and view the program accesses. You can
use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. You should include the DCLGEN members in the DATA DIVISION. For
more information, see Chapter 8, “Generating declarations for your tables using
DCLGEN,” on page 121.
If you are using the DB2 precompiler, you cannot nest SQL INCLUDE statements.
In this case, do not use COBOL verbs to include SQL statements or host variable
declarations, and do not use the SQL INCLUDE statement to include CICS
preprocessor related code. In general, if you are using the DB2 precompiler, use
the SQL INCLUDE statement only for SQL-related coding. If you are using the
COBOL SQL coprocessor, none of these restrictions apply.
Margins: You must code EXEC SQL in columns 12 through 72; otherwise the DB2
precompiler does not recognize the SQL statement. Continued lines of an SQL
statement can be in columns 8 through 72.
Names: You can use any valid COBOL name for a host variable. Do not use
external entry names or access plan names that begin with ’DSN’, and do not use
host variable names that begin with ’SQL’. These names are reserved for DB2.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers.
WHENEVER statement: The target for the GOTO clause in an SQL statement
WHENEVER must be a section name or unqualified paragraph name in the
PROCEDURE DIVISION.
Because stored procedures use CAF, you must also compile COBOL stored
procedures with the option NODYNAM.
v If a COBOL program contains several entry points or is called several times, the
USING clause of the entry statement that executes before the first SQL
statement executes must contain the SQLCA and all linkage section entries that
any SQL statement uses as host variables.
| v If you use the DB2 precompiler, the REPLACE statement has no effect on SQL
| statements. It affects only the COBOL statements that the precompiler generates.
| If you use the SQL statement coprocessor, the REPLACE statement replaces
| text strings in SQL statements as well as in generated COBOL statements.
| v If you use the DB2 precompiler, no compiler directives should appear between
| the PROCEDURE DIVISION and the DECLARATIVES statement.
v Do not use COBOL figurative constants (such as ZERO and SPACE), symbolic
characters, reference modification, and subscripts within SQL statements.
v Observe the rules in Chapter 2 of DB2 SQL Reference when you name SQL
identifiers. However, for COBOL only, the names of SQL identifiers can follow the
rules for naming COBOL words, if the names do not exceed the allowable length
for the DB2 object. For example, the name 1ST-TIME is a valid cursor name
because it is a valid COBOL word, but the name 1ST_TIME is not valid because
it is not a valid SQL identifier or a valid COBOL word.
v Observe these rules for hyphens:
– Surround hyphens used as subtraction operators with spaces. DB2 usually
interprets a hyphen with no spaces around it as part of a host variable name.
– You can use hyphens in SQL identifiers under either of the following
circumstances:
- The application program is a local application that runs on DB2 UDB for
OS/390 Version 6 or later.
- The application program accesses remote sites, and the local site and
remote sites are DB2 UDB for OS/390 Version 6 or later.
v If you include an SQL statement in a COBOL PERFORM ... THRU paragraph and
also specify the SQL statement WHENEVER ... GO, the COBOL compiler returns
the warning message IGYOP3094. That message might indicate a problem. This
usage is not recommended.
v If you are using the DB2 precompiler and VS COBOL II or later (with the
compiler option NOCMPR2), the following additional restrictions apply:
– All SQL statements and any host variables they reference must be within the
first program when using nested programs or batch compilation.
– DB2 COBOL programs must have a DATA DIVISION and a PROCEDURE
DIVISION. Both divisions and the WORKING-STORAGE section must be
present in programs that contain SQL statements.
If you pass host variables with address changes into a program more than once,
the called program must reset SQL-INIT-FLAG. Resetting this flag indicates that the
storage must be initialized when the next SQL statement executes. To reset the flag,
insert the statement MOVE ZERO TO SQL-INIT-FLAG in the called program’s
PROCEDURE DIVISION, ahead of any executable SQL statements that use the
host variables.
If you use the COBOL SQL statement coprocessor, the called program does not
need to reset SQL-INIT-FLAG.
| program’s DATA DIVISION. You must explicitly declare each host variable and host
| variable array before using them in an SQL statement.
| You can precede COBOL statements that define the host variables and host
| variable arrays with the statement BEGIN DECLARE SECTION, and follow the
| statements with the statement END DECLARE SECTION. You must use the
| statements BEGIN DECLARE SECTION and END DECLARE SECTION when you
| use the precompiler option STDSQL(YES).
| A colon (:) must precede all host variables and all host variable arrays in an SQL
| statement.
| The names of host variables and host variable arrays should be unique within the
| source data set or member, even if the variables and variable arrays are in different
| blocks, classes, or procedures. You can qualify the names with a structure name to
| make them unique.
| An SQL statement that uses a host variable or host variable array must be within
| the scope of the statement that declares that variable or array. You define host
| variable arrays for use with multiple-row FETCH and INSERT statements.
| You can specify OCCURS when defining an indicator structure, a host variable
| array, or an indicator variable array. You cannot specify OCCURS for any other type
| of host variable.
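As an illustration of the OCCURS rule above, host variable arrays and their indicator arrays for a multiple-row FETCH might be declared as follows. This is a hedged sketch: the names and dimensions are illustrative, not taken from the manual.

```cobol
01  OUTPUT-VARS.
    05  NAME       PIC X(40)        OCCURS 10 TIMES.
    05  SERIAL     PIC S9(9) BINARY OCCURS 10 TIMES.
01  IND-VARS.
    05  NAME-IND   PIC S9(4) BINARY OCCURS 10 TIMES.
    05  SERIAL-IND PIC S9(4) BINARY OCCURS 10 TIMES.
```

A multiple-row FETCH could then name :NAME :NAME-IND and :SERIAL :SERIAL-IND as its targets; OCCURS appears only on these array and indicator-array items, never on ordinary host variables.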
Numeric host variables: The three valid forms of numeric host variables are:
v Floating-point numbers
v Integers and small integers
v Decimal numbers
Figure 61 shows the syntax for declarations of floating-point or real host variables.

{ 01 | 77 | level-1 } variable-name
   [ USAGE [IS] ] { COMPUTATIONAL-1 | COMP-1 | COMPUTATIONAL-2 | COMP-2 }
   [ VALUE [IS] numeric-constant ] .
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. COMPUTATIONAL-1 and COMP-1 are equivalent.
3. COMPUTATIONAL-2 and COMP-2 are equivalent.
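Declarations that match this syntax might look like the following (the names are illustrative assumptions, not from the manual):

```cobol
77  AVG-SALARY  USAGE COMP-2.
01  RAISE-PCT   USAGE COMP-1 VALUE 0.
```

COMP-2 gives a double-precision value and COMP-1 a single-precision one, matching the FLOAT mappings shown earlier for C.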
Figure 62 shows the syntax for declarations of integer and small integer host
variables.

{ 01 | 77 | level-1 } variable-name
   { PICTURE | PIC } [IS] { S9(4) | S9999 | S9(9) | S999999999 }
   [ USAGE [IS] ] { BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5
                  | COMPUTATIONAL | COMP }
   [ VALUE [IS] numeric-constant ] .
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. The COBOL binary integer data types BINARY, COMPUTATIONAL, COMP,
COMPUTATIONAL-4, and COMP-4 are equivalent.
3. COMPUTATIONAL-5 (and COMP-5) are equivalent to the other COBOL binary
integer data types if you compile the other data types with TRUNC(BIN).
4. Any specification for scale is ignored.
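For example (illustrative names, not from the manual), a SMALLINT and an INTEGER host variable might be declared as:

```cobol
77  DEPT-COUNT    PIC S9(4) BINARY.
01  EMPLOYEE-NUM  PIC S9(9) COMP-5 VALUE 0.
```

PIC S9(4) corresponds to SMALLINT and PIC S9(9) to INTEGER; COMP-5 avoids the decimal truncation behavior that TRUNC(STD) imposes on the other binary types.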
Figure 63 shows the syntax for declarations of decimal host variables.

{ 01 | 77 | level-1 } variable-name { PICTURE | PIC } [IS] picture-string
   { [ USAGE [IS] ] { PACKED-DECIMAL | COMPUTATIONAL-3 | COMP-3 }
   | [ USAGE [IS] ] DISPLAY SIGN [IS] LEADING SEPARATE [CHARACTER] }
   [ VALUE [IS] numeric-constant ] .
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. PACKED-DECIMAL, COMPUTATIONAL-3, and COMP-3 are equivalent. The
picture-string that is associated with these types must have the form
S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9) or S9(i)V.
3. The picture-string that is associated with SIGN LEADING SEPARATE must have
the form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9, or S9...9V,
with i instances of 9).
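Declarations that fit this syntax might look like the following (illustrative names and pictures, not from the manual):

```cobol
77  UNIT-PRICE  PIC S9(5)V9(2) PACKED-DECIMAL.
01  DISCOUNT    PIC S9(3)V9(2) USAGE COMP-3 VALUE 0.
```

Both pictures follow the required S9(i)V9(d) form; UNIT-PRICE corresponds to a DECIMAL(7,2) column.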
Character host variables: The three valid forms of character host variables are:
v Fixed-length strings
v Varying-length strings
v CLOBs
The following figures show the syntax for forms other than CLOBs. See Figure 70
on page 182 for the syntax of CLOBs.
Figure 64 shows the syntax for declarations of fixed-length character host variables.

{ 01 | 77 | level-1 } variable-name { PICTURE | PIC } [IS] picture-string
   [ [ USAGE [IS] ] DISPLAY ] [ VALUE [IS] character-constant ] .
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
| 2. The picture-string that is associated with these forms must be X(m) (or XX...X,
| with m instances of X), with 1 <= m <= 32767 for fixed-length strings. However,
| the maximum length of the CHAR data type (fixed-length character string) in
| DB2 is 255 bytes.
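For example (illustrative names, not from the manual), fixed-length character host variables for a CHAR(36) and a CHAR(1) column might be declared as:

```cobol
77  DEPT-NAME  PIC X(36).
01  STATUS-CD  PIC X VALUE SPACE.
```

The picture length matches the column length exactly, up to the 255-byte CHAR limit noted above.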
Figure 65 shows the syntax for declarations of varying-length character host
variables.

{ 01 | level-1 } variable-name .
   49 var-1 { PICTURE | PIC } [IS] { S9(4) | S9999 }
      [ USAGE [IS] ] { BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5
                     | COMPUTATIONAL | COMP }
      [ VALUE [IS] numeric-constant ] .
   49 var-2 { PICTURE | PIC } [IS] picture-string [ [ USAGE [IS] ] DISPLAY ]
      [ VALUE [IS] character-constant ] .
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. DB2 uses the full length of the S9(4) BINARY variable even though COBOL with
TRUNC(STD) only recognizes values up to 9999. This can cause data
truncation errors when COBOL statements execute and might effectively limit
the maximum length of variable-length character strings to 9999. Consider using
the TRUNC(BIN) compiler option or USAGE COMP-5 to avoid data truncation.
3. For fixed-length strings, the picture-string must be X(m) (or XX...X, with m
instances of X), with 1 <= m <= 32767; for other strings, m cannot be greater
than the maximum size of a varying-length character string.
4. You cannot directly reference var-1 and var-2 as host variables.
5. You cannot use an intervening REDEFINE at level 49.
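A declaration that follows this pattern might look like the following (illustrative names and length, not from the manual):

```cobol
01  PROJ-DESC.
    49  PROJ-DESC-LEN   PIC S9(4) USAGE COMP-5.
    49  PROJ-DESC-TEXT  PIC X(200).
```

In an SQL statement you reference :PROJ-DESC, not the 49-level items; COMP-5 is used for the length field per note 2, to avoid the 9999 truncation limit under TRUNC(STD).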
Graphic character host variables: The three valid forms for graphic character host
variables are:
v Fixed-length strings
v Varying-length strings
v DBCLOBs
The following figures show the syntax for forms other than DBCLOBs. See
Figure 70 on page 182 for the syntax of DBCLOBs.
Figure 66 shows the syntax for declarations of fixed-length graphic host variables.

{ 01 | 77 | level-1 } variable-name { PICTURE | PIC } [IS] picture-string
   [ USAGE [IS] ] { DISPLAY-1 | NATIONAL } [ VALUE [IS] graphic-constant ] .
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. For fixed-length strings, the picture-string is G(m) or N(m) (or GG...G or
NN...N, with m instances of G or N), with 1 <= m <= 127; for other strings, m
cannot be greater than the maximum size of a varying-length graphic string.
3. Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string for
USAGE NATIONAL, you must use N in place of G. USAGE NATIONAL is
supported only through the SQL statement coprocessor.
Figure 67 on page 181 shows the syntax for declarations of varying-length graphic
host variables.
{ 01 | level-1 } variable-name .
   49 var-1 { PICTURE | PIC } [IS] { S9(4) | S9999 }
      [ USAGE [IS] ] { BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5
                     | COMPUTATIONAL | COMP }
      [ VALUE [IS] numeric-constant ] .
   49 var-2 { PICTURE | PIC } [IS] picture-string [ USAGE [IS] ] { DISPLAY-1 | NATIONAL }
      [ VALUE [IS] graphic-constant ] .
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. DB2 uses the full length of the S9(4) BINARY variable even though COBOL with
TRUNC(STD) only recognizes values up to 9999. This can cause data
truncation errors when COBOL statements execute and might effectively limit
the maximum length of variable-length character strings to 9999. Consider using
the TRUNC(BIN) compiler option or USAGE COMP-5 to avoid data truncation.
3. For fixed-length strings, the picture-string is G(m) or N(m) (or GG...G or
NN...N, with m instances of G or N), with 1 <= m <= 127; for other strings, m
cannot be greater than the maximum size of a varying-length graphic string.
4. Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string for
USAGE NATIONAL, you must use N in place of G. USAGE NATIONAL is
supported only through the SQL statement coprocessor.
5. You cannot directly reference var-1 and var-2 as host variables.
Result set locators: Figure 68 on page 182 shows the syntax for declarations of
result set locators. See Chapter 25, “Using stored procedures for client/server
processing,” on page 569 for a discussion of how to use these host variables.
Table Locators: Figure 69 shows the syntax for declarations of table locators. See
“Accessing transition tables in a user-defined function or stored procedure” on page
328 for a discussion of how to use these host variables.
LOB Variables and Locators: Figure 70 shows the syntax for declarations of
BLOB, CLOB, and DBCLOB host variables and locators. See Chapter 14,
“Programming for large objects (LOBs),” on page 281 for a discussion of how to
use these host variables.
ROWIDs: Figure 71 shows the syntax for declarations of ROWID host variables.
See Chapter 14, “Programming for large objects (LOBs),” on page 281 for a
discussion of how to use these host variables.
| Numeric host variable arrays: The three valid forms of numeric host variable
| arrays are:
| v Floating-point numbers
| v Integers and small integers
| v Decimal numbers
| Figure 72 shows the syntax for declarations of floating-point host variable arrays.

level-1 variable-name
   [ USAGE [IS] ] { COMPUTATIONAL-1 | COMP-1 | COMPUTATIONAL-2 | COMP-2 }
   OCCURS dimension [TIMES] [ VALUE [IS] numeric-constant ] .
| Notes:
| 1. level-1 indicates a COBOL level between 2 and 48.
| 2. COMPUTATIONAL-1 and COMP-1 are equivalent.
| 3. COMPUTATIONAL-2 and COMP-2 are equivalent.
| 4. dimension must be an integer constant between 1 and 32767.
| Figure 73 on page 184 shows the syntax for declarations of integer and small
| integer host variable arrays.

level-1 variable-name { PICTURE | PIC } [IS] { S9(4) | S9999 | S9(9) | S999999999 }
   [ USAGE [IS] ] { BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5
                  | COMPUTATIONAL | COMP }
   OCCURS dimension [TIMES] [ VALUE [IS] numeric-constant ] .
| Notes:
| 1. level-1 indicates a COBOL level between 2 and 48.
| 2. The COBOL binary integer data types BINARY, COMPUTATIONAL, COMP,
| COMPUTATIONAL-4, and COMP-4 are equivalent.
| 3. COMPUTATIONAL-5 (and COMP-5) are equivalent to the other COBOL binary
| integer data types if you compile the other data types with TRUNC(BIN).
| 4. Any specification for scale is ignored.
| 5. dimension must be an integer constant between 1 and 32767.
| Figure 74 shows the syntax for declarations of decimal host variable arrays.

level-1 variable-name { PICTURE | PIC } [IS] picture-string
   { [ USAGE [IS] ] { PACKED-DECIMAL | COMPUTATIONAL-3 | COMP-3 }
   | [ USAGE [IS] ] DISPLAY SIGN [IS] LEADING SEPARATE [CHARACTER] }
   OCCURS dimension [TIMES] [ VALUE [IS] numeric-constant ] .
| Notes:
| 1. level-1 indicates a COBOL level between 2 and 48.
| 2. PACKED-DECIMAL, COMPUTATIONAL-3, and COMP-3 are equivalent. The
| picture-string that is associated with these types must have the form S9(i)V9(d)
| (or S9...9V9...9, with i and d instances of 9) or S9(i)V.
| 3. The picture-string that is associated with SIGN LEADING SEPARATE must have
| the form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9, or S9...9V,
| with i instances of 9).
| 4. dimension must be an integer constant between 1 and 32767.
| Character host variable arrays: The three valid forms of character host variable
| arrays are:
| v Fixed-length character strings
| v Varying-length character strings
| v CLOBs
| The following figures show the syntax for forms other than CLOBs. See Figure 79
| on page 189 for the syntax of CLOBs.
| Figure 75 shows the syntax for declarations of fixed-length character string arrays.

level-1 variable-name { PICTURE | PIC } [IS] picture-string [ [ USAGE [IS] ] DISPLAY ]
   OCCURS dimension [TIMES] [ VALUE [IS] character-constant ] .
| Notes:
| 1. level-1 indicates a COBOL level between 2 and 48.
| 2. The picture-string that is associated with these forms must be X(m) (or XX...X,
| with m instances of X), with 1 <= m <= 32767 for fixed-length strings. However,
| the maximum length of the CHAR data type (fixed-length character string) in
| DB2 is 255 bytes.
| 3. dimension must be an integer constant between 1 and 32767.
| Figure 76 shows the syntax for declarations of varying-length character string
| arrays.

level-1 variable-name OCCURS dimension [TIMES] .
   49 var-1 { PICTURE | PIC } [IS] { S9(4) | S9999 }
      [ USAGE [IS] ] { BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5
                     | COMPUTATIONAL | COMP }
      [ SYNCHRONIZED | SYNC ] [ VALUE [IS] numeric-constant ] .
   49 var-2 { PICTURE | PIC } [IS] picture-string [ [ USAGE [IS] ] DISPLAY ]
      [ VALUE [IS] character-constant ] .
| Notes:
| 1. level-1 indicates a COBOL level between 2 and 48.
| 2. DB2 uses the full length of the S9(4) BINARY variable even though COBOL with
| TRUNC(STD) recognizes only values up to 9999. This can cause data
| truncation errors when COBOL statements execute and might effectively limit
| the maximum length of variable-length character strings to 9999. Consider using
| the TRUNC(BIN) compiler option or USAGE COMP-5 to avoid data truncation.
| 3. The picture-string that is associated with these forms must be X(m) (or XX...X,
| with m instances of X), with 1 <= m <= 32767 for fixed-length strings; for other
| strings, m cannot be greater than the maximum size of a varying-length
| character string.
| 4. You cannot directly reference var-1 and var-2 as host variable arrays.
| 5. You cannot use an intervening REDEFINE at level 49.
| 6. dimension must be an integer constant between 1 and 32767.
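An array declaration that follows this pattern might look like the following (illustrative names and sizes, not from the manual):

```cobol
01  NAME-ARRAY OCCURS 10 TIMES.
    49  NAME-LEN   PIC S9(4) USAGE COMP-5 SYNC.
    49  NAME-TEXT  PIC X(40).
```

Such an array could serve as the target of a multiple-row FETCH of up to 10 VARCHAR(40) values; as with scalar varying-length variables, you reference :NAME-ARRAY rather than the 49-level items.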
| Graphic character host variable arrays: The three valid forms for graphic
| character host variable arrays are:
| v Fixed-length strings
| v Varying-length strings
| v DBCLOBs
| The following figures show the syntax for forms other than DBCLOBs. See
| Figure 79 on page 189 for the syntax of DBCLOBs.
| Figure 77 shows the syntax for declarations of fixed-length graphic string arrays.

level-1 variable-name { PICTURE | PIC } [IS] picture-string
   [ USAGE [IS] ] { DISPLAY-1 | NATIONAL }
   OCCURS dimension [TIMES] [ VALUE [IS] graphic-constant ] .
| Notes:
| 1. level-1 indicates a COBOL level between 2 and 48.
| 2. For fixed-length strings, the picture-string is G(m) or N(m) (or GG...G or
| NN...N, with m instances of G or N), with 1 <= m <= 127; for other strings, m
| cannot be greater than the maximum size of a varying-length graphic string.
| 3. Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string for
| USAGE NATIONAL, you must use N in place of G. USAGE NATIONAL is
| supported only through the SQL statement coprocessor.
| 4. dimension must be an integer constant between 1 and 32767.
| Figure 78 on page 188 shows the syntax for declarations of varying-length graphic
| string arrays.

level-1 variable-name OCCURS dimension [TIMES] .
   49 var-1 { PICTURE | PIC } [IS] { S9(4) | S9999 }
      [ USAGE [IS] ] { BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5
                     | COMPUTATIONAL | COMP }
      [ SYNCHRONIZED | SYNC ] [ VALUE [IS] numeric-constant ] .
   49 var-2 { PICTURE | PIC } [IS] picture-string [ USAGE [IS] ] { DISPLAY-1 | NATIONAL }
      [ VALUE [IS] graphic-constant ] .
| Notes:
| 1. level-1 indicates a COBOL level between 2 and 48.
| 2. DB2 uses the full length of the S9(4) BINARY variable even though COBOL with
| TRUNC(STD) recognizes only values up to 9999. This can cause data
| truncation errors when COBOL statements execute and might effectively limit
| the maximum length of variable-length character strings to 9999. Consider using
| the TRUNC(BIN) compiler option or USAGE COMP-5 to avoid data truncation.
| 3. For fixed-length strings, the picture-string is G(m) or N(m) (or GG...G or
| NN...N, with m instances of G or N), with 1 <= m <= 127; for other strings, m
| cannot be greater than the maximum size of a varying-length graphic string.
| 4. Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string for
| USAGE NATIONAL, you must use N in place of G. USAGE NATIONAL is
| supported only through the SQL statement coprocessor.
| 5. You cannot directly reference var-1 and var-2 as host variable arrays.
| 6. dimension must be an integer constant between 1 and 32767.
| LOB variable arrays and locators: Figure 79 on page 189 shows the syntax for
| declarations of BLOB, CLOB, and DBCLOB host variable arrays and locators. See
| Chapter 14, “Programming for large objects (LOBs),” on page 281 for a discussion
| of how to use LOB variables.
|
| Notes:
| 1. level-1 indicates a COBOL level between 2 and 48.
| 2. dimension must be an integer constant between 1 and 32767.
| ROWIDs: Figure 80 shows the syntax for declarations of ROWID variable arrays.
| See Chapter 14, “Programming for large objects (LOBs),” on page 281 for a
| discussion of how to use these host variables.
|
| Notes:
| 1. level-1 indicates a COBOL level between 2 and 48.
| 2. dimension must be an integer constant between 1 and 32767.
A host structure name can be a group name whose subordinate levels name
elementary data items. In the following example, B is the name of a host structure
consisting of the elementary items C1 and C2.
01 A
   02 B
      03 C1 PICTURE ...
      03 C2 PICTURE ...
When you write an SQL statement using a qualified host variable name (perhaps to
identify a field within a structure), use the name of the structure followed by a
period and the name of the field. For example, specify B.C1 rather than C1 OF B or
C1 IN B.
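For example, with the structure that is shown above, an embedded statement can qualify the field names as follows (the table and column names in this sketch are illustrative):

   EXEC SQL
     SELECT DEPTNO, DEPTNAME
     INTO :B.C1, :B.C2
     FROM DSN8810.DEPT
   END-EXEC.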
The precompiler does not recognize host variables or host structures on any
subordinate levels after one of these items:
v A COBOL item that must begin in area A
v Any SQL statement (except SQL INCLUDE)
v Any SQL statement within an included member
When the precompiler encounters one of the preceding items in a host structure, it
considers the structure to be complete.
level-1 variable-name.
Figure 82 shows the syntax for numeric-usage items that are used within
declarations of host structures.
(Diagram shown in outline form: | separates alternatives; bracketed items are optional.)

[USAGE [IS]] COMPUTATIONAL-1 | COMP-1 | COMPUTATIONAL-2 | COMP-2 [VALUE [IS] constant]
Figure 83 on page 191 shows the syntax for integer and decimal usage items that
are used within declarations of host structures.
(Diagram shown in outline form: | separates alternatives; bracketed items are optional.)

[USAGE [IS]] BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP | PACKED-DECIMAL | COMPUTATIONAL-3 | COMP-3 | DISPLAY SIGN [IS] LEADING SEPARATE [CHARACTER] [VALUE [IS] constant]
Figure 84 shows the syntax for CHAR inner variables that are used within
declarations of host structures.
(Diagram shown in outline form: bracketed items are optional.)

PICTURE [IS] picture-string [[USAGE [IS]] DISPLAY] [VALUE [IS] constant]
Figure 85 on page 192 shows the syntax for VARCHAR inner variables that are
used within declarations of host structures.
(Diagram shown in outline form: | separates alternatives; bracketed items are optional.)

49 var-2 PICTURE [IS] S9(4) | S9999 [USAGE [IS]] BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP [VALUE [IS] numeric-constant].
49 var-3 PICTURE [IS] picture-string [[USAGE [IS]] DISPLAY] [VALUE [IS] character-constant].
Figure 86 on page 193 shows the syntax for VARGRAPHIC inner variables that are
used within declarations of host structures.
(Diagram shown in outline form: | separates alternatives; bracketed items are optional.)

49 var-4 PICTURE [IS] S9(4) | S9999 [USAGE [IS]] BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP [VALUE [IS] numeric-constant].
49 var-5 PICTURE [IS] picture-string [USAGE [IS]] DISPLAY-1 | NATIONAL [VALUE [IS] graphic-constant].
Notes:
1. For fixed-length strings, the picture-string is G(m) or N(m) (or, m instances of
GG...G or NN...N), with 1 <= m <= 127; for other strings, m cannot be greater
than the maximum size of a varying-length graphic string.
2. Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string for
USAGE NATIONAL, you must use N in place of G. USAGE NATIONAL is
supported only through the SQL statement coprocessor.
Figure 87 shows the syntax for LOB variables and locators that are used within
declarations of host structures.
Notes:
1. level-1 indicates a COBOL level between 1 and 47.
2. level-2 indicates a COBOL level between 2 and 48.
3. For elements within a structure, use any level 02 through 48 (rather than 01 or
77), up to a maximum of two levels.
4. Using a FILLER or optional FILLER item within a host structure declaration can
invalidate the whole structure.
Table 16. SQL data types the precompiler uses for COBOL declarations (continued)

COBOL data type: SQL TYPE IS ROWID
   SQLTYPE of host variable: 904
   SQLLEN of host variable: 40
   SQL data type: ROWID

Notes:
1. Do not use this data type as a column type.
2. m is the number of double-byte characters.
Table 17 helps you define host variables that receive output from the database. You
can use the table to determine the COBOL data type that is equivalent to a given
SQL data type. For example, if you retrieve TIMESTAMP data, you can use the
table to define a suitable host variable in the program that receives the data value.
Table 17 shows direct conversions between DB2 data types and host data types.
However, a number of DB2 data types are compatible. When you do assignments
or comparisons of data that have compatible data types, DB2 does conversions
between those compatible data types. See Table 1 on page 5 for information on
compatible data types.
Table 17. SQL data types mapped to typical COBOL declarations

SMALLINT
   COBOL data type: S9(4) COMP-4, S9(4) COMP-5, S9(4) COMP, or S9(4) BINARY

INTEGER
   COBOL data type: S9(9) COMP-4, S9(9) COMP-5, S9(9) COMP, or S9(9) BINARY

DECIMAL(p,s) or NUMERIC(p,s)
   COBOL data type: S9(p-s)V9(s) COMP-3, S9(p-s)V9(s) PACKED-DECIMAL, or S9(p-s)V9(s) DISPLAY SIGN LEADING SEPARATE
   Notes: p is precision; s is scale. 0<=s<=p<=31. If s=0, use S9(p)V or S9(p). If s=p, use SV9(s). If the COBOL compiler does not support 31-digit decimal numbers, no exact equivalent exists; use COMP-2.

REAL or FLOAT(n)
   COBOL data type: COMP-1
   Notes: 1<=n<=21

VARGRAPHIC(n)
   COBOL data type: Varying-length graphic string. For example:
      01 VAR-NAME.
         49 VAR-LEN  PIC S9(4) USAGE BINARY.
         49 VAR-TEXT PIC G(n) USAGE IS DISPLAY-1.
   Notes: n refers to the number of double-byte characters, not to the number of bytes. The inner variables must have a level of 49.

DATE
   COBOL data type: Fixed-length character string of length n. For example: 01 VAR-NAME PIC X(n).
   Notes: If you are using a date exit routine, n is determined by that routine. Otherwise, n must be at least 10.

TIME
   COBOL data type: Fixed-length character string of length n. For example: 01 VAR-NAME PIC X(n).
   Notes: If you are using a time exit routine, n is determined by that routine. Otherwise, n must be at least 6; to include seconds, n must be at least 8.

TIMESTAMP
   COBOL data type: Fixed-length character string of length n. For example: 01 VAR-NAME PIC X(n).
   Notes: n must be at least 19. To include microseconds, n must be 26; if n is less than 26, truncation occurs on the microseconds part.

Result set locator
   COBOL data type: SQL TYPE IS RESULT-SET-LOCATOR
   Notes: Use this data type only for receiving result sets. Do not use this data type as a column type.

Table locator
   COBOL data type: SQL TYPE IS TABLE LIKE table-name AS LOCATOR
   Notes: Use this data type only in a user-defined function or stored procedure to receive rows of a transition table. Do not use this data type as a column type.

BLOB locator
   COBOL data type: USAGE IS SQL TYPE IS BLOB-LOCATOR
   Notes: Use this data type only to manipulate data in BLOB columns. Do not use this data type as a column type.

CLOB locator
   COBOL data type: USAGE IS SQL TYPE IS CLOB-LOCATOR
   Notes: Use this data type only to manipulate data in CLOB columns. Do not use this data type as a column type.

DBCLOB locator
   COBOL data type: USAGE IS SQL TYPE IS DBCLOB-LOCATOR
   Notes: Use this data type only to manipulate data in DBCLOB columns. Do not use this data type as a column type.

BLOB(n)
   COBOL data type: USAGE IS SQL TYPE IS BLOB(n)
   Notes: 1≤n≤2147483647

CLOB(n)
   COBOL data type: USAGE IS SQL TYPE IS CLOB(n)
   Notes: 1≤n≤2147483647

DBCLOB(n)
   COBOL data type: USAGE IS SQL TYPE IS DBCLOB(n)
   Notes: n is the number of double-byte characters. 1≤n≤1073741823
Controlling the CCSID: IBM Enterprise COBOL for z/OS Version 3 Release 2 or
later, and the SQL statement coprocessor for the COBOL compiler, support:
v The NATIONAL data type that is used for declaring Unicode values in the
UTF-16 format (that is, CCSID 1200)
v The COBOL CODEPAGE compiler option that is used to specify the default
EBCDIC CCSID of character data items
You can use the NATIONAL data type and the CODEPAGE compiler option to
control the CCSID of the character host variables in your application.
For example, if you declare the host variable HV1 as USAGE NATIONAL, then DB2
handles HV1 as if you had used this DECLARE VARIABLE statement:
DECLARE :HV1 VARIABLE CCSID 1200
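For example, the following declaration (an illustrative sketch; the variable name and length are arbitrary) defines a host variable that DB2 treats as CCSID 1200 data:

   01 HV1 PIC N(10) USAGE NATIONAL.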
In addition, the COBOL SQL statement coprocessor uses the CCSID that is
specified in the CODEPAGE compiler option to indicate that all host variables of
character data type, other than NATIONAL, are specified with that CCSID unless
they are explicitly overridden by a DECLARE VARIABLE statement.
SQL data types with no COBOL equivalent: If you are using a COBOL compiler
that does not support decimal numbers of more than 18 digits, use one of the
following data types to hold values of more than 18 digits:
v A decimal variable with a precision less than or equal to 18, if the actual data
values fit. If you retrieve a decimal value into a decimal variable with a scale that
is less than the source column in the database, the fractional part of the value
might be truncated.
v An integer or a floating-point variable, which converts the value. If you choose
integer, you lose the fractional part of the number. If the decimal number might
exceed the maximum value for an integer, or if you want to preserve a fractional
value, you can use floating-point numbers. Floating-point numbers are
approximations of real numbers. Therefore, when you assign a decimal number
to a floating-point variable, the result might be different from the original number.
v A character-string host variable. Use the CHAR function to retrieve a decimal
value into it.
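For example, the following sketch retrieves a decimal column into a character host variable; the column name, table name, and host variable length are illustrative:

   01 DEC-CHAR PIC X(22).
   ...
   EXEC SQL
     SELECT CHAR(DECCOL)
     INTO :DEC-CHAR
     FROM T1
   END-EXEC.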
Special purpose COBOL data types: The locator data types are COBOL data
types and SQL data types. You cannot use locators as column types. For
information on how to use these data types, see the following sections:
Level 77 data description entries: One or more REDEFINES entries can follow
any level 77 data description entry. However, you cannot use the names in these
entries in SQL statements. Entries with the name FILLER are ignored.
SMALLINT and INTEGER data types: In COBOL, you declare the SMALLINT
and INTEGER data types as a number of decimal digits. DB2 uses the full size of
the integers (in a way that is similar to processing with the TRUNC(BIN) compiler
option) and can place larger values in the host variable than would be allowed in
the specified number of digits in the COBOL declaration. If you compile with
TRUNC(OPT) or TRUNC(STD), ensure that the size of numbers in your application
is within the declared number of digits.
For small integers that can exceed 9999, use S9(4) COMP-5 or compile with
TRUNC(BIN). For large integers that can exceed 999 999 999, use S9(10) COMP-3
to obtain the decimal data type. If you use COBOL for integers that exceed the
COBOL PICTURE, specify the column as decimal to ensure that the data types
match and perform well.
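For example, the following declaration (an illustrative sketch) lets a host variable hold the full range of a DB2 SMALLINT value regardless of the TRUNC compiler option:

   01 SMALLINT-HV PIC S9(4) USAGE COMP-5.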
Similarly, retrieving a column value with DECIMAL data type into a COBOL decimal
variable with a lower precision might truncate the value.
v Character data types are partially compatible with CLOB locators. You can
perform the following assignments:
– Assign a value in a CLOB locator to a CHAR or VARCHAR column
– Use a SELECT INTO statement to assign a CHAR or VARCHAR column to a
CLOB locator host variable.
– Assign a CHAR or VARCHAR output parameter from a user-defined function
or stored procedure to a CLOB locator host variable.
– Use a SET assignment statement to assign a CHAR or VARCHAR transition
variable to a CLOB locator host variable.
– Use a VALUES INTO statement to assign a CHAR or VARCHAR function
parameter to a CLOB locator host variable.
However, you cannot use a FETCH statement to assign a value in a CHAR or
VARCHAR column to a CLOB locator host variable.
v Graphic data types are compatible with each other. A GRAPHIC, VARGRAPHIC,
or DBCLOB column is compatible with a fixed-length or varying-length COBOL
graphic string host variable.
v Graphic data types are partially compatible with DBCLOB locators. You can
perform the following assignments:
– Assign a value in a DBCLOB locator to a GRAPHIC or VARGRAPHIC column
– Use a SELECT INTO statement to assign a GRAPHIC or VARGRAPHIC
column to a DBCLOB locator host variable.
– Assign a GRAPHIC or VARGRAPHIC output parameter from a user-defined
function or stored procedure to a DBCLOB locator host variable.
– Use a SET assignment statement to assign a GRAPHIC or VARGRAPHIC
transition variable to a DBCLOB locator host variable.
– Use a VALUES INTO statement to assign a GRAPHIC or VARGRAPHIC
function parameter to a DBCLOB locator host variable.
However, you cannot use a FETCH statement to assign a value in a GRAPHIC
or VARGRAPHIC column to a DBCLOB locator host variable.
v Datetime data types are compatible with character host variables. A DATE, TIME,
or TIMESTAMP column is compatible with a fixed-length or varying length
COBOL character host variable.
v A BLOB column or a BLOB locator is compatible only with a BLOB host variable.
v The ROWID column is compatible only with a ROWID host variable.
v A host variable is compatible with a distinct type if the host variable type is
compatible with the source type of the distinct type. For information on assigning
and comparing distinct types, see Chapter 16, “Creating and using distinct types,”
on page 349.
Using indicator variables: If you provide an indicator variable for the variable X,
when DB2 retrieves a null value for X, it puts a negative value in the indicator
variable and does not update X. Your program should check the indicator variable
before using X. If the indicator variable is negative, you know that X is null and any
value you find in X is irrelevant.
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to the
column and ignores any value in X. For more information about indicator variables,
see “Using indicator variables with host variables” on page 75.
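For example, the following sketch tests an indicator variable after a singleton SELECT; the host variable names and the paragraph that handles the null case are illustrative (PHONENO in the sample table DSN8810.EMP allows nulls):

   EXEC SQL
     SELECT PHONENO
     INTO :X :X-IND
     FROM DSN8810.EMP
     WHERE EMPNO = :EMP-NO
   END-EXEC.
   IF X-IND < 0
     PERFORM HANDLE-NULL-PHONE.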
| Using indicator variable arrays: When you retrieve data into a host variable array,
| if a value in its indicator array is negative, you can disregard the contents of the
| corresponding element in the host variable array. For more information about
| indicator variable arrays, see “Using indicator variable arrays with host variable
| arrays” on page 79.
Declaring indicator variables: You declare indicator variables in the same way as
host variables. You can mix the declarations of the two types of variables in any
way that seems appropriate. You can define indicator variables as scalar variables,
as array elements within a structure, or as an array variable by using a single-level
OCCURS clause.
Example: The following example shows a FETCH statement with the declarations
of the host variables that are needed for the FETCH statement:
EXEC SQL FETCH CLS_CURSOR INTO :CLS-CD,
:DAY :DAY-IND,
:BGN :BGN-IND,
:END :END-IND
END-EXEC.
(Diagram shown in outline form: | separates alternatives; bracketed items are optional.)

01 | 77 variable-name PICTURE [IS] S9(4) | S9999 [USAGE [IS]] BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP [VALUE [IS] constant].
Declaring indicator variable arrays: Figure 89 shows the syntax for valid indicator
array declarations.
(Diagram shown in outline form: | separates alternatives; bracketed items are optional.)

level-1 variable-name PICTURE [IS] S9(4) | S9999 [USAGE [IS]] BINARY OCCURS dimension TIMES.
Notes:
1. level-1 must be an integer between 2 and 48.
2. dimension must be an integer constant between 1 and 32767.
| You can also use the MESSAGE_TEXT condition item field of the GET
| DIAGNOSTICS statement to convert an SQL return code into a text message.
| Programs that require long token message support should code the GET
| DIAGNOSTICS statement instead of DSNTIAR. For more information about GET
| DIAGNOSTICS, see “Using the GET DIAGNOSTICS statement” on page 84.
DSNTIAR syntax
CALL 'DSNTIAR' USING sqlca message lrecl.

For example:
 01 ERROR-MESSAGE.
    02 ERROR-LEN  PIC S9(4) COMP VALUE +1320.
    02 ERROR-TEXT PIC X(132) OCCURS 10 TIMES
       INDEXED BY ERROR-INDEX.
 77 ERROR-TEXT-LEN PIC S9(9) COMP VALUE +132.
 .
 .
 .
 CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN.
CICS
If you call DSNTIAR dynamically from a CICS COBOL application program, be
sure you do the following:
v Compile the COBOL application with the NODYNAM option.
v Define DSNTIAR in the CSD.
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL ’DSNTIAC’ USING eib commarea sqlca msg lrecl.
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
eib EXEC interface block
commarea communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
Where to place SQL statements in your application: A COBOL source data set
or member can contain the following elements:
v Multiple programs
v Multiple class definitions, each of which contains multiple methods
You can put SQL statements in only the first program or class in the source data
set or member. However, you can put SQL statements in multiple methods within a
class. If an application consists of multiple data sets or members, each of the data
sets or members can contain SQL statements.
Where to place the SQLCA, SQLDA, and host variable declarations: You can
put the SQLCA, SQLDA, and SQL host variable declarations in the
WORKING-STORAGE SECTION of a program, class, or method. An SQLCA or
SQLDA in a class WORKING-STORAGE SECTION is global for all the methods of
the class. An SQLCA or SQLDA in a method WORKING-STORAGE SECTION is
local to that method only.
If a class and a method within the class both contain an SQLCA or SQLDA, the
method uses the SQLCA or SQLDA that is local.
Rules for host variables: You can declare COBOL variables that are used as host
variables in the WORKING-STORAGE SECTION or LINKAGE-SECTION of a
program, class, or method. You can also declare host variables in the
LOCAL-STORAGE SECTION of a method. The scope of a host variable is the
method, class, or program within which it is defined.
DB2 sets the SQLCOD and SQLSTA (or SQLSTATE) values after each SQL
statement executes. An application can check these values to determine whether
the last SQL statement was successful. All SQL statements in the program must be
within the scope of the declaration of the SQLCOD and SQLSTA (or SQLSTATE)
variables.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLCA fields.
Unlike the SQLCA, a program can have more than one SQLDA, and an SQLDA
can have any valid name. DB2 does not support the INCLUDE SQLDA statement
for Fortran programs. If present, an error message results.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor.
You can code SQL statements in a Fortran program wherever you can place
executable statements. If the SQL statement is within an IF statement, the
precompiler generates any necessary THEN and END IF statements.
Each SQL statement in a Fortran program must begin with EXEC SQL. The EXEC
and SQL keywords must appear on one line, but the remainder of the statement
can appear on subsequent lines.
You cannot follow an SQL statement with another SQL statement or Fortran
statement on the same line.
Fortran does not require blanks to delimit words within a statement, but the SQL
language requires blanks. The rules for embedded SQL follow the rules for SQL
syntax, which require you to use one or more blanks as a delimiter.
Comments: You can include Fortran comment lines within embedded SQL
statements wherever you can use a blank, except between the keywords EXEC and
| SQL. You can include SQL comments in any embedded SQL statement.
The DB2 precompiler does not support the exclamation point (!) as a comment
recognition character in Fortran programs.
Continuation for SQL statements: The line continuation rules for SQL statements
are the same as those for Fortran statements, except that you must specify EXEC
SQL on one line. The SQL examples in this section have Cs in the sixth column to
indicate that they are continuations of EXEC SQL.
Declaring tables and views: Your Fortran program should also include the
DECLARE TABLE statement to describe each table and view the program
accesses.
You can use a Fortran character variable in the statements PREPARE and
EXECUTE IMMEDIATE, even if it is fixed-length.
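For example, the following sketch executes a dynamically built statement from a fixed-length character variable; the statement text and table name are illustrative:

      CHARACTER*60 STMT
      STMT = 'DELETE FROM MYTABLE WHERE STATUS = ''I'''
      EXEC SQL
     C    EXECUTE IMMEDIATE :STMT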
You cannot nest SQL INCLUDE statements. You cannot use the Fortran INCLUDE
compiler directive to include SQL statements or Fortran host variable declarations.
Margins: Code the SQL statements between columns 7 through 72, inclusive. If
EXEC SQL starts before the specified left margin, the DB2 precompiler does not
recognize the SQL statement.
Names: You can use any valid Fortran name for a host variable. Do not use
external entry names that begin with 'DSN' or host variable names that begin with
'SQL'. These names are reserved for DB2.
Do not use the word DEBUG, except when defining a Fortran DEBUG packet. Do
not use the words FUNCTION, IMPLICIT, PROGRAM, and SUBROUTINE to define
variables.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers.
Statement labels: You can specify statement numbers for SQL statements in
columns 1 to 5. However, during program preparation, a labeled SQL statement
generates a Fortran CONTINUE statement with that label before it generates the
code that executes the SQL statement. Therefore, a labeled SQL statement should
never be the last statement in a DO loop. In addition, you should not label SQL
statements (such as INCLUDE and BEGIN DECLARE SECTION) that occur before
the first executable SQL statement, because an error might occur.
WHENEVER statement: The target for the GOTO clause in the SQL WHENEVER
statement must be a label in the Fortran source code and must refer to a statement
in the same subprogram. The WHENEVER statement only applies to SQL
statements in the same subprogram.
DB2 supports Version 3 Release 1 (or later) of VS Fortran with the following
restrictions:
v The parallel option is not supported. Applications that contain SQL statements
must not use Fortran parallelism.
v You cannot use the byte data type within embedded SQL, because byte is not a
recognizable host data type.
You can precede Fortran statements that define the host variables with a BEGIN
DECLARE SECTION statement and follow the statements with an END DECLARE
SECTION statement. You must use the BEGIN DECLARE SECTION and END
DECLARE SECTION statements when you use the precompiler option
STDSQL(YES).
The names of host variables should be unique within the program, even if the host
variables are in different blocks, functions, or subroutines.
When you declare a character host variable, you must not use an expression to
define the length of the character variable. You can use a character host variable
with an undefined length (for example, CHARACTER *(*)). The length of any such
variable is determined when its associated SQL statement executes.
An SQL statement that uses a host variable must be within the scope of the
statement that declares the variable.
Be careful when calling subroutines that might change the attributes of a host
variable. Such alteration can cause an error while the program is running. See
Appendix C of DB2 SQL Reference for more information.
Numeric host variables: Figure 90 shows the syntax for declarations of numeric
host variables.
(Diagram shown in outline form: | separates alternatives; bracketed items are optional.)

INTEGER*2 | INTEGER[*4] | REAL[*4] | REAL*8 | DOUBLE PRECISION variable-name [/numeric-constant/]
Character host variables: Figure 91 shows the syntax for declarations of character
host variables other than CLOBs. See Figure 93 on page 208 for the syntax of
CLOBs.
(Diagram shown in outline form: bracketed items are optional.)

CHARACTER[*n] variable-name[*n] [/character-constant/]
Result set locators: Figure 92 shows the syntax for declarations of result set
locators. See Chapter 25, “Using stored procedures for client/server processing,” on
page 569 for a discussion of how to use these host variables.
LOB Variables and Locators: Figure 93 on page 208 shows the syntax for
declarations of BLOB and CLOB host variables and locators. See Chapter 14,
“Programming for large objects (LOBs),” on page 281 for a discussion of how to
use these host variables.
ROWIDs: Figure 94 shows the syntax for declarations of ROWID variables. See
Chapter 14, “Programming for large objects (LOBs),” on page 281 for a discussion
of how to use these host variables.
Table 19 on page 209 helps you define host variables that receive output from the
database. You can use the table to determine the Fortran data type that is
equivalent to a given SQL data type. For example, if you retrieve TIMESTAMP data,
you can use the table to define a suitable host variable in the program that receives
the data value.
Table 19 shows direct conversions between DB2 data types and host data types.
However, a number of DB2 data types are compatible. When you do assignments
or comparisons of data that have compatible data types, DB2 does conversions
between those compatible data types. See Table 1 on page 5 for information on
compatible data types.
Table 19. SQL data types mapped to typical Fortran declarations

SMALLINT
   Fortran equivalent: INTEGER*2

INTEGER
   Fortran equivalent: INTEGER*4

DECIMAL(p,s) or NUMERIC(p,s)
   Fortran equivalent: no exact equivalent
   Notes: Use REAL*8.

FLOAT(n) single precision
   Fortran equivalent: REAL*4
   Notes: 1<=n<=21

FLOAT(n) double precision
   Fortran equivalent: REAL*8
   Notes: 22<=n<=53

CHAR(n)
   Fortran equivalent: CHARACTER*n
   Notes: 1<=n<=255

VARCHAR(n)
   Fortran equivalent: no exact equivalent
   Notes: Use a character host variable that is large enough to contain the largest expected VARCHAR value.

GRAPHIC(n)
   Fortran equivalent: not supported

VARGRAPHIC(n)
   Fortran equivalent: not supported

DATE
   Fortran equivalent: CHARACTER*n
   Notes: If you are using a date exit routine, n is determined by that routine; otherwise, n must be at least 10.

TIME
   Fortran equivalent: CHARACTER*n
   Notes: If you are using a time exit routine, n is determined by that routine. Otherwise, n must be at least 6; to include seconds, n must be at least 8.

TIMESTAMP
   Fortran equivalent: CHARACTER*n
   Notes: n must be at least 19. To include microseconds, n must be 26; if n is less than 26, truncation occurs on the microseconds part.

Result set locator
   Fortran equivalent: SQL TYPE IS RESULT_SET_LOCATOR
   Notes: Use this data type only for receiving result sets. Do not use this data type as a column type.

BLOB locator
   Fortran equivalent: SQL TYPE IS BLOB_LOCATOR
   Notes: Use this data type only to manipulate data in BLOB columns. Do not use this data type as a column type.

CLOB locator
   Fortran equivalent: SQL TYPE IS CLOB_LOCATOR
   Notes: Use this data type only to manipulate data in CLOB columns. Do not use this data type as a column type.

DBCLOB locator
   Fortran equivalent: not supported

BLOB(n)
   Fortran equivalent: SQL TYPE IS BLOB(n)
   Notes: 1≤n≤2147483647

CLOB(n)
   Fortran equivalent: SQL TYPE IS CLOB(n)
   Notes: 1≤n≤2147483647

DBCLOB(n)
   Fortran equivalent: not supported

ROWID
   Fortran equivalent: SQL TYPE IS ROWID
Fortran data types with no SQL equivalent: Fortran supports some data types
with no SQL equivalent (for example, REAL*16 and COMPLEX). In most cases, you
can use Fortran statements to convert between the unsupported data types and the
data types that SQL allows.
SQL data types with no Fortran equivalent: Fortran does not provide an
equivalent for the decimal data type. To hold the value of such a variable, you can
use:
v An integer or floating-point variable, which converts the value. If you choose
integer, however, you lose the fractional part of the number. If the decimal
number can exceed the maximum value for an integer or you want to preserve a
fractional value, you can use floating-point numbers. Floating-point numbers are
approximations of real numbers. When you assign a decimal number to a
floating-point variable, the result could be different from the original number.
v A character string host variable. Use the CHAR function to retrieve a decimal
value into it.
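For example, the following sketch retrieves a decimal column into a Fortran character host variable; the column name, table name, and variable length are illustrative:

      CHARACTER*22 CHARHV
      EXEC SQL
     C    SELECT CHAR(DECCOL) INTO :CHARHV FROM T1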
Special-purpose Fortran data types: The locator data types are Fortran data
types and SQL data types. You cannot use locators as column types. For
information on how to use these data types, see the following sections:
Result set locator
Chapter 25, “Using stored procedures for client/server processing,”
on page 569
LOB locators Chapter 14, “Programming for large objects (LOBs),” on page 281
Processing Unicode data: Because Fortran does not support graphic data types,
Fortran applications can process only Unicode tables that use UTF-8 encoding.
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to the
column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. For
more information about indicator variables, see “Using indicator variables with host
variables” on page 75.
Example: The following example shows a FETCH statement with the declarations
of the host variables that are needed for the FETCH statement:
EXEC SQL FETCH CLS_CURSOR INTO :CLSCD,
C :DAY :DAYIND,
C :BGN :BGNIND,
C :END :ENDIND
(Diagram shown in outline form: the bracketed item is optional.)

INTEGER*2 variable-name [/numeric-constant/]
| You can also use the MESSAGE_TEXT condition item field of the GET
| DIAGNOSTICS statement to convert an SQL return code into a text message.
| Programs that require long token message support should code the GET
| DIAGNOSTICS statement instead of DSNTIAR. For more information about GET
| DIAGNOSTICS, see “Using the GET DIAGNOSTICS statement” on page 84.
DSNTIR syntax
CALL DSNTIR ( error-length, message, return-code )
.
.
.
CALL DSNTIR ( ERRLEN, ERRTXT, ICODE )
where ERRLEN is the total length of the message output area, ERRTXT is the
name of the message output area, and ICODE is the return code.
return-code
Accepts a return code from DSNTIAR.
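The declarations for such a call might look like the following sketch; the message area size (ten 132-byte lines, 1320 bytes in total) is illustrative:

      INTEGER ERRLEN /1320/
      CHARACTER*132 ERRTXT(10)
      INTEGER ICODE

      CALL DSNTIR ( ERRLEN, ERRTXT, ICODE )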
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these values to determine whether the last SQL
statement was successful. All SQL statements in the program must be within the
scope of the declaration of the SQLCODE and SQLSTATE variables.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLCA fields.
You must declare an SQLDA before the first SQL statement that references that
data descriptor, unless you use the precompiler option TWOPASS. See Chapter 5
of DB2 SQL Reference for more information about the INCLUDE statement and
Appendix C of DB2 SQL Reference for a complete description of SQLDA fields.
You can code SQL statements in a PL/I program wherever you can use executable
statements.
Each SQL statement in a PL/I program must begin with EXEC SQL and end with a
semicolon (;). The EXEC and SQL keywords must appear on one line, but the
remainder of the statement can appear on subsequent lines.
Continuation for SQL statements: The line continuation rules for SQL statements
are the same as those for other PL/I statements, except that you must specify
EXEC SQL on one line.
Declaring tables and views: Your PL/I program should include a DECLARE
TABLE statement to describe each table and view the program accesses. You can
use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. For more information, see Chapter 8, “Generating declarations for your
tables using DCLGEN,” on page 121.
Including code: You can include SQL statements or PL/I host variable declarations
from a member of a partitioned data set by using the following SQL statement in the
source code where you want to include the statements:
EXEC SQL INCLUDE member-name;
You cannot nest SQL INCLUDE statements. Do not use the PL/I %INCLUDE
statement to include SQL statements or host variable DCL statements. You must
use the PL/I preprocessor to resolve any %INCLUDE statements before you use
the DB2 precompiler. Do not use PL/I preprocessor directives within SQL
statements.
Margins: Code SQL statements in columns 2 through 72, unless you have
specified other margins to the DB2 precompiler. If EXEC SQL starts before the
specified left margin, the DB2 precompiler does not recognize the SQL statement.
Names: You can use any valid PL/I name for a host variable. Do not use external
entry names or access plan names that begin with 'DSN', and do not use host
variable names that begin with 'SQL'. These names are reserved for DB2.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers. IEL0378I messages from the PL/I compiler
identify lines of code without sequence numbers. You can ignore these messages.
Statement labels: You can specify a statement label for executable SQL
statements. However, the INCLUDE text-file-name and END DECLARE SECTION
statements cannot have statement labels.
Whenever statement: The target for the GOTO clause in an SQL statement
WHENEVER must be a label in the PL/I source code and must be within the scope
of any SQL statements that WHENEVER affects.
v If you use graphic string constants or mixed data in dynamically prepared SQL
statements, and if your application requires the PL/I Version 2 (or later) compiler,
the dynamically prepared statements must use the PL/I mixed constant format.
– If you prepare the statement from a host variable, change the string
assignment to a PL/I mixed string.
– If you prepare the statement from a PL/I string, change that to a host variable,
and then change the string assignment to a PL/I mixed string.
Example:
SQLSTMT = ’SELECT <dbdb> FROM table-name’M;
EXEC SQL PREPARE STMT FROM :SQLSTMT;
| You can precede PL/I statements that define the host variables and host variable
| arrays with the BEGIN DECLARE SECTION statement, and follow the statements
| with the END DECLARE SECTION statement. You must use the BEGIN DECLARE
| SECTION and END DECLARE SECTION statements when you use the
| precompiler option STDSQL(YES).
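For example, with STDSQL(YES) in effect, the host variable declarations might be bracketed like this (a minimal sketch; the variable names and data types are illustrative):

```pli
EXEC SQL BEGIN DECLARE SECTION;
  DCL HVDEPT CHAR(3);                 /* illustrative host variable */
  DCL HVSALARY DEC FIXED(9,2);        /* illustrative host variable */
EXEC SQL END DECLARE SECTION;
```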
| A colon (:) must precede all host variables and host variable arrays in an SQL
| statement, with the following exception. If the SQL statement meets the following
| conditions, a host variable or host variable array in the SQL statement cannot be
| preceded by a colon:
| v The SQL statement is an EXECUTE IMMEDIATE or PREPARE statement.
| v The SQL statement is in a program that also contains a DECLARE VARIABLE
| statement.
| v The host variable is part of a string expression, but the host variable is not the
| only component of the string expression.
| The names of host variables and host variable arrays should be unique within the
| program, even if the variables and variable arrays are in different blocks or
| procedures. You can qualify the names with a structure name to make them unique.
| An SQL statement that uses a host variable or host variable array must be within
| the scope of the statement that declares that variable or array. You define host
| variable arrays for use with multiple-row FETCH and multiple-row INSERT
| statements.
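For example, a host variable array might be paired with a rowset cursor and a multiple-row FETCH like this (a sketch; the cursor name, rowset size, and array dimension are illustrative):

```pli
DCL EMPNOS(10) CHAR(6);              /* host variable array      */
DCL EMPNOS_IND(10) BIN FIXED(15);    /* indicator variable array */

EXEC SQL DECLARE C1 CURSOR WITH ROWSET POSITIONING FOR
         SELECT EMPNO FROM DSN8810.EMP;
EXEC SQL OPEN C1;
EXEC SQL FETCH NEXT ROWSET FROM C1 FOR 10 ROWS
         INTO :EMPNOS :EMPNOS_IND;
```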
The precompiler uses only the names and data attributes of the variables; it ignores
the alignment, scope, and storage attributes. Even though the precompiler ignores
alignment, scope, and storage, if you ignore the restrictions on their use, you might
have problems compiling the PL/I source code that the precompiler generates.
These restrictions are as follows:
v A declaration with the EXTERNAL scope attribute and the STATIC storage
attribute must also have the INITIAL storage attribute.
v If you use the BASED storage attribute, you must follow it with a PL/I
element-locator-expression.
Numeric host variables: Figure 96 shows the syntax for declarations of numeric
host variables.
DCL variable-name
    {BINARY|BIN|DECIMAL|DEC} {FIXED(precision[,scale]) | FLOAT(precision)}
    [Alignment and/or Scope and/or Storage] ;
Notes:
1. You can specify host variable attributes in any order that is acceptable to PL/I.
For example, BIN FIXED(31), BINARY FIXED(31), BIN(31) FIXED, and FIXED
BIN(31) are all acceptable.
2. You can specify a scale only for DECIMAL FIXED.
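For example, the following are all valid numeric host variable declarations (a brief illustration; the names are arbitrary):

```pli
DCL AGE BIN FIXED(15);               /* maps to SMALLINT     */
DCL TOTAL BIN FIXED(31);             /* maps to INTEGER      */
DCL SALARY DEC FIXED(9,2);           /* maps to DECIMAL(9,2) */
```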
Character host variables: Figure 97 shows the syntax for declarations of character
host variables, other than CLOBs. See Figure 101 on page 219 for the syntax of
CLOBs.
DCL variable-name {CHARACTER|CHAR}(length) [VARYING|VAR]
    [Alignment and/or Scope and/or Storage] ;
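For example (the names and lengths are illustrative):

```pli
DCL DEPTCODE CHAR(3);                /* maps to CHAR(3)     */
DCL FIRSTNAME CHAR(12) VARYING;      /* maps to VARCHAR(12) */
```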
Graphic host variables: Figure 98 shows the syntax for declarations of graphic
host variables, other than DBCLOBs. See Figure 101 on page 219 for the syntax of
DBCLOBs.
DCL variable-name GRAPHIC(length) [VARYING|VAR]
    [Alignment and/or Scope and/or Storage] ;
Result set locators: Figure 99 shows the syntax for declarations of result set
locators. See Chapter 25, “Using stored procedures for client/server processing,” on
page 569 for a discussion of how to use these host variables.
DCL variable-name SQL TYPE IS RESULT_SET_LOCATOR VARYING
    [Alignment and/or Scope and/or Storage] ;
Table locators: Figure 100 shows the syntax for declarations of table locators. See
“Accessing transition tables in a user-defined function or stored procedure” on page
328 for a discussion of how to use these host variables.
DCL variable-name SQL TYPE IS TABLE LIKE table-name AS LOCATOR ;
LOB variables and locators: Figure 101 shows the syntax for declarations of
BLOB, CLOB, and DBCLOB host variables and locators. See Chapter 14,
“Programming for large objects (LOBs),” on page 281 for a discussion of how to
use these host variables.
DCL variable-name SQL TYPE IS
    {BLOB|CLOB|DBCLOB}(length) | {BLOB_LOCATOR|CLOB_LOCATOR|DBCLOB_LOCATOR} ;
Note: Variable attributes such as STATIC and AUTOMATIC are ignored if specified
on a LOB variable declaration.
ROWIDs: Figure 102 shows the syntax for declarations of ROWID host variables.
See Chapter 14, “Programming for large objects (LOBs),” on page 281 for a
discussion of how to use these host variables.
DCL variable-name SQL TYPE IS ROWID ;
| The precompiler uses only the names and data attributes of the variable arrays; it
| ignores the alignment, scope, and storage attributes. Even though the precompiler
| ignores alignment, scope, and storage, if you ignore the restrictions on their use,
| you might have problems compiling the PL/I source code that the precompiler
| generates. These restrictions are as follows:
| v A declaration with the EXTERNAL scope attribute and the STATIC storage
| attribute must also have the INITIAL storage attribute.
| v If you use the BASED storage attribute, you must follow it with a PL/I
| element-locator-expression.
| v Host variables can have the STATIC, CONTROLLED, BASED, or AUTOMATIC
| storage class. However, CICS requires that programs be reentrant.
| Numeric host variable arrays: Figure 103 shows the syntax for declarations of
| numeric host variable arrays.
DCL variable-name(dimension)
    {BINARY|BIN|DECIMAL|DEC} {FIXED(precision[,scale]) | FLOAT(precision)}
    [Alignment and/or Scope and/or Storage] ;
| Notes:
| 1. You can specify host variable array attributes in any order that is acceptable to
| PL/I. For example, BIN FIXED(31), BINARY FIXED(31), BIN(31) FIXED, and
| FIXED BIN(31) are all acceptable.
| 2. You can specify the scale for only DECIMAL FIXED.
| 3. dimension must be an integer constant between 1 and 32767.
| Character host variable arrays: Figure 104 shows the syntax for declarations of
| character host variable arrays, other than CLOBs. See Figure 106 on page 222 for
| the syntax of CLOBs.
DCL variable-name(dimension) {CHARACTER|CHAR}(length) [VARYING|VAR]
    [Alignment and/or Scope and/or Storage] ;
| Notes:
| 1. dimension must be an integer constant between 1 and 32767.
| Graphic host variable arrays: Figure 105 on page 222 shows the syntax for
| declarations of graphic host variable arrays, other than DBCLOBs. See Figure 106
| on page 222 for the syntax of DBCLOBs.
DCL variable-name(dimension) GRAPHIC(length) [VARYING|VAR]
    [Alignment and/or Scope and/or Storage] ;
| Notes:
| 1. dimension must be an integer constant between 1 and 32767.
| LOB variable arrays and locators: Figure 106 shows the syntax for declarations
| of BLOB, CLOB, and DBCLOB host variable arrays and locators. See Chapter 14,
| “Programming for large objects (LOBs),” on page 281 for a discussion of how to
| use these host variables.
DCL variable-name(dimension) SQL TYPE IS
    {BLOB|CLOB|DBCLOB}(length) | {BLOB_LOCATOR|CLOB_LOCATOR|DBCLOB_LOCATOR} ;
| Notes:
| 1. dimension must be an integer constant between 1 and 32767.
| ROWIDs: Figure 107 on page 223 shows the syntax for declarations of ROWID
| variable arrays. See Chapter 14, “Programming for large objects (LOBs),” on page
| 281 for a discussion of how to use these host variables.
DCL variable-name(dimension) SQL TYPE IS ROWID ;
| Notes:
| 1. dimension must be an integer constant between 1 and 32767.
In this example, B is the name of a host structure consisting of the scalars C1 and
C2.
You can use the structure name as shorthand notation for a list of scalars. You can
qualify a host variable with a structure name (for example, STRUCTURE.FIELD).
Host structures are limited to two levels. You can think of a host structure for DB2
data as a named group of host variables.
You must terminate the host structure variable by ending the declaration with a
semicolon. For example:
DCL 1 A,
2 B CHAR,
2 (C, D) CHAR;
DCL (E, F) CHAR;
You can specify host variable attributes in any order that is acceptable to PL/I. For
example, BIN FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all acceptable.
Figure 108 on page 224 shows the syntax for declarations of host structures.
DCL 1 structure-name,
    2 var-1 data-type-specification,
    2 var-2 data-type-specification;
Figure 109 shows the syntax for data types that are used within declarations of host
structures.
{BINARY|BIN|DECIMAL|DEC} {FIXED(precision[,scale]) | FLOAT(precision)}
{CHARACTER|CHAR}(integer) [VARYING|VARY]
GRAPHIC(integer) [VARYING|VARY]
SQL TYPE IS ROWID
LOB data type
Figure 110 shows the syntax for LOB data types that are used within declarations of
host structures.
Table 20. SQL data types the precompiler uses for PL/I declarations

PL/I data type                               | SQLTYPE | SQLLEN                   | SQL data type
---------------------------------------------|---------|--------------------------|----------------------------------------
BIN FIXED(n), 1<=n<=15                       | 500     | 2                        | SMALLINT
BIN FIXED(n), 16<=n<=31                      | 496     | 4                        | INTEGER
DEC FIXED(p,s), 0<=p<=31 and 0<=s<=p (1)     | 484     | p in byte 1, s in byte 2 | DECIMAL(p,s)
BIN FLOAT(p), 1<=p<=21                       | 480     | 4                        | REAL or FLOAT(n), 1<=n<=21
BIN FLOAT(p), 22<=p<=53                      | 480     | 8                        | DOUBLE PRECISION or FLOAT(n), 22<=n<=53
DEC FLOAT(m), 1<=m<=6                        | 480     | 4                        | FLOAT (single precision)
DEC FLOAT(m), 7<=m<=16                       | 480     | 8                        | FLOAT (double precision)
CHAR(n)                                      | 452     | n                        | CHAR(n)
CHAR(n) VARYING, 1<=n<=255                   | 448     | n                        | VARCHAR(n)
CHAR(n) VARYING, n>255                       | 456     | n                        | VARCHAR(n)
GRAPHIC(n)                                   | 468     | n                        | GRAPHIC(n)
GRAPHIC(n) VARYING, 1<=n<=127                | 464     | n                        | VARGRAPHIC(n)
GRAPHIC(n) VARYING, n>127                    | 472     | n                        | VARGRAPHIC(n)
SQL TYPE IS RESULT_SET_LOCATOR               | 972     | 4                        | Result set locator (2)
SQL TYPE IS TABLE LIKE table-name AS LOCATOR | 976     | 4                        | Table locator (2)
SQL TYPE IS BLOB_LOCATOR                     | 960     | 4                        | BLOB locator (2)
SQL TYPE IS CLOB_LOCATOR                     | 964     | 4                        | CLOB locator (2)
SQL TYPE IS DBCLOB_LOCATOR                   | 968     | 4                        | DBCLOB locator (2)
SQL TYPE IS BLOB(n), 1<=n<=2147483647        | 404     | n                        | BLOB(n)
SQL TYPE IS CLOB(n), 1<=n<=2147483647        | 408     | n                        | CLOB(n)
SQL TYPE IS DBCLOB(n), 1<=n<=1073741823 (3)  | 412     | n                        | DBCLOB(n) (3)
SQL TYPE IS ROWID                            | 904     | 40                       | ROWID

Notes:
1. If p=0, DB2 interprets it as DECIMAL(31). For example, DB2 interprets a PL/I data type of DEC FIXED(0,0) to be
DECIMAL(31,0), which equates to the SQL data type of DECIMAL(31,0).
2. Do not use this data type as a column type.
3. n is the number of double-byte characters.
Table 21 on page 226 helps you define host variables that receive output from the
database. You can use the table to determine the PL/I data type that is equivalent
to a given SQL data type. For example, if you retrieve TIMESTAMP data, you can
use the table to define a suitable host variable in the program that receives the data
value.
Table 21 shows direct conversions between DB2 data types and host data types.
However, a number of DB2 data types are compatible. When you do assignments
or comparisons of data that have compatible data types, DB2 does conversions
between those compatible data types. See Table 1 on page 5 for information on
compatible data types.
Table 21. SQL data types mapped to typical PL/I declarations

SQL data type                | PL/I equivalent                          | Notes
-----------------------------|------------------------------------------|-------------------------------------------------
SMALLINT                     | BIN FIXED(n)                             | 1<=n<=15
INTEGER                      | BIN FIXED(n)                             | 16<=n<=31
DECIMAL(p,s) or NUMERIC(p,s) | If p<16: DEC FIXED(p) or DEC FIXED(p,s)  | p is precision; s is scale. 1<=p<=31 and 0<=s<=p
BLOB(n)                      | SQL TYPE IS BLOB(n)                      | 1<=n<=2147483647
CLOB(n)                      | SQL TYPE IS CLOB(n)                      | 1<=n<=2147483647
DBCLOB(n)                    | SQL TYPE IS DBCLOB(n)                    | n is the number of double-byte characters. 1<=n<=1073741823
ROWID                        | SQL TYPE IS ROWID                        |
PL/I data types with no SQL equivalent: PL/I supports some data types with no
SQL equivalent (COMPLEX and BIT variables, for example). In most cases, you
can use PL/I statements to convert between the unsupported PL/I data types and
the data types that SQL supports.
SQL data types with no PL/I equivalent: If the PL/I compiler you are using does
not support a decimal data type with a precision greater than 15, use the following
types of variables for decimal data:
v Decimal variables with precision less than or equal to 15, if the actual data
values fit. If you retrieve a decimal value into a decimal variable with a scale that
is less than the source column in the database, the fractional part of the value
might truncate.
v An integer or a floating-point variable, which converts the value. If you choose
integer, you lose the fractional part of the number. If the decimal number can
exceed the maximum value for an integer or you want to preserve a fractional
value, you can use floating-point numbers. Floating-point numbers are
approximations of real numbers. When you assign a decimal number to a
floating- point variable, the result could be different from the original number.
v A character string host variable. Use the CHAR function to retrieve a decimal
value into it.
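For example (a sketch; the table and column names are hypothetical):

```pli
DCL DECSTRING CHAR(33) VARYING;      /* receives the decimal value as a string */
EXEC SQL SELECT CHAR(BIGDECCOL)
         INTO :DECSTRING
         FROM T1;
```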
Special purpose PL/I data types: The locator data types are PL/I data types as
well as SQL data types. You cannot use locators as column types. For information
on how to use these data types, see the following sections:
Result set locator
Chapter 25, “Using stored procedures for client/server processing,”
on page 569
Table locator “Accessing transition tables in a user-defined function or stored
procedure” on page 328
LOB locators Chapter 14, “Programming for large objects (LOBs),” on page 281
Similarly, retrieving a column value with a DECIMAL data type into a PL/I decimal
variable with a lower precision might truncate the value.
PL/I scoping rules: The precompiler does not support PL/I scoping rules.
Using indicator variables: If you provide an indicator variable for the variable X,
when DB2 retrieves a null value for X, it puts a negative value in the indicator
variable and does not update X. Your program should check the indicator variable
before using X. If the indicator variable is negative, you know that X is null and any
value you find in X is irrelevant.
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to the
column and ignores any value in X.
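For example, the following sketch assigns a null value to the PHONENO column of the sample EMP table (the host variable names and the WHERE clause value are illustrative):

```pli
DCL HVPHONE CHAR(4);                 /* host variable                  */
DCL PHONE_IND BIN FIXED(15);         /* indicator variable             */

PHONE_IND = -1;                      /* negative value signals null    */
EXEC SQL UPDATE DSN8810.EMP
         SET PHONENO = :HVPHONE :PHONE_IND
         WHERE EMPNO = ’000200’;
```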
| Using indicator variable arrays: When you retrieve data into a host variable array,
| if a value in its indicator array is negative, you can disregard the contents of the
| corresponding element in the host variable array. For more information about
| indicator variable arrays, see “Using indicator variable arrays with host variable
| arrays” on page 79.
Declaring indicator variables: You declare indicator variables in the same way as
host variables. You can mix the declarations of the two types of variables in any
way that seems appropriate. For more information about indicator variables, see
“Using indicator variables with host variables” on page 75.
Example: The following example shows a FETCH statement with the declarations
of the host variables that are needed for the FETCH statement:
EXEC SQL FETCH CLS_CURSOR INTO :CLS_CD,
:DAY :DAY_IND,
:BGN :BGN_IND,
:END :END_IND;
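The host variables and indicator variables in this FETCH might be declared as follows (a sketch; the data types and lengths are assumptions, not taken from the sample application):

```pli
DCL CLS_CD CHAR(7);                  /* class code             */
DCL DAY BIN FIXED(15);               /* day number             */
DCL BGN CHAR(8);                     /* begin time             */
DCL END CHAR(8);                     /* end time               */
DCL (DAY_IND, BGN_IND, END_IND)
    BIN FIXED(15);                   /* indicator variables    */
```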
You can specify host variable attributes in any order that is acceptable to PL/I. For
example, BIN FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all acceptable.
Declaring indicator arrays: Figure 112 shows the syntax for declarations of
indicator arrays.
DCL variable-name(dimension) {BINARY|BIN} FIXED(15)
    [Alignment and/or Scope and/or Storage] ;
Notes:
1. dimension must be an integer constant between 1 and 32767.
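For example, an indicator array that parallels a 10-element host variable array could be declared like this (the name and dimension are illustrative):

```pli
DCL EMPNOS_IND(10) BIN FIXED(15);    /* one indicator per array element */
```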
| You can also use the MESSAGE_TEXT condition item field of the GET
| DIAGNOSTICS statement to convert an SQL return code into a text message.
| Programs that require long token message support should code the GET
| DIAGNOSTICS statement instead of DSNTIAR. For more information about GET
| DIAGNOSTICS, see “Using the GET DIAGNOSTICS statement” on page 84.
DSNTIAR syntax
CALL DSNTIAR ( sqlca, message, lrecl );
sqlca
An SQL communication area.
message
An output area, in VARCHAR format, in which DSNTIAR places the message
text. The first halfword contains the length of the remaining area; its minimum
value is 240.
The output lines of text, each line being the length specified in lrecl, are put into
this area. For example, you could specify the format of the output area as:
DCL DATA_LEN FIXED BIN(31) INIT(132);
DCL DATA_DIM FIXED BIN(31) INIT(10);
DCL 1 ERROR_MESSAGE AUTOMATIC,
    3 ERROR_LEN FIXED BIN(15) UNAL INIT((DATA_LEN*DATA_DIM)),
    3 ERROR_TEXT(DATA_DIM) CHAR(DATA_LEN);
.
.
.
CALL DSNTIAR ( SQLCA, ERROR_MESSAGE, DATA_LEN );
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC (eib, commarea, sqlca, msg, lrecl);
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
eib EXEC interface block
commarea
communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these variable values to determine whether the
last SQL statement was successful.
See Appendix C of DB2 SQL Reference for information on the fields in the REXX
SQLCA.
A REXX procedure can contain more than one SQLDA. Each SQLDA consists of a
set of REXX variables with a common stem. The stem must be a REXX variable
name that contains no periods and is the same as the value of descriptor-name that
you specify when you use the SQLDA in an SQL statement. DB2 does not support
the INCLUDE SQLDA statement in REXX.
See Appendix C of DB2 SQL Reference for information on the fields in a REXX
SQLDA.
[ADDRESS DSNREXX] ’CONNECT’ {’subsystem-ID’ | REXX-variable}
Note: CALL SQLDBS ’ATTACH TO’ ssid is equivalent to ADDRESS DSNREXX ’CONNECT’ ssid.
EXECSQL
Executes SQL statements in REXX procedures. The syntax of EXECSQL is:
[ADDRESS DSNREXX] EXECSQL {"SQL-statement" | REXX-variable}
Notes:
1. CALL SQLEXEC is equivalent to EXECSQL.
2. EXECSQL can be enclosed in single or double quotation marks.
See “Embedding SQL statements in a REXX procedure” on page 235 for more
information.
DISCONNECT
Disconnects the REXX procedure from a DB2 subsystem. You should execute
DISCONNECT to release resources that are held by DB2. The syntax of
DISCONNECT is:
[ADDRESS DSNREXX] ’DISCONNECT’
These application programming interfaces are available through the DSNREXX host
command environment. To make DSNREXX available to the application, invoke the
RXSUBCOM function. The syntax is:
RXSUBCOM(function, ’DSNREXX’, ’DSNREXX’)
where function is ’ADD’ or ’DELETE’.
The ADD function adds DSNREXX to the REXX host command environment table.
The DELETE function deletes DSNREXX from the REXX host command
environment table.
Figure 113 on page 235 shows an example of REXX code that makes DSNREXX
available to an application.
Each SQL statement in a REXX procedure must begin with EXECSQL, in either
upper-, lower-, or mixed-case. One of the following items must follow EXECSQL:
v An SQL statement enclosed in single or double quotation marks.
v A REXX variable that contains an SQL statement. The REXX variable must not
be preceded by a colon.
For example, you can use either of the following methods to execute the COMMIT
statement in a REXX procedure:
EXECSQL "COMMIT"
rexxvar="COMMIT"
EXECSQL rexxvar
An SQL statement follows rules that apply to REXX commands. The SQL statement
can optionally end with a semicolon and can be enclosed in single or double
quotation marks, as in the following example:
’EXECSQL COMMIT’;
Comments: You cannot include REXX comments (/* ... */) or SQL comments (--)
within SQL statements. However, you can include REXX comments anywhere else
in the procedure.
Continuation for SQL statements: SQL statements that span lines follow REXX
rules for statement continuation. You can break the statement into several strings,
each of which fits on a line, and separate the strings with commas or with
concatenation operators followed by commas. For example, either of the following
statements is valid:
EXECSQL ,
"UPDATE DSN8810.DEPT" ,
"SET MGRNO = ’000010’" ,
"WHERE DEPTNO = ’D11’"
"EXECSQL " || ,
" UPDATE DSN8810.DEPT " || ,
" SET MGRNO = ’000010’" || ,
" WHERE DEPTNO = ’D11’"
Including code: The EXECSQL INCLUDE statement is not valid for REXX. You
therefore cannot include externally defined SQL statements in a procedure.
Margins: Like REXX commands, SQL statements can begin and end anywhere on
a line.
Names: You can use any valid REXX name that does not end with a period as a
host variable. However, host variable names should not begin with ’SQL’, ’RDI’,
’DSN’, ’RXSQL’, or ’QRW’. Variable names can be at most 64 bytes.
Nulls: A REXX null value and an SQL null value are different. The REXX language
has a null string (a string of length 0) and a null clause (a clause that contains only
blanks and comments). The SQL null value is a special value that is distinct from all
nonnull values and denotes the absence of a value. Assigning a REXX null value to
a DB2 column does not make the column value null.
Statement labels: You can precede an SQL statement with a label, in the same
way that you label REXX commands.
Handling errors and warnings: DB2 does not support the SQL WHENEVER
statement in a REXX procedure. To handle SQL errors and warnings, use the
following methods:
v To test for SQL errors or warnings, test the SQLCODE or SQLSTATE value and
the SQLWARN. values after each EXECSQL call. This method does not detect
errors in the REXX interface to DB2.
v To test for SQL errors or warnings or errors or warnings from the REXX interface
to DB2, test the REXX RC variable after each EXECSQL call. Table 22 lists the
values of the RC variable.
You can also use the REXX SIGNAL ON ERROR and SIGNAL ON FAILURE
keyword instructions to detect negative values of the RC variable and transfer
control to an error routine.
Table 22. REXX return codes after SQL statements
Return code Meaning
0 No SQL warning or error occurred.
+1 An SQL warning occurred.
-1 An SQL error occurred.
| -3 The first token after ADDRESS DSNREXX is in error. For a description of
| the tokens allowed, see “Accessing the DB2 REXX Language Support
| application programming interfaces” on page 233.
Use only the predefined names for cursors and statements. When you associate a
cursor name with a statement name in a DECLARE CURSOR statement, the cursor
name and the statement name must have the same number. For example, if you
declare cursor c1, you must declare it for statement s1:
EXECSQL ’DECLARE C1 CURSOR FOR S1’
A REXX host variable can be a simple or compound variable. DB2 REXX Language
Support evaluates compound variables before DB2 processes SQL statements that
contain the variables. In the following example, the host variable that is passed to
DB2 is :x.1.2:
a=1
b=2
EXECSQL ’OPEN C1 USING :x.a.b’
When you assign input data to a DB2 table column, you can either let DB2
determine the type that your input data represents, or you can use an SQLDA to tell
DB2 the intended type of the input data.
The two SQLTYPE values that are listed for each data type are the value for a
column that does not accept null values and the value for a column that accepts
null values.
If you do not assign a value to a host variable before you assign the host variable
to a column, DB2 returns an error code.
Table 23. SQL input data types and REXX data formats
SQL data type | SQLTYPE assigned by DB2 | REXX input data format
INTEGER 496/497 A string of numerics that does not contain a decimal point or exponent
identifier. The first character can be a plus (+) or minus (−) sign. The
number that is represented must be between -2147483647 and
2147483647, inclusive.
DECIMAL(p,s) 484/485 One of the following formats:
v A string of numerics that contains a decimal point but no exponent
identifier. p represents the precision and s represents the scale of the
decimal number that the string represents. The first character can be a
plus (+) or minus (−) sign.
v A string of numerics that does not contain a decimal point or an exponent
identifier. The first character can be a plus (+) or minus (−) sign. The
number that is represented is less than -2147483647 or greater than
2147483647.
FLOAT 480/481 A string that represents a number in scientific notation. The string consists
of a series of numerics followed by an exponent identifier (an E or e
followed by an optional plus (+) or minus (−) sign and a series of numerics).
The string can begin with a plus (+) or minus (−) sign.
VARCHAR(n) 448/449 One of the following formats:
v A string of length n, enclosed in single or double quotation marks.
v The character X or x, followed by a string enclosed in single or double
quotation marks. The string within the quotation marks has a length of
2*n bytes and is the hexadecimal representation of a string of n
characters.
v A string of length n that does not have a numeric or graphic format, and
does not satisfy either of the previous conditions.
VARGRAPHIC(n) 464/465 One of the following formats:
v The character G, g, N, or n, followed by a string enclosed in single or
double quotation marks. The string within the quotation marks begins with
a shift-out character (X'0E') and ends with a shift-in character (X'0F').
Between the shift-out character and shift-in character are n double-byte
characters.
v The characters GX, Gx, gX, or gx, followed by a string enclosed in single
or double quotation marks. The string within the quotation marks has a
length of 4*n bytes and is the hexadecimal representation of a string of n
double-byte characters.
For example, when DB2 executes the following statements to update the MIDINIT
column of the EMP table, DB2 must determine a data type for HVMIDINIT:
SQLSTMT="UPDATE EMP" ,
"SET MIDINIT = ?" ,
"WHERE EMPNO = ’000200’"
"EXECSQL PREPARE S100 FROM :SQLSTMT"
HVMIDINIT=’H’
"EXECSQL EXECUTE S100 USING" ,
":HVMIDINIT"
Because the data that is assigned to HVMIDINIT has a format that fits a character
data type, DB2 REXX Language Support assigns a VARCHAR type to the input
data.
Enclosing the string in apostrophes is not adequate because REXX removes the
apostrophes when it assigns a literal to a variable. For example, suppose that you
want to pass the value in host variable stringvar to DB2. The value that you want to
pass is the string ’100’. The first thing that you need to do is to assign the string to
the host variable. You might write a REXX command like this:
stringvar = ’100’
After the command executes, stringvar contains the characters 100 (without the
apostrophes). DB2 REXX Language Support then passes the numeric value 100 to
DB2, which is not what you intended.
To pass the apostrophes along with the string, enclose the entire literal,
apostrophes included, in double quotation marks:
stringvar = "’100’"
In this case, REXX assigns the string ’100’ to stringvar, including the single
quotation marks. DB2 REXX Language Support then passes the string ’100’ to DB2,
which is the desired result.
Example: Specifying CHAR: Suppose you want to tell DB2 that the data with
which you update the MIDINIT column of the EMP table is of type CHAR, rather
than VARCHAR. You need to set up an SQLDA that contains a description of a
CHAR column, and then prepare and execute the UPDATE statement using that
SQLDA:
INSQLDA.SQLD = 1 /* SQLDA contains one variable */
INSQLDA.1.SQLTYPE = 453 /* Type of the variable is CHAR, */
/* and the value can be null */
INSQLDA.1.SQLLEN = 1 /* Length of the variable is 1 */
INSQLDA.1.SQLDATA = ’H’ /* Value in variable is H */
INSQLDA.1.SQLIND = 0 /* Input variable is not null */
SQLSTMT="UPDATE EMP" ,
"SET MIDINIT = ?" ,
"WHERE EMPNO = ’000200’"
"EXECSQL PREPARE S100 FROM :SQLSTMT"
"EXECSQL EXECUTE S100 USING DESCRIPTOR :INSQLDA"
Example: Specifying DECIMAL with precision and scale: Suppose you want to
tell DB2 that the data is of type DECIMAL with precision and nonzero scale. You
need to set up an SQLDA that contains a description of a DECIMAL column:
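A minimal sketch of such an SQLDA follows; the precision, scale, and data value shown are illustrative assumptions:

```rexx
INSQLDA.SQLD = 1                   /* SQLDA contains one variable        */
INSQLDA.1.SQLTYPE = 485            /* Type is DECIMAL, value can be null */
INSQLDA.1.SQLLEN.SQLPRECISION = 7  /* Precision of the decimal value     */
INSQLDA.1.SQLLEN.SQLSCALE = 2      /* Scale of the decimal value         */
INSQLDA.1.SQLDATA = 49500.25       /* Value in variable                  */
INSQLDA.1.SQLIND = 0               /* Input variable is not null         */
```

You then prepare and execute the statement with EXECUTE ... USING DESCRIPTOR :INSQLDA, as in the CHAR example.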
Because you cannot use the SELECT INTO statement in a REXX procedure, to
retrieve data from a DB2 table you must prepare a SELECT statement, open a
cursor for the prepared statement, and then fetch rows into host variables or an
SQLDA using the cursor. The following example demonstrates how you can retrieve
data from a DB2 table using an SQLDA:
SQLSTMT= ,
’SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME,’ ,
’ WORKDEPT, PHONENO, HIREDATE, JOB,’ ,
’ EDLEVEL, SEX, BIRTHDATE, SALARY,’ ,
’ BONUS, COMM’ ,
’ FROM EMP’
"EXECSQL DECLARE C1 CURSOR FOR S1"
"EXECSQL PREPARE S1 INTO :OUTSQLDA FROM :SQLSTMT"
"EXECSQL OPEN C1"
Do Until(SQLCODE ¬= 0)
"EXECSQL FETCH C1 USING DESCRIPTOR :OUTSQLDA"
If SQLCODE = 0 Then Do
Line = ’’
Do I = 1 To OUTSQLDA.SQLD
Line = Line OUTSQLDA.I.SQLDATA
End I
Say Line
End
End
The way that you use indicator variables for input host variables in REXX
procedures is slightly different from the way that you use indicator variables in other
languages. When you want to pass a null value to a DB2 column, in addition to
putting a negative value in an indicator variable, you also need to put a valid value
in the corresponding host variable. For example, to set a value of WORKDEPT in
table EMP to null, use statements like these:
SQLSTMT="UPDATE EMP" ,
"SET WORKDEPT = ?"
HVWORKDEPT=’000’
INDWORKDEPT=-1
"EXECSQL PREPARE S100 FROM :SQLSTMT"
"EXECSQL EXECUTE S100 USING :HVWORKDEPT :INDWORKDEPT"
After you retrieve data from a column that can contain null values, you should
always check the indicator variable that corresponds to the output host variable for
that column. If the indicator variable value is negative, the retrieved value is null, so
you can disregard the value in the host variable.
In the following program, the phone number for employee Haas is selected into
variable HVPhone. After the SELECT statement executes, if no phone number for
employee Haas is found, indicator variable INDPhone contains -1.
’SUBCOM DSNREXX’
IF RC THEN ,
S_RC = RXSUBCOM(’ADD’,’DSNREXX’,’DSNREXX’)
ADDRESS DSNREXX
’CONNECT’ ’DSN’
SQLSTMT = ,
"SELECT PHONENO FROM DSN8810.EMP WHERE LASTNAME=’HAAS’"
"EXECSQL DECLARE C1 CURSOR FOR S1"
"EXECSQL PREPARE S1 FROM :SQLSTMT"
Say "SQLCODE from PREPARE is "SQLCODE
"EXECSQL OPEN C1"
Say "SQLCODE from OPEN is "SQLCODE
"EXECSQL FETCH C1 INTO :HVPhone :INDPhone"
Say "SQLCODE from FETCH is "SQLCODE
If INDPhone < 0 Then ,
Say ’Phone number for Haas is null.’
"EXECSQL CLOSE C1"
Say "SQLCODE from CLOSE is "SQLCODE
S_RC = RXSUBCOM(’DELETE’,’DSNREXX’,’DSNREXX’)
To change the isolation level for SQL statements in a REXX procedure, execute the
SET CURRENT PACKAGESET statement to select the package with the isolation
level you need. For example, to change the isolation level to cursor stability,
execute this SQL statement:
"EXECSQL SET CURRENT PACKAGESET=’DSNREXCS’"
Constraints are rules that limit the values that you can insert, delete, or update in a
table. There are two types of constraints:
v Check constraints determine the values that a column can contain. Check
constraints are discussed in “Using check constraints.”
v Referential constraints preserve relationships between tables. Referential
constraints are discussed in “Using referential constraints” on page 245.
Triggers are a series of actions that are invoked when a table is updated. Triggers
are discussed in Chapter 12, “Using triggers for active data,” on page 261.
For example, you might want to make sure that no salary can be below 15000
dollars. To do this, you can create the following check constraint:
CREATE TABLE EMPSAL
(ID INTEGER NOT NULL,
SALARY INTEGER CHECK (SALARY >= 15000));
Using check constraints makes your programming task easier, because you do not
need to enforce those constraints within application programs or with a validation
routine. Define check constraints on one or more columns in a table when that table
is created or altered.
A check constraint is not checked for consistency with other types of constraints.
For example, a column in a dependent table can have a referential constraint with a
delete rule of SET NULL. You can also define a check constraint that prohibits nulls
in the column. As a result, an attempt to delete a parent row fails, because setting
the dependent row to null violates the check constraint.
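A minimal sketch of the conflict just described, with illustrative table and column names that are not from the sample database:

```sql
-- Hypothetical parent and dependent tables.
CREATE TABLE DEPTX
  (DEPTNO CHAR(3) NOT NULL PRIMARY KEY);

CREATE TABLE EMPX
  (EMPNO  CHAR(6) NOT NULL,
   DEPTNO CHAR(3) CHECK (DEPTNO IS NOT NULL),  -- check constraint prohibits nulls
   FOREIGN KEY (DEPTNO) REFERENCES DEPTX
     ON DELETE SET NULL);                      -- delete rule tries to set null

-- Deleting a DEPTX row that EMPX rows reference fails: the SET NULL
-- action would violate the check constraint on EMPX.DEPTNO.
```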
Similarly, a check constraint is not checked for consistency with a validation routine,
which is applied to a table before a check constraint. If the routine requires a
column to be greater than or equal to 10 and a check constraint requires the same
column to be less than 10, table inserts are not possible. Plans and packages do
not need to be rebound after check constraints are defined on or removed from a
table.
Any constraint defined on columns of a base table applies to the views defined on
that base table.
When you use ALTER TABLE to add a check constraint to already populated tables,
the enforcement of the check constraint is determined by the value of the
CURRENT RULES special register as follows:
v If the value is STD, the check constraint is enforced immediately when it is
defined. If a row does not conform, the check constraint is not added to the table
and an error occurs.
v If the value is DB2, the check constraint is added to the table description but its
enforcement is deferred. Because there might be rows in the table that violate
the check constraint, the table is placed in check pending status.
Figure 114 shows the relationships that exist among the tables in the sample
application. Arrows point from parent tables to dependent tables.
(Figure 114, not reproduced here: a diagram of the sample tables DEPT, EMP, ACT,
PROJ, PROJACT, and EMPPROJACT, connected by arrows labeled with the delete
rules CASCADE, SET NULL, and RESTRICT.)
When a table refers to an entity for which there is a master list, it should identify an
occurrence of the entity that actually appears in the master list; otherwise, either the
reference is invalid or the master list is incomplete. Referential constraints enforce
the relationship between a table and a master list.
In some cases, using a timestamp as part of the key can be helpful, for example
when a table does not have a “natural” unique key or if arrival sequence is the key.
Table 25 shows part of the project table, which has the primary key column,
PROJNO.
Table 25. Part of the project table with the primary key column, PROJNO
PROJNO PROJNAME DEPTNO
Table 26 shows part of the project activity table, which has a primary key that
contains more than one column. The primary key is a composite key, which
consists of the PROJNO, ACTNO, and ACSTDATE columns.
Table 26. Part of the Project activities table with a composite primary key
PROJNO ACTNO ACSTAFF ACSTDATE ACENDATE
Another way to allow only unique values in a column is to create a table using the
UNIQUE clause of the CREATE TABLE or ALTER TABLE statement. Like the
PRIMARY KEY clause, specifying a UNIQUE clause prevents use of the table until
you create an index to enforce the uniqueness of the key. If you use the UNIQUE
clause in an ALTER TABLE statement, a unique index must already exist. For more
information about the UNIQUE clause, see Chapter 5 of DB2 SQL Reference.
A table can have no more than one primary key. A primary key obeys the same
restrictions as do index keys:
v The key can include no more than 64 columns.
v No column can be named twice.
You define a list of columns as the primary key of a table with the PRIMARY KEY
clause in the CREATE TABLE statement.
To add a primary key to an existing table, use the PRIMARY KEY clause in an
ALTER TABLE statement. In this case, a unique index must already exist.
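As a sketch of the two forms (the table and column names are illustrative, not from the sample database):

```sql
-- Primary key defined when the table is created:
CREATE TABLE PARTLIST
  (PARTNO CHAR(6) NOT NULL,
   DESCRIPTION VARCHAR(30),
   PRIMARY KEY (PARTNO));

-- Primary key added later; a unique index on PARTNO must already exist:
ALTER TABLE PARTHIST
  ADD PRIMARY KEY (PARTNO);
```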
Incomplete definition
If a table is created with a primary key, its primary index is the first unique index
created on its primary key columns, with the same order of columns as the primary
key columns. The columns of the primary index can be in either ascending or
descending order. The table has an incomplete definition until you create an index
on the parent key. This incomplete definition status is recorded as a P in the
TABLESTATUS column of SYSIBM.SYSTABLES. Use of a table with an incomplete
definition is severely restricted: you can drop the table, create the primary index,
and drop or create other indexes; you cannot load the table, insert data, retrieve
data, update data, delete data, or create foreign keys that reference the primary
key.
Because of these restrictions, plan to create the primary index soon after creating
the table. For example, to create the primary index for the project activity table,
issue:
CREATE UNIQUE INDEX XPROJAC1
ON DSN8810.PROJACT (PROJNO, ACTNO, ACSTDATE);
Creating the primary index resets the incomplete definition status and removes its
associated restrictions. But if you drop the primary index, the table reverts to
incomplete definition status; to reset the status, you must either create the primary
index again or alter the table to drop the primary key.
If the primary key is added later with ALTER TABLE, a unique index on the key
columns must already exist. If more than one unique index is on those columns,
DB2 chooses one arbitrarily to be the primary index.
A foreign key can refer to either a unique or a primary key of the parent table. If the
foreign key refers to a non-primary unique key, you must specify the column names
of the key explicitly. If the column names of the key are not specified explicitly, the
default is to refer to the column names of the primary key of the parent table.
The column names you specify identify the columns of the parent key. The privilege
set must include the ALTER or the REFERENCES privilege on the columns of the
parent key. A unique index must exist on the parent key columns of the parent
table.
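For example, a foreign key that refers to a non-primary unique key must name the parent key columns explicitly. A sketch, with illustrative table and column names:

```sql
-- SERIALNO is a unique (but not primary) key of EQUIPMENT, so the
-- REFERENCES clause must name it explicitly.
ALTER TABLE REPAIRS
  ADD CONSTRAINT FK_SERIAL FOREIGN KEY (SERIALNO)
  REFERENCES EQUIPMENT (SERIALNO);
```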
The name is used in error messages, queries to the catalog, and DROP FOREIGN
KEY statements. Hence, you might want to choose a name explicitly if you are
experimenting with your database design and have more than one foreign key
beginning with the same column (otherwise, DB2 generates the name).
You can create an index on the columns of a foreign key in the same way you
create one on any other set of columns. Most often it is not a unique index. If you
do create a unique index on a foreign key, it introduces an additional constraint on
the values of the columns.
To let an index on the foreign key be used on the dependent table for a delete
operation on a parent table, the columns of the index on the foreign key must be
identical to and in the same order as the columns in the foreign key.
The primary index can also serve as an index on the foreign key if the columns of
the foreign key are the same as the first n columns of the primary index, in the
same order. In the sample project activity table, the
primary index (on PROJNO, ACTNO, ACSTDATE) serves as an index on the
foreign key on PROJNO. It does not serve as an index on the foreign key on
ACTNO, because ACTNO is not the first column of the index.
When a foreign key is added to a populated table, the table space is put into check
pending status.
DB2 does not allow you to create a cycle in which a delete operation on a table
involves that same table. Enforcing that principle creates rules about adding a
foreign key to a table:
v In a cycle of two tables, neither delete rule can be CASCADE.
v In a cycle of more than two tables, two or more delete rules must not be
CASCADE. For example, in a cycle with three tables, two of the delete rules
| must be other than CASCADE. This concept is illustrated in Figure 116. The
| cycle on the left is valid because two or more of the delete rules are not
| CASCADE. The cycle on the right is invalid because it contains two cascading
| deletes.
(Figure 116, not reproduced here: diagrams of a valid cycle and an invalid cycle,
each beginning at TABLE1.)
| Refer to Part 3 (Volume 1) of DB2 Administration Guide for more information about
| multilevel security with row-level granularity.
| You should use this type of referential constraint only when an application process
| verifies the data in a referential integrity relationship. For example, when inserting a
| row in a dependent table, the application should verify that a foreign key exists as a
| primary or unique key in the parent table. To define an informational referential
| You can use a ROWID column to write queries that navigate directly to a row, which
| can be useful in situations where high performance is a requirement. This direct
| navigation, without using an index or scanning the table space, is called direct row
| access. In addition, a ROWID column is a requirement for tables that contain LOB
| columns. This section discusses the use of a ROWID column in direct row access.
| For DB2 to be able to use direct row access for the update operation, the SELECT
| from INSERT statement and the UPDATE statement must execute within the same
| unit of work. If these statements execute in different units of work, the ROWID
| value for the inserted row might change due to a REORG of the table space before
| the update operation. For more information about predicates and direct row access,
| see “Is direct row access possible? (PRIMARY_ACCESSTYPE = D)” on page 739.
| The values that DB2 generates for an identity column depend on how the column is
| defined. The START WITH parameter determines the first value that DB2
| generates. The values advance by the INCREMENT BY parameter in ascending or
| descending order.
| The MINVALUE and MAXVALUE parameters determine the minimum and maximum
| values that DB2 generates. The CYCLE or NO CYCLE parameter determines
| whether DB2 wraps values when it has generated all values between the START
| WITH value and MAXVALUE if the values are ascending, or between the START
| WITH value and MINVALUE if the values are descending.
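The example that follows assumes an identity column with these parameters. The original DDL is not shown in this excerpt, but a definition consistent with the surrounding text would look something like this:

```sql
-- Sketch of table T1 as described in the example: IDENTCOL1 starts
-- at -1, increments by 1, and cycles between MINVALUE -3 and MAXVALUE 3.
CREATE TABLE T1
  (CHARCOL1  CHAR(1),
   IDENTCOL1 SMALLINT GENERATED ALWAYS AS IDENTITY
     (START WITH -1,
      INCREMENT BY 1,
      MINVALUE -3,
      MAXVALUE 3,
      CYCLE));
```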
| Now suppose that you execute the following INSERT statement eight times:
| INSERT INTO T1 (CHARCOL1) VALUES ('A');
| When DB2 generates values for IDENTCOL1, it starts with -1 and increments by 1
| until it reaches the MAXVALUE of 3 on the fifth INSERT. To generate the value for
| the sixth INSERT, DB2 cycles back to MINVALUE, which is -3. T1 looks like this
| after the eight INSERTs are executed:
| CHARCOL1 IDENTCOL1
| ======== =========
| A -1
| A 0
| A 1
| A 2
| A 3
| A -3
| A -2
| A -1
| The value of IDENTCOL1 for the eighth INSERT repeats the value of IDENTCOL1
| for the first INSERT.
| In addition, you can use the IDENTITY_VAL_LOCAL function to return the most
| recently assigned value for an identity column that was generated by an INSERT
| with a VALUES clause within the current processing level. (A new level is initiated
| when a trigger, function, or stored procedure is invoked.)
| Example: Using SELECT from INSERT: Suppose that an EMPLOYEE table and a
| DEPARTMENT table are defined in the following way:
| CREATE TABLE EMPLOYEE
| (EMPNO INTEGER GENERATED ALWAYS AS IDENTITY
| PRIMARY KEY NOT NULL,
| NAME CHAR(30) NOT NULL,
| SALARY DECIMAL(7,2) NOT NULL,
| WORKDEPT SMALLINT);
|
| CREATE TABLE DEPARTMENT
| (DEPTNO SMALLINT NOT NULL PRIMARY KEY,
| DEPTNAME VARCHAR(30),
| MGRNO INTEGER NOT NULL,
| CONSTRAINT REF_EMPNO FOREIGN KEY (MGRNO)
| REFERENCES EMPLOYEE (EMPNO) ON DELETE RESTRICT);
|
| ALTER TABLE EMPLOYEE ADD
| CONSTRAINT REF_DEPTNO FOREIGN KEY (WORKDEPT)
| REFERENCES DEPARTMENT (DEPTNO) ON DELETE SET NULL;
| When you insert a new employee into the EMPLOYEE table, to retrieve the value
| for the EMPNO column, you can use the following SELECT from INSERT
| statement:
| EXEC SQL
| SELECT EMPNO INTO :hv_empno
| FROM FINAL TABLE (INSERT INTO EMPLOYEE (NAME, SALARY, WORKDEPT)
| VALUES ('New Employee', 75000.00, 11));
| The SELECT statement returns the DB2-generated identity value for the EMPNO
| column in the host variable :hv_empno.
| You can then use the value in :hv_empno to update the MGRNO column in the
| DEPARTMENT table with the new employee as the department manager:
| EXEC SQL
| UPDATE DEPARTMENT
| SET MGRNO = :hv_empno
| WHERE DEPTNO = 11;
| Your application can reference a sequence object and coordinate the value as keys
| across multiple rows and tables. However, a table column that gets its values from
| a sequence object does not necessarily have unique values in that column. Even if
| the sequence object has been defined with the NO CYCLE clause, some other
| application might insert values into that table column other than values you obtain
| by referencing that sequence object.
| The values that DB2 generates for a sequence depend on how the sequence is
| created. The START WITH parameter determines the first value that DB2
| generates. The values advance by the INCREMENT BY parameter in ascending or
| descending order.
| You create a sequence named ORDER_SEQ to use as key values for both the
| ORDERS and ORDER_ITEMS tables:
| CREATE SEQUENCE ORDER_SEQ AS INTEGER
| START WITH 1
| INCREMENT BY 1
| NO MAXVALUE
| NO CYCLE
| CACHE 20;
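The two INSERT statements discussed next are not reproduced in this excerpt; they would use the NEXT VALUE and PREVIOUS VALUE expressions along these lines (the table and column names here are illustrative assumptions):

```sql
-- First INSERT: NEXT VALUE generates a new sequence number.
INSERT INTO ORDERS (ORDERNO, CUSTNO)
  VALUES (NEXT VALUE FOR ORDER_SEQ, 101);

-- Second INSERT: PREVIOUS VALUE retrieves that same number.
INSERT INTO ORDER_ITEMS (ORDERNO, PARTNO, QUANTITY)
  VALUES (PREVIOUS VALUE FOR ORDER_SEQ, 'P100', 2);
```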
| The NEXT VALUE expression in the first INSERT statement generates a sequence
| number value for the sequence object ORDER_SEQ. The PREVIOUS VALUE
| expression in the second INSERT statement retrieves that same value because it
| was the sequence number most recently generated for that sequence object within
| the current application process.
For example, a constraint can disallow an update to the salary column of the
employee table if the new value is over a certain amount. A trigger can monitor the
amount by which the salary changes, as well as the salary value. If the change is
above a certain amount, the trigger might substitute a valid value and call a
user-defined function to send a notice to an administrator about the invalid update.
Triggers also move application logic into DB2, which can result in faster application
development and easier maintenance. For example, you can write applications to
control salary changes in the employee table, but each application program that
changes the salary column must include logic to check those changes. A better
method is to define a trigger that controls changes to the salary column. Then DB2
does the checking for any application that modifies salaries.
You create triggers using the CREATE TRIGGER statement. Figure 117 on page
262 shows an example of a CREATE TRIGGER statement.
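Figure 117 itself is not reproduced in this excerpt; a REORDER trigger consistent with the surrounding description might look like this (the column names and the invoked stored procedure are illustrative assumptions):

```sql
-- Sketch: reorder a part when its stock drops below 10% of maximum.
CREATE TRIGGER REORDER
  AFTER UPDATE OF ON_HAND ON PARTS
  REFERENCING NEW AS N_ROW
  FOR EACH ROW MODE DB2SQL
  WHEN (N_ROW.ON_HAND < 0.10 * N_ROW.MAX_STOCKED)
  BEGIN ATOMIC
    CALL ISSUE_SHIP_REQUEST(N_ROW.PARTNO);
  END
```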
When you execute this CREATE TRIGGER statement, DB2 creates a trigger
package called REORDER and associates the trigger package with table PARTS.
DB2 records the timestamp when it creates the trigger. If you define other triggers
on the PARTS table, DB2 uses this timestamp to determine which trigger to activate
first. The trigger is now ready to use.
When you no longer want to use trigger REORDER, you can delete the trigger by
executing the statement:
DROP TRIGGER REORDER;
Executing this statement drops trigger REORDER and its associated trigger
package named REORDER.
If you drop table PARTS, DB2 also drops trigger REORDER and its trigger
package.
Trigger name
Use an ordinary identifier to name your trigger. You can use a qualifier or let DB2
determine the qualifier. When DB2 creates a trigger package for the trigger, it uses
the qualifier for the collection ID of the trigger package. DB2 uses these rules to
determine the qualifier:
v If you use static SQL to execute the CREATE TRIGGER statement, DB2 uses
the authorization ID in the bind option QUALIFIER for the plan or package that
contains the CREATE TRIGGER statement. If the bind command does not
include the QUALIFIER option, DB2 uses the owner of the package or plan.
v If you use dynamic SQL to execute the CREATE TRIGGER statement, DB2 uses
the authorization ID in special register CURRENT SQLID.
Subject table
When you perform an insert, update, or delete operation on this table, the trigger is
activated. You must name a local table in the CREATE TRIGGER statement. You
cannot define a trigger on a catalog table or on a view.
Triggering event
Every trigger is associated with an event. A trigger is activated when the triggering
event occurs in the subject table. The triggering event is one of the following SQL
operations:
v INSERT
v UPDATE
v DELETE
A triggering event can also be an update or delete operation that occurs as the
result of a referential constraint with ON DELETE SET NULL or ON DELETE
CASCADE.
When the triggering event for a trigger is an update operation, the trigger is called
an update trigger. Similarly, triggers for insert operations are called insert triggers,
and triggers for delete operations are called delete triggers.
The SQL statement that performs the triggering SQL operation is called the
triggering SQL statement. Each triggering event is associated with one subject table
and one SQL operation.
If the triggering SQL operation is an update operation, the event can be associated
with specific columns of the subject table. In this case, the trigger is activated only if
the update operation updates any of the specified columns.
Granularity
The triggering SQL statement might modify multiple rows in the table. The
granularity of the trigger determines whether the trigger is activated only once for
the triggering SQL statement or once for every row that the SQL statement
modifies. The granularity values are:
v FOR EACH ROW
The trigger is activated once for each row that DB2 modifies in the subject table.
If the triggering SQL statement modifies no rows, the trigger is not activated.
However, if the triggering SQL statement updates a value in a row to the same
value, the trigger is activated. For example, if an UPDATE trigger is defined on
table COMPANY_STATS, the following SQL statement will activate the trigger.
UPDATE COMPANY_STATS SET NBEMP = NBEMP;
v FOR EACH STATEMENT
The trigger is activated once when the triggering SQL statement executes. The
trigger is activated even if the triggering SQL statement modifies no rows.
Triggers with a granularity of FOR EACH ROW are known as row triggers. Triggers
with a granularity of FOR EACH STATEMENT are known as statement triggers.
Statement triggers can only be after triggers.
Trigger NEW_HIRE is activated once for every row inserted into the employee
table.
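The CREATE TRIGGER statement for NEW_HIRE is not reproduced in this excerpt; a definition consistent with the description would look something like this (the COMPANY_STATS update mirrors the NEWHIRE1 example later in this chapter):

```sql
-- Sketch of a row insert trigger on the employee table.
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END
```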
Transition variables
When you code a row trigger, you might need to refer to the values of columns in
each updated row of the subject table. To do this, specify transition variables in the
REFERENCING clause of your CREATE TRIGGER statement. The two types of
transition variables are:
v Old transition variables, specified with the OLD transition-variable clause, capture
the values of columns before the triggering SQL statement updates them. You
can define old transition variables for update and delete triggers.
v New transition variables, specified with the NEW transition-variable clause,
capture the values of columns after the triggering SQL statement updates them.
You can define new transition variables for update and insert triggers.
Suppose that you have created tables T and S, with the following definitions:
CREATE TABLE T
(ID SMALLINT GENERATED BY DEFAULT AS IDENTITY (START WITH 100),
C2 SMALLINT,
C3 SMALLINT,
C4 SMALLINT);
CREATE TABLE S
(ID SMALLINT GENERATED ALWAYS AS IDENTITY,
C1 SMALLINT);
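The statement discussed next is not reproduced in this excerpt; based on the description, it is an INSERT along these lines:

```sql
-- Inserts 5 into C1; DB2 generates 1 for identity column ID.
INSERT INTO S (C1)
  VALUES (5);
```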
This statement inserts a row into S with a value of 5 for column C1 and a value of 1
for identity column ID. Next, suppose that you execute the following SQL statement,
which activates trigger TR1:
INSERT INTO T (C2)
VALUES (IDENTITY_VAL_LOCAL());
Transition tables
If you want to refer to the entire set of rows that a triggering SQL statement
modifies, rather than to individual rows, use a transition table. Like transition
variables, transition tables can appear in the REFERENCING clause of a CREATE
TRIGGER statement. Transition tables are valid for both row triggers and statement
triggers. The two types of transition tables are:
v Old transition tables, specified with the OLD TABLE transition-table-name clause,
capture the values of columns before the triggering SQL statement updates them.
You can define old transition tables for update and delete triggers.
| v New transition tables, specified with the NEW TABLE transition-table-name
| clause, capture the values of columns after the triggering SQL statement updates
| them. You can define new transition tables for update and insert triggers.
The scope of old and new transition table names is the trigger body. If another table
exists that has the same name as a transition table, any unqualified reference to
that name in the trigger body points to the transition table. To reference the other
table in the trigger body, you must use the fully qualified table name.
The following example uses a new transition table to capture the set of rows that
are inserted into the INVOICE table:
CREATE TRIGGER LRG_ORDR
AFTER INSERT ON INVOICE
REFERENCING NEW TABLE AS N_TABLE
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
SELECT LARGE_ORDER_ALERT(CUST_NO,
TOTAL_PRICE, DELIVERY_DATE)
FROM N_TABLE WHERE TOTAL_PRICE > 10000;
END
Trigger condition
If you want the triggered action to occur only when certain conditions are true, code
a trigger condition. A trigger condition is similar to a predicate in a SELECT, except
that the trigger condition begins with WHEN, rather than WHERE. If you do not
include a trigger condition in your triggered action, the trigger body executes every
time the trigger is activated.
For a row trigger, DB2 evaluates the trigger condition once for each modified row of
the subject table. For a statement trigger, DB2 evaluates the trigger condition once
for each execution of the triggering SQL statement.
If the trigger condition of a before trigger has a fullselect, the fullselect cannot
reference the subject table.
The following example shows a trigger condition that causes the trigger body to
execute only when the number of ordered items is greater than the number of
available items:
CREATE TRIGGER CK_AVAIL
NO CASCADE BEFORE INSERT ON ORDERS
REFERENCING NEW AS NEW_ORDER
FOR EACH ROW MODE DB2SQL
WHEN (NEW_ORDER.QUANTITY >
(SELECT ON_HAND FROM PARTS
WHERE NEW_ORDER.PARTNO=PARTS.PARTNO))
BEGIN ATOMIC
VALUES(ORDER_ERROR(NEW_ORDER.PARTNO,
NEW_ORDER.QUANTITY));
END
Trigger body
In the trigger body, you code the SQL statements that you want to execute
whenever the trigger condition is true. If the trigger body consists of more than one
statement, it must begin with BEGIN ATOMIC and end with END. You cannot
include host variables or parameter markers in your trigger body. If the trigger body
contains a WHERE clause that references transition variables, the comparison
operator cannot be LIKE.
The statements you can use in a trigger body depend on the activation time of the
trigger. Table 27 summarizes which SQL statements you can use in which types of
triggers.
Table 27. Valid SQL statements for triggers and trigger activation times
SQL statement Valid before activation time Valid after activation time
fullselect Yes Yes
CALL Yes Yes
SIGNAL SQLSTATE Yes Yes
VALUES Yes Yes
SET transition-variable Yes No
INSERT No Yes
DELETE (searched) No Yes
UPDATE (searched) No Yes
The following list provides more detailed information about SQL statements that are
valid in triggers:
v fullselect, CALL, and VALUES
Use a fullselect or the VALUES statement in a trigger body to conditionally or
unconditionally invoke a user-defined function. Use the CALL statement to invoke
a stored procedure. See “Invoking stored procedures and user-defined functions
from triggers” on page 269 for more information on invoking user-defined
functions and stored procedures from triggers.
A fullselect in the trigger body of a before trigger cannot reference the subject
table.
v SIGNAL SQLSTATE
Use the SIGNAL SQLSTATE statement in the trigger body to report an error
condition and back out any changes that are made by the trigger, as well as
actions that result from referential constraints on the subject table. When DB2
executes the SIGNAL SQLSTATE statement, it returns an SQLCA to the
application with SQLCODE -438. The SQLCA also includes the following values,
which you supply in the SIGNAL SQLSTATE statement:
– A 5-character value that DB2 uses as the SQLSTATE
– An error message that DB2 places in the SQLERRMC field
In the following example, the SIGNAL SQLSTATE statement causes DB2 to
return an SQLCA with SQLSTATE 75001 and terminate the salary update
operation if an employee’s salary increase is over 20%:
CREATE TRIGGER SAL_ADJ
BEFORE UPDATE OF SALARY ON EMP
REFERENCING OLD AS OLD_EMP
NEW AS NEW_EMP
FOR EACH ROW MODE DB2SQL
WHEN (NEW_EMP.SALARY > (OLD_EMP.SALARY * 1.20))
BEGIN ATOMIC
SIGNAL SQLSTATE '75001'
('Invalid Salary Increase - Exceeds 20%');
END
v SET transition-variable
Because before triggers operate on rows of a table before those rows are
modified, you cannot perform operations in the body of a before trigger that
directly modify the subject table. You can, however, use the SET
transition-variable statement to modify the values in a row before those values go
into the table. For example, this trigger uses a new transition variable to fill in
today’s date for the new employee’s hire date:
CREATE TRIGGER HIREDATE
NO CASCADE BEFORE INSERT ON EMP
REFERENCING NEW AS NEW_VAR
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
SET NEW_VAR.HIRE_DATE = CURRENT_DATE;
END
v INSERT, DELETE (searched), and UPDATE (searched)
If any SQL statement in the trigger body fails during trigger execution, DB2 rolls
back all changes that are made by the triggering SQL statement and the triggered
SQL statements. However, if the trigger body executes actions that are outside of
DB2’s control or are not under the same commit coordination as the DB2
subsystem in which the trigger executes, DB2 cannot undo those actions. Examples
of external actions that are not under DB2’s control are:
v Performing updates that are not under RRS commit control
v Sending an electronic mail message
If the trigger executes external actions that are under the same commit coordination
as the DB2 subsystem under which the trigger executes, and an error occurs during
trigger execution, DB2 places the application process that issued the triggering
statement in a must-rollback state. The application must then execute a rollback
operation to roll back those external actions. Examples of external actions that are
under the same commit coordination as the triggering SQL operation are:
v Executing a distributed update operation
v From a user-defined function or stored procedure, executing an external action
that affects an external resource manager that is under RRS commit control.
Because a before trigger must not modify any table, functions and procedures that
you invoke from a trigger cannot include INSERT, UPDATE, or DELETE statements
that modify the subject table.
Use the VALUES statement to execute a function unconditionally; that is, once for
each execution of a statement trigger or once for each row in a row trigger.
Most of the code for using a table locator is in the function or stored procedure that
receives the locator. “Accessing transition tables in a user-defined function or stored
procedure” on page 328 explains how a function defines a table locator and uses it
to receive a transition table. To pass the transition table from a trigger, specify the
parameter TABLE transition-table-name when you invoke the function or stored
procedure. This causes DB2 to pass a table locator for the transition table to the
user-defined function or stored procedure. For example, this trigger passes a table
locator for a transition table NEWEMPS to stored procedure CHECKEMP:
CREATE TRIGGER EMPRAISE
AFTER UPDATE ON EMP
REFERENCING NEW TABLE AS NEWEMPS
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
CALL CHECKEMP(TABLE NEWEMPS);
END
Trigger cascading
An SQL operation that a trigger performs might modify the subject table or other
tables with triggers, so DB2 also activates those triggers. A trigger that is activated
as the result of another trigger can be activated at the same level as the original
trigger or at a different level. Two triggers, A and B, are activated at different levels
if trigger B is activated after trigger A is activated and completes before trigger A
completes. If trigger B is activated after trigger A is activated and completes after
trigger A completes, then the triggers are at the same level.
For example, in these cases, trigger A and trigger B are activated at the same level:
v Table X has two triggers that are defined on it, A and B. A is a before trigger and
B is an after trigger. An update to table X causes both trigger A and trigger B to
activate.
v Trigger A updates table X, which has a referential constraint with table Y, which
has trigger B defined on it. The referential constraint causes table Y to be
updated, which activates trigger B.
In these cases, trigger A and trigger B are activated at different levels:
When triggers are activated at different levels, it is called trigger cascading. Trigger
cascading can occur only for after triggers because DB2 does not support
cascading of before triggers.
To prevent the possibility of endless trigger cascading, DB2 supports only 16 levels
of cascading of triggers, stored procedures, and user-defined functions. If a trigger,
user-defined function, or stored procedure at the 17th level is activated, DB2 returns
SQLCODE -724 and backs out all SQL changes in the 16 levels of cascading.
However, as with any other SQL error that occurs during trigger execution, if any
action occurs that is outside the control of DB2, that action is not backed out.
You can write a monitor program that issues IFI READS requests to collect DB2
trace information about the levels of cascading of triggers, user-defined functions,
and stored procedures in your programs. See Appendixes (Volume 2) of DB2
Administration Guide for information on how to write a monitor program.
DB2 always activates all before triggers that are defined on a table before the after
triggers that are defined on that table. Within the set of before triggers, and within
the set of after triggers, the activation order is determined by the trigger creation
timestamp.
In this example, triggers NEWHIRE1 and NEWHIRE2 have the same triggering
event (INSERT), the same subject table (EMP), and the same activation time
(AFTER). Suppose that the CREATE TRIGGER statement for NEWHIRE1 is run
before the CREATE TRIGGER statement for NEWHIRE2:
CREATE TRIGGER NEWHIRE1
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END
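The definition of NEWHIRE2 is not reproduced in this excerpt; any second after-insert row trigger on EMP illustrates the point, for example (the body and the DEPTS table shown here are illustrative assumptions):

```sql
-- Hypothetical second trigger with the same event, table, and
-- activation time as NEWHIRE1, created later.
CREATE TRIGGER NEWHIRE2
  AFTER INSERT ON EMP
  REFERENCING NEW AS N_ROW
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE DEPTS SET NBEMP = NBEMP + 1
      WHERE DEPTNO = N_ROW.WORKDEPT;
  END
```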
If two row triggers are defined for the same action, the trigger that was created
earlier is activated first for all affected rows. Then the second trigger is activated for
all affected rows. In the previous example, suppose that an INSERT statement with
a fullselect inserts 10 rows into table EMP. NEWHIRE1 is activated for all 10 rows,
then NEWHIRE2 is activated for all 10 rows.
In general, the following steps occur when triggering SQL statement S1 performs an
insert, update, or delete operation on table T1:
1. DB2 determines the rows of T1 to modify. Call that set of rows M1. The
contents of M1 depend on the SQL operation:
v For a delete operation, all rows that satisfy the search condition of the
statement for a searched delete operation, or the current row for a positioned
delete operation
v For an insert operation, the row identified by the VALUES statement, or the
rows identified by the result table of a SELECT clause within the INSERT
statement
v For an update operation, all rows that satisfy the search condition of the
statement for a searched update operation, or the current row for a
positioned update operation
2. DB2 processes all before triggers that are defined on T1, in order of creation.
Each before trigger executes the triggered action once for each row in M1. If M1
is empty, the triggered action does not execute.
If an error occurs when the triggered action executes, DB2 rolls back all
changes that are made by S1.
3. DB2 makes the changes that are specified in statement S1 to table T1.
If an error occurs, DB2 rolls back all changes that are made by S1.
4. If M1 is not empty, DB2 applies all the following constraints and checks that are
defined on table T1:
v Referential constraints
v Check constraints
v Checks that are due to updates of the table through views defined WITH
CHECK OPTION
Referential constraints with rules of DELETE CASCADE or DELETE SET NULL are
applied before any before delete triggers or before update triggers on the
dependent tables are activated.
If any constraint is violated, DB2 rolls back all changes that are made by
constraint actions or by statement S1.
5. DB2 processes all after triggers that are defined on T1, and all after triggers on
tables that are modified as the result of referential constraint actions, in order of
creation.
If any triggered actions contain SQL insert, update, or delete operations, DB2
repeats steps 1 through 5 for each operation.
For example, table DEPT is a parent table of EMP, with these conditions:
v The DEPTNO column of DEPT is the primary key.
v The WORKDEPT column of EMP is the foreign key.
v The constraint is ON DELETE SET NULL.
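These conditions correspond to table definitions like the following sketch (the column data types follow the DB2 sample tables, but the exact DDL is an assumption for illustration):

```sql
CREATE TABLE DEPT
  (DEPTNO   CHAR(3) NOT NULL,
   PRIMARY KEY (DEPTNO));

CREATE TABLE EMP
  (EMPNO    CHAR(6) NOT NULL,
   WORKDEPT CHAR(3),
   PRIMARY KEY (EMPNO),
   FOREIGN KEY (WORKDEPT) REFERENCES DEPT
     ON DELETE SET NULL);
```

With ON DELETE SET NULL, deleting a DEPT row sets WORKDEPT to null in every dependent EMP row, which counts as an update operation on EMP.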
Suppose the following trigger is defined on EMP:
CREATE TRIGGER EMPRAISE
AFTER UPDATE ON EMP
REFERENCING NEW TABLE AS NEWEMPS
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
VALUES(CHECKEMP(TABLE NEWEMPS));
END
Also suppose that an SQL statement deletes the row with department number E21
from DEPT. Because of the constraint, DB2 finds the rows in EMP with a
WORKDEPT value of E21 and sets WORKDEPT in those rows to null. This is
equivalent to an update operation on EMP, which has update trigger EMPRAISE.
Therefore, because EMPRAISE is an after trigger, EMPRAISE is activated after the
constraint action sets WORKDEPT values to null.
| If the ID you are using does not have write-down privilege and you execute an
| INSERT or UPDATE statement, the security label value of your ID is assigned to
| the security label column for the rows that you are inserting or updating.
| When a BEFORE trigger is activated, the value of the transition variable that
| corresponds to the security label column is the security label of the ID if either of
| the following conditions is true:
| v The user does not have write-down privilege
| v The value for the security label column is not specified
When DB2 executes the FETCH statement that positions cursor C1 for the first
time, DB2 evaluates the subselect, SELECT B1 FROM T2, to produce a result table
that contains the two rows of table T2:
When DB2 executes the positioned UPDATE statement for the first time, trigger
TR1 is activated. When the body of trigger TR1 executes, the row with value 2 is
deleted from T2. However, because SELECT B1 FROM T2 is evaluated only once,
when the FETCH statement is executed again, DB2 finds the second row of T1,
even though the second row of T2 was deleted. The FETCH statement positions
the cursor to the second row of T1, and the second row of T1 is updated. The
update operation causes the trigger to be activated again, which causes DB2 to
attempt to delete the second row of T2, even though that row was already deleted.
To avoid processing of the second row after it should have been deleted, use a
correlated subquery in the cursor declaration:
DCL C1 CURSOR FOR
SELECT A1 FROM T1 X
WHERE EXISTS (SELECT B1 FROM T2 WHERE X.A1 = B1)
FOR UPDATE OF A1;
In this case, the subquery, SELECT B1 FROM T2 WHERE X.A1 = B1, is evaluated
for each FETCH statement. The first time that the FETCH statement executes, it
positions the cursor to the first row of T1. The positioned UPDATE operation
activates the trigger, which deletes the second row of T2. Therefore, when the
FETCH statement executes again, no row is selected, so no update operation or
triggered action occurs.
The contents of tables T2 and T3 after the UPDATE statement executes depend on
the order in which DB2 updates the rows of T1.
If DB2 updates the first row of T1 first, after the UPDATE statement and the trigger
execute for the first time, the values in the three tables are:
Table T1   Table T2   Table T3
A1         B1         C1
==         ==         ==
2          2          2
2
After the second row of T1 is updated, the values in the three tables are:
Table T1   Table T2   Table T3
A1         B1         C1
==         ==         ==
2          2          2
3          3          2
                      3
However, if DB2 updates the second row of T1 first, after the UPDATE statement
and the trigger execute for the first time, the values in the three tables are:
Table T1   Table T2   Table T3
A1         B1         C1
==         ==         ==
1          3          3
3
After the first row of T1 is updated, the values in the three tables are:
Table T1   Table T2   Table T3
A1         B1         C1
==         ==         ==
2          3          3
3          2          3
                      2
Introduction to LOBs
Working with LOBs involves defining the LOBs to DB2, moving the LOB data into
DB2 tables, and then using SQL operations to manipulate the data. This chapter
concentrates on manipulating LOB data using SQL statements. For information on
defining LOBs to DB2, see Chapter 5 of DB2 SQL Reference. For information on
how DB2 utilities manipulate LOB data, see Part 2 of DB2 Utility Guide and
Reference.
These are the basic steps for defining LOBs and moving the data into DB2:
| 1. Define a column of the appropriate LOB type and optionally a row identifier
| (ROWID) column in a DB2 table. Define only one ROWID column, even if there
| are multiple LOB columns in the table. If you do not create a ROWID column
| before you define a LOB column, DB2 creates a hidden ROWID column and
| appends it as the last column of the table. For information about what hidden
| ROWID columns are, see the description on page 282.
| The LOB column holds information about the LOB, not the LOB data itself. The
| table that contains the LOB information is called the base table. DB2 uses the
| ROWID column to locate your LOB data. You can define the LOB column (and
| optionally the ROWID column) in a CREATE TABLE or ALTER TABLE
| statement.
| You can add both a LOB column and a ROWID column to an existing table by
| using two ALTER TABLE statements: add the ROWID column with the first
| ALTER TABLE statement and the LOB column with the second. If you add a
| LOB column first, DB2 generates a hidden ROWID column.
| If you add a ROWID column after you add a LOB column, the table has two
| ROWID columns: the implicitly created hidden column and the
| explicitly-created column. In this case, DB2 ensures that the values of the two
| ROWID columns are always identical.
2. Create a table space and table to hold the LOB data.
The table space and table are called a LOB table space and an auxiliary table.
If your base table is nonpartitioned, you must create one LOB table space and
one auxiliary table for each LOB column. If your base table is partitioned, for
each LOB column, you must create one LOB table space and one auxiliary
table for each partition. For example, if your base table has three partitions, you
must create three LOB table spaces and three auxiliary tables for each LOB
column. Create these objects using the CREATE LOB TABLESPACE and
CREATE AUXILIARY TABLE statements.
3. Create an index on the auxiliary table.
Each auxiliary table must have exactly one index. Use CREATE INDEX for this
task.
4. Put the LOB data into DB2.
If the total length of a LOB column and the base table row is less than 32 KB,
you can use the LOAD utility to put the data in DB2. Otherwise, you must use
INSERT or UPDATE statements. Even though the data is stored in the auxiliary
table, the LOAD utility statement or INSERT statement specifies the base table.
Using INSERT can be difficult because your application needs enough storage
to hold the entire value that goes into the LOB column.
| Hidden ROWID column: If you do not create a ROWID column before you define a
| LOB column, DB2 creates a hidden ROWID column for you. A hidden ROWID
| column is not visible in the results of SELECT * statements, including those in
| DESCRIBE and CREATE VIEW statements. However, it is visible to all statements
| that refer to the column directly. DB2 assigns the GENERATED ALWAYS attribute
| and the name DB2_GENERATED_ROWID_FOR_LOBSnn to a hidden ROWID
| column. DB2 appends the identifier nn only if the name
| DB2_GENERATED_ROWID_FOR_LOBS already exists in the table; in that case, DB2
| starts at 00 and increments by 1 until the name is unique within the table.
Example: Adding a CLOB column: Suppose that you want to add a resume for
each employee to the employee table. Employee resumes are no more than 5 MB
in size. The employee resumes contain single-byte characters, so you can define
the resumes to DB2 as CLOBs. You therefore need to add a column of data type
| CLOB with a length of 5 MB to the employee table. If you want to define a ROWID
| column explicitly, you must define it before you define the CLOB column.
Execute an ALTER TABLE statement to add the ROWID column, and then execute
another ALTER TABLE statement to add the CLOB column. Use statements like
these:
ALTER TABLE EMP
ADD ROW_ID ROWID NOT NULL GENERATED ALWAYS;
COMMIT;
ALTER TABLE EMP
ADD EMP_RESUME CLOB(5M);
COMMIT;
Next, you need to define a LOB table space and an auxiliary table to hold the
employee resumes. You also need to define an index on the auxiliary table. You
must define the LOB table space in the same database as the associated base
table. You can use statements like these:
CREATE LOB TABLESPACE RESUMETS
IN DSN8D81A
LOG NO;
COMMIT;
CREATE AUXILIARY TABLE EMP_RESUME_TAB
IN DSN8D81A.RESUMETS
STORES DSN8810.EMP
COLUMN EMP_RESUME;
COMMIT;
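The required index on the auxiliary table might be created with a statement like the following sketch (the index name XEMP_RESUME is an assumption; an index on an auxiliary table takes no column list):

```sql
CREATE UNIQUE INDEX XEMP_RESUME
ON EMP_RESUME_TAB;
COMMIT;
```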
Now that your DB2 objects for the LOB data are defined, you can load your
employee resumes into DB2. To do this in an SQL application, you can define a
host variable to hold the resume, copy the resume data from a file into the host
variable, and then execute an UPDATE statement to copy the data into DB2.
Although the data goes into the auxiliary table, your UPDATE statement specifies
the name of the base table. The C language declaration of the host variable might
be:
SQL TYPE IS CLOB(5M) resumedata;
In this example, employeenum is a host variable that identifies the employee who is
associated with a resume.
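For example, an UPDATE statement like the following sketch copies the resume from host variable resumedata into the EMP_RESUME column (the exact statement is an assumption based on the host variables described above):

```sql
EXEC SQL UPDATE EMP
SET EMP_RESUME = :resumedata
WHERE EMPNO = :employeenum;
```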
After your LOB data is in DB2, you can write SQL applications to manipulate the
data. You can use most SQL statements with LOBs. For example, you can use
statements like these to extract information about an employee’s department from
the resume:
EXEC SQL BEGIN DECLARE SECTION;
char employeenum[6];
long deptInfoBeginLoc;
long deptInfoEndLoc;
SQL TYPE IS CLOB_LOCATOR resume;
SQL TYPE IS CLOB_LOCATOR deptBuffer;
EXEC SQL END DECLARE SECTION;
.
.
.
EXEC SQL DECLARE C1 CURSOR FOR
SELECT EMPNO, EMP_RESUME FROM EMP;
.
.
.
EXEC SQL FETCH C1 INTO :employeenum, :resume;
.
.
.
EXEC SQL SET :deptInfoBeginLoc =
POSSTR(:resume, ’Department Information’);
These statements use host variables of data type large object locator (LOB locator).
LOB locators let you manipulate LOB data without moving the LOB data into host
variables. By using LOB locators, you need much smaller amounts of memory for
your programs. LOB locators are discussed in “Using LOB locators to save storage”
on page 288.
For instructions on how to prepare and run the sample LOB applications, see Part 2
of DB2 Installation Guide.
You can declare LOB host variables and LOB locators in assembler, C, C++,
COBOL, Fortran, and PL/I. For each host variable or locator of SQL type BLOB,
CLOB, or DBCLOB that you declare, DB2 generates an equivalent declaration that
uses host language data types. When you refer to a LOB host variable or locator in
an SQL statement, you must use the variable you specified in the SQL type
declaration. When you refer to the host variable in a host language statement, you
must use the variable that DB2 generates. See Chapter 9, “Embedding SQL
statements in host languages,” on page 129 for the syntax of LOB declarations in
each language and for host language equivalents for each LOB type.
The following examples show you how to declare LOB host variables in each
supported language. In each table, the left column contains the declaration that you
code in your application program. The right column contains the declaration that
DB2 generates.
Notes:
1. Because assembler language allows character declarations of no more than 65535 bytes,
DB2 separates the host language declarations for BLOB and CLOB host variables that
are longer than 65535 bytes into two parts.
2. Because assembler language allows graphic declarations of no more than 65534 bytes,
DB2 separates the host language declarations for DBCLOB host variables that are longer
than 65534 bytes into two parts.
Notes:
1. Because the COBOL language allows character declarations of no more than 32767
bytes, for BLOB or CLOB host variables that are greater than 32767 bytes in length, DB2
creates multiple host language declarations of 32767 or fewer bytes.
2. Because the COBOL language allows graphic declarations of no more than 32767
double-byte characters, for DBCLOB host variables that are greater than 32767
double-byte characters in length, DB2 creates multiple host language declarations of
32767 or fewer double-byte characters.
Declarations of LOB host variables in PL/I: Table 33 shows PL/I declarations for
some typical LOB types.
Table 33. Examples of PL/I variable declarations
You declare this variable DB2 generates this variable
Notes:
1. Because the PL/I language allows character declarations of no more than 32767 bytes,
for BLOB or CLOB host variables that are greater than 32767 bytes in length, DB2
creates host language declarations in the following way:
v If the length of the LOB is greater than 32767 bytes and evenly divisible by 32767,
DB2 creates an array of 32767-byte strings. The dimension of the array is
length/32767.
v If the length of the LOB is greater than 32767 bytes but not evenly divisible by 32767,
DB2 creates two declarations: The first is an array of 32767-byte strings, where
the dimension of the array, n, is length/32767. The second is a character string
of length length-n*32767.
2. Because the PL/I language allows graphic declarations of no more than 16383
double-byte characters, DB2 creates host language declarations in the following way:
v If the length of the LOB is greater than 16383 characters and evenly divisible by
16383, DB2 creates an array of 16383-character strings. The dimension of the array is
length/16383.
v If the length of the LOB is greater than 16383 characters but not evenly divisible by
16383, DB2 creates two declarations: The first is an array of 16383-character
strings, where the dimension of the array, m, is length/16383. The second is a
graphic string of length length-m*16383.
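As a worked example of these rules, consider a PL/I declaration of SQL TYPE IS CLOB(40000) for a host variable named VAR1 (both the length and the name are hypothetical). Because 40000 is greater than 32767 and not evenly divisible by 32767, the array dimension n is 40000/32767 = 1 and the remaining character string is 40000 - 1*32767 = 7233 bytes, so DB2 generates declarations like:

```pli
01 VAR1,
  02 VAR1_LENGTH
     BIN FIXED(31),
  02 VAR1_DATA,
    03 VAR1_DATA1(1)
       CHAR(32767),
    03 VAR1_DATA2
       CHAR(7233);
```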
LOB materialization
LOB materialization means that DB2 places a LOB value into contiguous storage in
a data space. Because LOB values can be very large, DB2 avoids materializing
LOB data until absolutely necessary. However, DB2 must materialize LOBs when
your application program:
v Calls a user-defined function with a LOB as an argument
v Moves a LOB into or out of a stored procedure
v Assigns a LOB host variable to a LOB locator host variable
v Converts a LOB from one CCSID to another
Data spaces for LOB materialization: The amount of storage that is used in data
spaces for LOB materialization depends on a number of factors including:
v The size of the LOBs
v The number of LOBs that need to be materialized in a statement
DB2 allocates a certain number of data spaces for LOB materialization. If there is
insufficient space available in a data space for LOB materialization, your application
receives SQLCODE -904.
Although you cannot completely avoid LOB materialization, you can minimize it by
using LOB locators, rather than LOB host variables, in your application programs.
See “Using LOB locators to save storage” for information on how to use LOB
locators.
A LOB locator is associated with a LOB value or expression, not with a row in a
DB2 table or a physical storage location in a table space. Therefore, after you
select a LOB value using a locator, the value in the locator normally does not
change until the current unit of work ends. However, the value of the LOB itself
can change.
If you want to remove the association between a LOB locator and its value before a
unit of work ends, execute the FREE LOCATOR statement. To keep the association
between a LOB locator and its value after the unit of work ends, execute the HOLD
LOCATOR statement. After you execute a HOLD LOCATOR statement, the locator
keeps the association with the corresponding value until you execute a FREE
LOCATOR statement or the program ends.
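For example, the following statements keep a locator association past the end of a unit of work and later release it (the locator name HV_DOC_LOCATOR1 is borrowed from the sample program later in this chapter, purely for illustration):

```sql
EXEC SQL HOLD LOCATOR :HV_DOC_LOCATOR1;
EXEC SQL COMMIT;
/* the locator association survives the commit */
EXEC SQL FREE LOCATOR :HV_DOC_LOCATOR1;
```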
If you execute HOLD LOCATOR or FREE LOCATOR dynamically, you cannot use
EXECUTE IMMEDIATE. For more information on HOLD LOCATOR and FREE
LOCATOR, see Chapter 5 of DB2 SQL Reference.
Figure 118 on page 290 is a C language program that defers evaluation of a LOB
expression. The program runs on a client and modifies LOB data at a server. The
program searches for a particular resume (EMPNO = ’000130’) in the
EMP_RESUME table. It then uses LOB locators to rearrange a copy of the resume
(with EMPNO = ’A00130’). In the copy, the Department Information Section appears
at the end of the resume. The program then inserts the copy into EMP_RESUME
without modifying the original resume.
Because the program in Figure 118 on page 290 uses LOB locators, rather than
placing the LOB data into host variables, no LOB data is moved until the INSERT
statement executes. In addition, no LOB data moves between the client and the
server.
/**************************/
/* Declare host variables */ 1
/**************************/
EXEC SQL BEGIN DECLARE SECTION;
char userid[9];
char passwd[19];
long HV_START_DEPTINFO;
long HV_START_EDUC;
long HV_RETURN_CODE;
SQL TYPE IS CLOB_LOCATOR HV_NEW_SECTION_LOCATOR;
SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR1;
SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR2;
SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR3;
EXEC SQL END DECLARE SECTION;
/*************************************************/
/* Delete any instance of "A00130" from previous */
/* executions of this sample */
/*************************************************/
EXEC SQL DELETE FROM EMP_RESUME WHERE EMPNO = ’A00130’;
/*************************************************/
/* Use a single row select to get the document */ 2
/*************************************************/
EXEC SQL SELECT RESUME
INTO :HV_DOC_LOCATOR1
FROM EMP_RESUME
WHERE EMPNO = ’000130’
AND RESUME_FORMAT = ’ascii’;
/*****************************************************/
/* Use the POSSTR function to locate the start of */
/* sections "Department Information" and "Education" */ 3
/*****************************************************/
EXEC SQL SET :HV_START_DEPTINFO =
POSSTR(:HV_DOC_LOCATOR1, ’Department Information’);
/*******************************************************/
/* Append the Department Information to the end */
/* of the resume */
/*******************************************************/
EXEC SQL SET :HV_DOC_LOCATOR3 =
:HV_DOC_LOCATOR2 || :HV_NEW_SECTION_LOCATOR;
/*******************************************************/
/* Store the modified resume in the table. This is */ 4
/* where the LOB data really moves. */
/*******************************************************/
EXEC SQL INSERT INTO EMP_RESUME VALUES (’A00130’, ’ascii’,
:HV_DOC_LOCATOR3, DEFAULT);
/*********************/
/* Free the locators */ 5
/*********************/
EXEC SQL FREE LOCATOR :HV_DOC_LOCATOR1, :HV_DOC_LOCATOR2, :HV_DOC_LOCATOR3;
Notes:
1 Declare the LOB locators here.
2 This SELECT statement associates LOB locator
HV_DOC_LOCATOR1 with the value of column RESUME for
employee number 000130.
3 The next five SQL statements use LOB locators to manipulate the
resume data without moving the data.
4 Evaluation of the LOB expressions in the previous statements has
been deferred until execution of this INSERT statement.
5 Free all LOB locators to release them from their associated values.
When you use LOB locators to retrieve data from columns that can contain null
values, define indicator variables for the LOB locators, and check the indicator
variables after you fetch data into the LOB locators. If an indicator variable is null
after a fetch operation, you cannot use the value in the LOB locator.
Chapter 14. Programming for large objects (LOBs) 291
Valid assignments for LOB locators
Although you usually use LOB locators for assigning data to and retrieving data
from LOB columns, you can also use LOB locators to assign data to CHAR,
VARCHAR, GRAPHIC, or VARGRAPHIC columns. However, you cannot fetch data
from CHAR, VARCHAR, GRAPHIC, or VARGRAPHIC columns into LOB locators.
This chapter contains information that applies to all user-defined functions and
specific information about user-defined functions in languages other than Java. For
information about writing, preparing, and running Java user-defined functions, see
DB2 Application Programming Guide and Reference for Java.
The user-defined function’s definer and invoker determine that this new
user-defined function should have these characteristics:
v The user-defined function name is CALC_BONUS.
v The two input fields are of type DECIMAL(9,2).
v The output field is of type DECIMAL(9,2).
v The program for the user-defined function is written in COBOL and has a load
module name of CBONUS.
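Given these characteristics, the definer might create the function with a statement like this sketch (options beyond those listed above are assumptions):

```sql
CREATE FUNCTION CALC_BONUS(DECIMAL(9,2), DECIMAL(9,2))
RETURNS DECIMAL(9,2)
EXTERNAL NAME ’CBONUS’
PARAMETER STYLE SQL
LANGUAGE COBOL;
```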
User-defined function invokers write and prepare application programs that invoke
CALC_BONUS. An invoker might write a statement like this, which uses the
user-defined function to update the BONUS field in the employee table:
UPDATE EMP
SET BONUS = CALC_BONUS(SALARY,COMM);
Notes:
1. This version of ALTDATE has one input parameter, of type VARCHAR(13).
2. This version of ALTDATE has three input parameters, of type VARCHAR(17),
VARCHAR(13), and VARCHAR(13).
3. This version of ALTTIME has one input parameter, of type VARCHAR(14).
4. This version of ALTTIME has three input parameters, of type VARCHAR(11),
VARCHAR(14), and VARCHAR(14).
Member DSN8DUWC contains a client program that shows you how to invoke the
WEATHER user-defined table function.
Member DSNTEJ2U shows you how to define and prepare the sample user-defined
functions and the client program.
Notes:
| 1. RETURNS TABLE and CARDINALITY are valid only for user-defined table functions. For a single query, you can
| override the CARDINALITY value by specifying a CARDINALITY clause for the invocation of a user-defined table
| function in the SELECT statement. For additional information, see “Special techniques to influence access path
| selection” on page 713.
2. An SQL user-defined function can return only one parameter.
3. LANGUAGE SQL is not valid for an external user-defined function.
4. Only LANGUAGE SQL is valid for an SQL user-defined function.
5. MODIFIES SQL DATA and ALLOW PARALLEL are not valid for user-defined table functions.
6. MODIFIES SQL DATA and NO SQL are not valid for SQL user-defined functions.
| 7. PARAMETER STYLE JAVA is valid only with LANGUAGE JAVA. PARAMETER STYLE SQL is valid only with
LANGUAGE values other than LANGUAGE JAVA.
8. RETURNS NULL ON NULL INPUT is not valid for an SQL user-defined function.
The user-defined function takes two integer values as input. The output from the
user-defined function is of type integer. The user-defined function is in the MATH
schema, is written in assembler, and contains no SQL statements. This CREATE
FUNCTION statement defines the user-defined function:
CREATE FUNCTION MATH."/" (INT, INT)
RETURNS INTEGER
SPECIFIC DIVIDE
EXTERNAL NAME ’DIVIDE’
LANGUAGE ASSEMBLE
| PARAMETER STYLE SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION
FENCED;
Suppose that you want the FINDSTRING user-defined function to work on BLOB
data types, as well as CLOB types. You can define another instance of the
user-defined function that specifies a BLOB type as input:
CREATE FUNCTION FINDSTRING (BLOB(500K), VARCHAR(200))
RETURNS INTEGER
CAST FROM FLOAT
SPECIFIC FINDSTRINBLOB
EXTERNAL NAME ’FNDBLOB’
LANGUAGE C
| PARAMETER STYLE SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION
FENCED
| STOP AFTER 3 FAILURES;
The user-defined function is written in COBOL, uses SQL only to perform queries,
always produces the same output for given input, and should not execute as a
parallel task. The program is reentrant, and successive invocations of the
user-defined function share information. You expect an invocation of the
user-defined function to return about 20 rows.
You can write an external user-defined function in assembler, C, C++, COBOL, PL/I,
or Java. User-defined functions that are written in COBOL can include
object-oriented extensions, just as other DB2 COBOL programs can. User-defined
functions that are written in Java follow coding guidelines and restrictions specific to
Java. For information about writing Java user-defined functions, see DB2
Application Programming Guide and Reference for Java.
The following sections include additional information that you need when you write
a user-defined function:
v “Restrictions on user-defined function programs”
v “Coding your user-defined function as a main program or as a subprogram”
v “Parallelism considerations” on page 302
v “Passing parameter values to and from a user-defined function” on page 303
v “Examples of receiving parameters in a user-defined function” on page 315
v “Using special registers in a user-defined function” on page 324
v “Using a scratchpad in a user-defined function” on page 327
v “Accessing transition tables in a user-defined function or stored procedure” on
page 328
If you code your user-defined function as a subprogram and manage the storage
and files yourself, you can get better performance. The user-defined function should
always free any allocated storage before it exits. To keep data between invocations
of the user-defined function, use a scratchpad.
You must code a user-defined table function that accesses external resources as a
subprogram. Also ensure that the definer specifies the EXTERNAL ACTION
parameter in the CREATE FUNCTION or ALTER FUNCTION statement. Program
variables for a subprogram persist between invocations of the user-defined function,
and use of the EXTERNAL ACTION parameter ensures that the user-defined
function stays in the same address space from one invocation to another.
Parallelism considerations
If the definer specifies the parameter ALLOW PARALLEL in the definition of a
user-defined scalar function, and the invoking SQL statement runs in parallel, the
function can run under a parallel task. DB2 executes a separate instance of the
user-defined function for each parallel task. When you write your function program,
you need to understand how the following parameter values interact with ALLOW
PARALLEL so that you can avoid unexpected results:
v SCRATCHPAD
When an SQL statement invokes a user-defined function that is defined with the
ALLOW PARALLEL parameter, DB2 allocates one scratchpad for each parallel
task of each reference to the function. This can lead to unpredictable or incorrect
results.
For example, suppose that the user-defined function uses the scratchpad to
count the number of times it is invoked. If a scratchpad is allocated for each
parallel task, this count is the number of invocations that are done by the
parallel task, not the number for the entire SQL statement, which is probably
not the desired result.
v FINAL CALL
If a user-defined function performs an external action, such as sending a note,
on its final call, one note is sent for each parallel task instead of once for
the function invocation.
v EXTERNAL ACTION
Some user-defined functions with external actions can receive incorrect results if
the function is executed by parallel tasks.
For example, if the function sends a note for each initial call to the function, one
note is sent for each parallel task instead of once for the function invocation.
v NOT DETERMINISTIC
A user-defined function that is not deterministic can generate incorrect results if it
is run under a parallel task.
For example, suppose that you execute the following query under parallel tasks:
SELECT * FROM T1 WHERE C1 = COUNTER();
Figure 120 on page 304 shows the structure of the parameter list that DB2 passes
to a user-defined function. An explanation of each parameter follows.
Input parameter values: DB2 obtains the input parameters from the invoker’s
parameter list, and your user-defined function receives those parameters according
to the rules of the host language in which the user-defined function is written. The
number of input parameters is the same as the number of parameters in the
user-defined function invocation. If one of the parameters in the function invocation
is an expression, DB2 evaluates the expression and assigns the result of the
expression to the parameter.
For LOBs, ROWIDs, and locators, see Table 37 for the assembler data types that
are compatible with the data types in the user-defined function definition.
Table 37. Compatible assembler language declarations for LOBs, ROWIDs, and locators
SQL data type in definition Assembler declaration
TABLE LOCATOR DS FL4
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) If n <= 65535:
var DS 0FL4
var_length DS FL4
var_data DS CLn
If n > 65535:
var DS 0FL4
var_length DS FL4
var_data DS CL65535
ORG var_data+(n-65535)
CLOB(n) If n <= 65535:
var DS 0FL4
var_length DS FL4
var_data DS CLn
If n > 65535:
var DS 0FL4
var_length DS FL4
var_data DS CL65535
ORG var_data+(n-65535)
DBCLOB(n) If m (=2*n) <= 65534:
var DS 0FL4
var_length DS FL4
var_data DS CLm
If m > 65534:
var DS 0FL4
var_length DS FL4
var_data DS CL65534
ORG var_data+(m-65534)
ROWID DS HL2,CL40
For LOBs, ROWIDs, and locators, see Table 39 for the COBOL data types that are
compatible with the data types in the user-defined function definition.
Table 39. Compatible COBOL declarations for LOBs, ROWIDs, and locators
SQL data type in definition COBOL declaration
TABLE LOCATOR 01 var PIC S9(9) USAGE IS BINARY.
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
For LOBs, ROWIDs, and locators, see Table 40 for the PL/I data types that are
compatible with the data types in the user-defined function definition.
Table 40. Compatible PL/I declarations for LOBs, ROWIDs, and locators
SQL data type in definition PL/I
TABLE LOCATOR BIN FIXED(31)
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) If n <= 32767:
01 var,
03 var_LENGTH
BIN FIXED(31),
03 var_DATA
CHAR(n);
If n > 32767:
01 var,
02 var_LENGTH
BIN FIXED(31),
02 var_DATA,
03 var_DATA1(n/32767)
CHAR(32767),
03 var_DATA2
CHAR(mod(n,32767));
CLOB(n) If n <= 32767:
01 var,
03 var_LENGTH
BIN FIXED(31),
03 var_DATA
CHAR(n);
If n > 32767:
01 var,
02 var_LENGTH
BIN FIXED(31),
02 var_DATA,
03 var_DATA1(n/32767)
CHAR(32767),
03 var_DATA2
CHAR(mod(n,32767));
DBCLOB(n) If n <= 16383:
01 var,
03 var_LENGTH
BIN FIXED(31),
03 var_DATA
GRAPHIC(n);
If n > 16383:
01 var,
02 var_LENGTH
BIN FIXED(31),
02 var_DATA,
03 var_DATA1(n/16383)
GRAPHIC(16383),
03 var_DATA2
GRAPHIC(mod(n,16383));
ROWID CHAR(40) VAR;
Result parameters: Set these values in your user-defined function before exiting.
For a user-defined scalar function, you return one result parameter. For a
user-defined table function, you return the same number of parameters as columns
in the RETURNS TABLE clause of the CREATE FUNCTION statement. DB2
allocates a buffer for each result parameter value and passes the buffer address to
the user-defined function. Your user-defined function places each result parameter
value in its buffer. You must ensure that the length of the value you place in each
output buffer does not exceed the buffer length. Use the SQL data type and length
in the CREATE FUNCTION statement to determine the buffer length.
See “Passing parameter values to and from a user-defined function” on page 303 to
determine the host data type to use for each result parameter value. If the CREATE
FUNCTION statement contains a CAST FROM clause, use a data type that
corresponds to the SQL data type in the CAST FROM clause. Otherwise, use a
data type that corresponds to the SQL data type in the RETURNS or RETURNS
TABLE clause.
To improve performance for user-defined table functions that return many columns,
you can pass values for a subset of columns to the invoker. For example, a
user-defined table function might be defined to return 100 columns, but the invoker
needs values for only two columns. Use the DBINFO parameter to indicate to DB2
the columns for which you will return values. Then return values for only those
columns. See the explanation of DBINFO on page 313 for information about how to
indicate the columns of interest.
Input parameter indicators: These are SMALLINT values, which DB2 sets before
it passes control to the user-defined function. You use the indicators to determine
whether the corresponding input parameters are null. The number and order of the
indicators are the same as the number and order of the input parameters. On entry
to the user-defined function, each indicator contains one of these values:
0 The input parameter value is not null.
negative The input parameter value is null.
Result indicators: These are SMALLINT values, which you must set before the
user-defined function ends to indicate to the invoking program whether each result
parameter value is null. A user-defined scalar function has one result indicator. A
user-defined table function has the same number of result indicators as the number
of result parameters. The order of the result indicators is the same as the order of
the result parameters. Set each result indicator to one of these values:
0 or positive The result parameter is not null.
negative The result parameter is null.
| SQLSTATE value: This CHAR(5) value represents the SQLSTATE that is passed in
| to the program from the database manager. The initial value is set to ‘00000’.
| Although the SQLSTATE is usually not set by the program, it can be set as the
| result SQLSTATE that is used to return an error or a warning. Returned values that
| start with anything other than ‘00’, ‘01’, or ‘02’ are error conditions.
| Refer to DB2 Messages and Codes for more information about the valid SQLSTATE
| values that a program may generate.
User-defined function name: DB2 sets this value in the parameter list before the
| user-defined function executes. This value is VARCHAR(257): 128 bytes for the
| schema name, 1 byte for a period, and 128 bytes for the user-defined function
| name. If you use the same code to implement multiple versions of a user-defined
function, you can use this parameter to determine which version of the function the
invoker wants to execute.
Specific name: DB2 sets this value in the parameter list before the user-defined
function executes. This value is VARCHAR(128) and is either the specific name
from the CREATE FUNCTION statement or a specific name that DB2 generated. If
you use the same code to implement multiple versions of a user-defined function,
you can use this parameter to determine which version of the function the invoker
wants to execute.
DB2 allocates a 70-byte buffer for this area and passes you the buffer address in
the parameter list. Ensure that you do not write more than 70 bytes to the buffer. At
least the first 17 bytes of the value you put in the buffer appear in the SQLERRMC
field of the SQLCA that is returned to the invoker. The exact number of bytes
depends on the number of other tokens in SQLERRMC. Do not use X'FF' in your
diagnostic message. DB2 uses this value to delimit tokens.
Call type: For a user-defined scalar function, if the definer specified FINAL CALL in
the CREATE FUNCTION statement, DB2 passes this parameter to the user-defined
function. For a user-defined table function, DB2 always passes this parameter to
the user-defined function.
On entry to a user-defined scalar function, the call type parameter has one of the
following values:
-1 This is the first call to the user-defined function for the SQL statement. For
a first call, all input parameters are passed to the user-defined function. In
addition, the scratchpad, if allocated, is set to binary zeros.
0 This is a normal call. For a normal call, all the input parameters are passed
to the user-defined function. If a scratchpad is also passed, DB2 does not
modify it.
1 This is a final call. For a final call, no input parameters are passed to the
user-defined function. If a scratchpad is also passed, DB2 does not modify
it.
This type of final call occurs when the invoking application explicitly closes
a cursor. When a value of 1 is passed to a user-defined function, the
user-defined function can execute SQL statements.
255 This is a final call. For a final call, no input parameters are passed to the
user-defined function. If a scratchpad is also passed, DB2 does not modify
it.
This type of final call occurs when the invoking application executes a
COMMIT or ROLLBACK statement, or when the invoking application
abnormally terminates. When a value of 255 is passed to the user-defined
function, the user-defined function cannot execute any SQL statements,
except for CLOSE CURSOR. If the user-defined function executes any
close cursor statements during this type of final call, the user-defined
function should tolerate SQLCODE -501 because DB2 might have already
closed cursors before the final call.
During the first call, your user-defined scalar function should acquire any system
resources it needs. During the final call, the user-defined scalar function should
release any resources it acquired during the first call. The user-defined scalar
function should return a result value only during normal calls. DB2 ignores any
results that are returned during a final call. However, the user-defined scalar
function can set the SQLSTATE and diagnostic message area during the final call.
If an invoking SQL statement contains more than one user-defined scalar function,
and one of those user-defined functions returns an error SQLSTATE, DB2 invokes
all of the user-defined functions for a final call, and the invoking SQL statement
receives the SQLSTATE of the first user-defined function with an error.
On entry to a user-defined table function, the call type parameter has one of the
following values:
-2 This is the first call to the user-defined function for the SQL statement. A
first call occurs only if the FINAL CALL keyword is specified in the
During a fetch call, the user-defined table function should return a row. If the

user-defined function has no more rows to return, it should set the SQLSTATE to
02000.
During the close call, a user-defined table function can set the SQLSTATE and
diagnostic message area.
Assembler: Figure 121 shows the parameter conventions for a user-defined scalar
function that is written as a main program that receives two parameters and returns
one result. For an assembler language user-defined function that is a subprogram,
the conventions are the same. In either case, you must include the CEEENTRY and
CEEEXIT macros.
C or C++: For subprograms, you pass the parameters directly. For main programs,
you use the standard argc and argv variables to access the input and output
parameters:
Figure 122 shows the parameter conventions for a user-defined scalar function that
is written as a main program that receives two parameters and returns one result.
#include <stdlib.h>
#include <stdio.h>
main(argc,argv)
int argc;
char *argv[];
{
/***************************************************/
/* Assume that the user-defined function invocation*/
/* included 2 input parameters in the parameter */
/* list. Also assume that the definition includes */
/* the SCRATCHPAD, FINAL CALL, and DBINFO options, */
/* so DB2 passes the scratchpad, calltype, and */
/* dbinfo parameters. */
/* The argv vector contains these entries: */
/* argv[0] 1 load module name */
/* argv[1-2] 2 input parms */
/* argv[3] 1 result parm */
/* argv[4-5] 2 null indicators */
/* argv[6] 1 result null indicator */
/* argv[7] 1 SQLSTATE variable */
/* argv[8] 1 qualified func name */
/* argv[9] 1 specific func name */
/* argv[10] 1 diagnostic string */
/* argv[11] 1 scratchpad */
/* argv[12] 1 call type */
/* argv[13] + 1 dbinfo */
/* ------ */
/* 14 for the argc variable */
/***************************************************/
if (argc != 14)
{
.
.
.
/**********************************************************/
/* This section would contain the code executed if the */
/* user-defined function is invoked with the wrong number */
/* of parameters. */
/**********************************************************/
}
Figure 122. How a C or C++ user-defined function that is written as a main program receives
parameters (Part 1 of 2)
/***************************************************/
/* Access the null indicator for the first */
/* parameter on the invoked user-defined function */
/* as follows: */
/***************************************************/
short int ind1;
ind1 = *(short int *) argv[4];
/***************************************************/
/* Use the following expression to assign */
/* ’xxxxx’ to the SQLSTATE returned to caller on */
/* the SQL statement that contains the invoked */
/* user-defined function. */
/***************************************************/
strcpy(argv[7],"xxxxx\0");
/***************************************************/
/* Obtain the value of the qualified function */
/* name with this expression. */
/***************************************************/
char f_func[258];
strcpy(f_func,argv[8]);
/***************************************************/
/* Obtain the value of the specific function */
/* name with this expression. */
/***************************************************/
char f_spec[129];
strcpy(f_spec,argv[9]);
/***************************************************/
/* Use the following expression to assign */
/* ’yyyyyyyy’ to the diagnostic string returned */
/* in the SQLCA associated with the invoked */
/* user-defined function. */
/***************************************************/
strcpy(argv[10],"yyyyyyyy\0");
/***************************************************/
/* Use the following expression to assign the */
/* result of the function. */
/***************************************************/
char l_result[11];
/* ... set l_result to the function result ... */
strcpy(argv[3],l_result);
.
.
.
}
Figure 122. How a C or C++ user-defined function that is written as a main program receives
parameters (Part 2 of 2)
Figure 123 on page 319 shows the parameter conventions for a user-defined scalar
function written as a C subprogram that receives two parameters and returns one
result.
l_p1 = *parm1;
strcpy(l_p2,parm2);
l_ind1 = *f_ind1;
l_ind2 = *f_ind2;
strcpy(ludf_sqlstate,udf_sqlstate);
strcpy(ludf_fname,udf_fname);
strcpy(ludf_specname,udf_specname);
l_udf_call_type = *udf_call_type;
strcpy(ludf_msgtext,udf_msgtext);
memcpy(&ludf_scratchpad,udf_scratchpad,sizeof(ludf_scratchpad));
memcpy(&ludf_dbinfo,udf_dbinfo,sizeof(ludf_dbinfo));
.
.
.
}
Figure 123. How a C language user-defined function that is written as a subprogram receives
parameters
Figure 124 on page 320 shows the parameter conventions for a user-defined scalar
function that is written as a C++ subprogram that receives two parameters and
returns one result. This example demonstrates that you must use an extern "C"
modifier to indicate that you want the C++ subprogram to receive parameters
according to the C linkage convention. This modifier is necessary because the
CEEPIPI CALL_SUB interface, which DB2 uses to call the user-defined function,
passes parameters using the C linkage convention.
{
/***************************************************/
/* Define local copies of parameters. */
/***************************************************/
int l_p1;
char l_p2[11];
short int l_ind1;
short int l_ind2;
char ludf_sqlstate[6]; /* SQLSTATE */
char ludf_fname[138]; /* function name */
char ludf_specname[129]; /* specific function name */
char ludf_msgtext[71]; /* diagnostic message text*/
sqludf_scratchpad ludf_scratchpad; /* scratchpad */
long l_udf_call_type; /* call type */
sqludf_dbinfo ludf_dbinfo; /* dbinfo */
/***************************************************/
/* Copy each of the parameters in the parameter */
/* list into a local variable to demonstrate */
/* how the parameters can be referenced. */
/***************************************************/
l_p1 = *parm1;
strcpy(l_p2,parm2);
l_ind1 = *f_ind1;
l_ind2 = *f_ind2;
strcpy(ludf_sqlstate,udf_sqlstate);
strcpy(ludf_fname,udf_fname);
strcpy(ludf_specname,udf_specname);
l_udf_call_type = *udf_call_type;
strcpy(ludf_msgtext,udf_msgtext);
memcpy(&ludf_scratchpad,udf_scratchpad,sizeof(ludf_scratchpad));
memcpy(&ludf_dbinfo,udf_dbinfo,sizeof(ludf_dbinfo));
.
.
.
}
Figure 124. How a C++ user-defined function that is written as a subprogram receives
parameters
COBOL: Figure 125 on page 321 shows the parameter conventions for a
user-defined table function that is written as a main program that receives two
parameters and returns two results. For a COBOL user-defined function that is a
subprogram, the conventions are the same.
PL/I: Figure 126 shows the parameter conventions for a user-defined scalar
function that is written as a main program that receives two parameters and returns
one result. For a PL/I user-defined function that is a subprogram, the conventions
are the same.
*PROCESS SYSTEM(MVS);
MYMAIN: PROC(UDF_PARM1, UDF_PARM2, UDF_RESULT,
UDF_IND1, UDF_IND2, UDF_INDR,
UDF_SQLSTATE, UDF_NAME, UDF_SPEC_NAME,
UDF_DIAG_MSG, UDF_SCRATCHPAD,
UDF_CALL_TYPE, UDF_DBINFO)
OPTIONS(MAIN NOEXECOPS REENTRANT);
Table 41 shows information that you need when you use special registers in a
user-defined function.
Table 41. Characteristics of special registers in a user-defined function
Special register            Initial value when INHERIT    Initial value when DEFAULT    Function can use
                            SPECIAL REGISTERS option is   SPECIAL REGISTERS option is   SET to modify?
                            specified                     specified

CURRENT CLIENT_ACCTNG       Inherited from invoking       Inherited from invoking       Not applicable (note 5)
                            application                   application
CURRENT CLIENT_APPLNAME     Inherited from invoking       Inherited from invoking       Not applicable (note 5)
                            application                   application
CURRENT CLIENT_USERID       Inherited from invoking       Inherited from invoking       Not applicable (note 5)
                            application                   application
CURRENT CLIENT_WRKSTNNAME   Inherited from invoking       Inherited from invoking       Not applicable (note 5)
                            application                   application
Notes:
1. If the ENCODING bind option is not specified, the initial value is the value that was
specified in field APPLICATION ENCODING of installation panel DSNTIPF.
2. If the user-defined function is invoked within the scope of a trigger, DB2 uses the
timestamp for the triggering SQL statement as the timestamp for all SQL statements in
the function package.
3. DB2 allows parallelism at only one level of a nested SQL statement. If you set the value
of the CURRENT DEGREE special register to ANY, and parallelism is disabled, DB2
ignores the CURRENT DEGREE value.
4. If the user-defined function definer specifies a value for COLLID in the CREATE
FUNCTION statement, DB2 sets CURRENT PACKAGESET to the value of COLLID.
5. Not applicable because no SET statement exists for the special register.
6. If a program within the scope of the invoking application issues a SET statement for the
special register before the user-defined function is invoked, the special register inherits
the value from the SET statement. Otherwise, the special register contains the value that
is set by the bind option for the user-defined function package.
7. If a program within the scope of the invoking application issues a SET CURRENT SQLID
statement before the user-defined function is invoked, the special register inherits the
value from the SET statement. Otherwise, CURRENT SQLID contains the authorization
ID of the application process.
8. If the user-defined function package uses a value other than RUN for the
DYNAMICRULES bind option, the SET CURRENT SQLID statement can be executed but
does not affect the authorization ID that is used for the dynamic SQL statements in the
user-defined function package. The DYNAMICRULES value determines the authorization
ID that is used for dynamic SQL statements. See “Using DYNAMICRULES to specify
behavior of dynamic SQL statements” on page 479 for more information about
DYNAMICRULES values and authorization IDs.
The scratchpad consists of a 4-byte length field, followed by the scratchpad area.
The definer can specify the length of the scratchpad area in the CREATE
FUNCTION statement. The specified length does not include the length field. The
default size is 100 bytes. DB2 initializes the scratchpad for each function to binary
zeros at the beginning of execution for each subquery of an SQL statement and
does not examine or change the content thereafter. On each invocation of the
user-defined function, DB2 passes the scratchpad to the user-defined function. You
can therefore use the scratchpad to preserve information between invocations of a
reentrant user-defined function.
Figure 127 on page 328 demonstrates how to enter information in a scratchpad for
a user-defined function defined like this:
CREATE FUNCTION COUNTER()
RETURNS INT
SCRATCHPAD
FENCED
The scratchpad length is not specified, so the scratchpad has the default length of
100 bytes, plus 4 bytes for the length field. The user-defined function increments an
integer value and stores it in the scratchpad on each execution.
#pragma linkage(ctr,fetchable)
#include <stdlib.h>
#include <stdio.h>
/* Structure scr defines the passed scratchpad for function ctr */
struct scr {
long len;
long countr;
char not_used[96];
};
/***************************************************************/
/* Function ctr: Increments a counter and reports the value */
/* from the scratchpad. */
/* */
/* Input: None */
/* Output: INTEGER out the value from the scratchpad */
/***************************************************************/
void ctr(
long *out, /* Output answer (counter) */
short *outnull, /* Output null indicator */
char *sqlstate, /* SQLSTATE */
char *funcname, /* Function name */
char *specname, /* Specific function name */
char *mesgtext, /* Message text insert */
struct scr *scratchptr) /* Scratchpad */
{
*out = ++scratchptr->countr; /* Increment counter and */
/* copy to output variable */
*outnull = 0; /* Set output null indicator*/
return;
}
/* end of user-defined function ctr */
To access transition tables in a user-defined function, use table locators, which are
pointers to the transition tables. You declare table locators as input parameters in
the CREATE FUNCTION statement using the TABLE LIKE table-name AS
LOCATOR clause. See Chapter 5 of DB2 SQL Reference for more information.
The five basic steps to accessing transition tables in a user-defined function are:
1. Declare input parameters to receive table locators. You must define each
parameter that receives a table locator as an unsigned 4-byte integer.
The following examples show how a user-defined function that is written in C, C++,
COBOL, or PL/I accesses a transition table for a trigger. The transition table,
NEWEMP, contains modified rows of the employee sample table. The trigger is
defined like this:
CREATE TRIGGER EMPRAISE
AFTER UPDATE ON EMP
REFERENCING NEW TABLE AS NEWEMPS
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
VALUES (CHECKEMP(TABLE NEWEMPS));
END;
Assembler: Figure 128 on page 330 shows how an assembler program accesses
rows of transition table NEWEMPS.
Figure 128. How an assembler user-defined function accesses a transition table
C or C++: Figure 129 shows how a C or C++ program accesses rows of transition
table NEWEMPS.
COBOL: Figure 130 on page 332 shows how a COBOL program accesses rows of
transition table NEWEMPS.
PL/I: Figure 131 on page 333 shows how a PL/I program accesses rows of
transition table NEWEMPS.
| When the primary program of a user-defined function calls another program, DB2
| uses the CURRENT PACKAGE PATH special register to determine the list of
| collections to search for the called program’s package. The primary program can
| change this collection ID by executing the statement SET CURRENT PACKAGE
| PATH.
| If the value of CURRENT PACKAGE PATH is blank, DB2 uses the CURRENT
| PACKAGESET special register to determine the collection to search for the called
| program’s package. The primary program can change this value by executing the
| statement SET CURRENT PACKAGESET.
To maximize the number of user-defined functions and stored procedures that can
run concurrently, follow these preparation recommendations:
v Ask the system administrator to set the region size parameter in the startup
procedures for the WLM-established stored procedures address spaces to
REGION=0. This lets an address space obtain the largest possible amount of
storage below the 16-MB line.
v Limit storage required by application programs below the 16-MB line by:
– Link-editing programs with the AMODE(31) and RMODE(ANY) attributes
– Compiling COBOL programs with the RES and DATA(31) options
v Limit storage that is required by Language Environment by using these run-time
options:
HEAP(,,ANY)       Allocates program heap storage above the 16-MB line
STACK(,,ANY,)     Allocates program stack storage above the 16-MB line
STORAGE(,,,4K)    Reduces the reserve storage area below the line to 4 KB
BELOWHEAP(4K,,)   Reduces the heap storage below the line to 4 KB
LIBSTACK(4K,,)    Reduces the library stack below the line to 4 KB
ALL31(ON)         Causes all programs contained in the external user-defined
                  function to execute with AMODE(31) and RMODE(ANY)
The definer can list these options as values of the RUN OPTIONS parameter of
CREATE FUNCTION, or the system administrator can establish these options as
defaults during Language Environment installation.
For example, the RUN OPTIONS option parameter could contain:
H(,,ANY),STAC(,,ANY,),STO(,,,4K),BE(4K,,),LIBS(4K,,),ALL31(ON)
v Ask the system administrator to set the NUMTCB parameter for WLM-established
stored procedures address spaces to a value greater than 1. This lets more than
one TCB run in an address space. Be aware that setting NUMTCB to a value
greater than 1 also reduces your level of application program isolation. For
example, a bad pointer in one application can overwrite memory that is allocated
by another application.
Debug Tool for z/OS: You can use the Debug Tool for z/OS, which works with
Language Environment, to test DB2 UDB for z/OS user-defined functions written in
any of the supported languages. You can use the Debug Tool either interactively or
in batch mode.
This should be the first command that you enter from the terminal or include in
your commands file.
Using the Debug Tool in batch mode: To test your user-defined function in batch
mode, you must have the Debug Tool installed on the z/OS system where the
user-defined function runs. To debug your user-defined function in batch mode
using the Debug Tool, do the following:
1. If you plan to use the Language Environment run-time TEST option to invoke
the Debug Tool, compile the user-defined function with the TEST option. This
places information in the program that the Debug Tool uses during a debugging
session.
2. Allocate a log data set to receive the output from the Debug Tool. Put a DD
statement for the log data set in the startup procedure for the stored procedures
address space.
3. Enter commands in a data set that you want the Debug Tool to execute. Put a
DD statement for that data set in the startup procedure for the stored
procedures address space. To define the data set that contains the commands
to the Debug Tool, specify its data set name or DD name in the TEST run-time
option. For example, this option tells the Debug Tool to look for the commands
in the data set that is associated with DD name TESTDD:
For more information about the Debug Tool, see Debug Tool User's Guide and
Reference.
Driver applications: You can write a small driver application that calls the
user-defined function as a subprogram and passes the parameter list for the
user-defined function. You can then test and debug the user-defined function as a
normal DB2 application under TSO. You can then use TSO TEST and other
commonly used debugging tools.
Using SQL INSERT statements: You can use SQL to insert debugging information
into a DB2 table. This allows other machines in the network (such as a workstation)
to easily access the data in the table using DRDA access.
DB2 discards the debugging information if the application executes the ROLLBACK
statement. To prevent the loss of the debugging data, code the calling application
so that it retrieves the diagnostic data before executing the ROLLBACK statement.
See “Defining a user-defined function” on page 296 and Chapter 5 of DB2 SQL
Reference for a description of the parameters that you can specify in the CREATE
FUNCTION statement for an SQL scalar function.
To prepare an SQL scalar function for execution, you execute the CREATE
FUNCTION statement, either statically or dynamically.
See the following sections for details you should know before you invoke a
user-defined function:
v “Syntax for user-defined function invocation”
v “Ensuring that DB2 executes the intended user-defined function” on page 339
v “Casting of user-defined function arguments” on page 345
v “What happens when a user-defined function abnormally terminates” on page
346
function-name( [ ALL | DISTINCT ] { expression | TABLE transition-table-name } , ... )
Use the syntax shown in Figure 133 on page 339 when you invoke a table function:
TABLE ( function-name ( { expression | TABLE transition-table-name } , ... ) )
   [ AS ] correlation-name [ ( column-name , ... ) ]
See Chapter 2 of DB2 SQL Reference for more information about the syntax of
user-defined function invocation.
The remainder of this section discusses details of the function resolution process
and gives suggestions on how you can ensure that DB2 picks the right function.
To determine whether a data type is promotable to another data type, see Table 42.
The first column lists data types in function invocations. The second column lists
data types to which the types in the first column can be promoted, in order from
best fit to worst fit. For example, suppose that in this statement, the data type of A
is SMALLINT:
SELECT USER1.ADDTWO(A) FROM TABLEA;
Notes:
1. This promotion also applies if the parameter type in the invocation is a LOB locator for a
LOB with this data type.
2. The FLOAT type with a length of less than 22 is equivalent to REAL.
3. The FLOAT type with a length of greater than or equal to 22 is equivalent to DOUBLE.
If the data types of all parameters in a function instance are the same as those in
the function invocation, that function instance is a best fit. If no exact match exists,
DB2 compares data types in the parameter lists from left to right, using this method:
1. DB2 compares the data types of the first parameter in the function invocation to
the data type of the first parameter in each function instance.
If the first parameter in the invocation is an untyped parameter marker, DB2
does not do the comparison.
2. For the first parameter, if one function instance has a data type that fits the
function invocation better than the data types in the other instances, that
function is a best fit. Table 42 on page 341 shows the possible fits for each data
type, in best-to-worst order.
3. If the data types of the first parameter are the same for all function instances, or
if the first parameter in the function invocation is an untyped parameter marker,
DB2 repeats this process for the next parameter. DB2 continues this process for
each parameter until it finds a best fit.
Candidate 2:
CREATE FUNCTION FUNC(VARCHAR(20),REAL,DOUBLE)
RETURNS DECIMAL(9,2)
EXTERNAL NAME ’FUNC2’
| PARAMETER STYLE SQL
LANGUAGE COBOL;
DB2 compares the data type of the first parameter in the user-defined function
invocation to the data types of the first parameters in the candidate functions.
Because the first parameter in the invocation has data type VARCHAR, and both
candidate functions also have data type VARCHAR, DB2 cannot determine the
better candidate based on the first parameter. Therefore, DB2 compares the data
types of the second parameters.
The data type of the second parameter in the invocation is SMALLINT. INTEGER,
which is the data type of candidate 1, is a better fit to SMALLINT than REAL, which
is the data type of candidate 2. Therefore, candidate 1 is the DB2 choice for
execution.
When you invoke a user-defined function that is sourced on another function, DB2
casts your parameters to the data types and lengths of the sourced function.
The following example demonstrates what happens when the parameter definitions
of a sourced function differ from those of the function on which it is sourced.
Now suppose that PRICE2 has the DECIMAL(9,2) value 0001234.56. DB2 must
first assign this value to the data type of the input parameter in the definition of
TAXFN2, which is DECIMAL(8,2). The input parameter value then becomes
001234.56. Next, DB2 casts the parameter value to a source function parameter,
which is DECIMAL(6,0). The parameter value then becomes 001234. (When you
cast a value, that value is truncated, rather than rounded.)
Now, if TAXFN1 returns the DECIMAL(5,2) value 123.45, DB2 casts the value to
DECIMAL(5,0), which is the result type for TAXFN2, and the value becomes 00123.
This is the value that DB2 assigns to column SALESTAX2 in the UPDATE
statement.
You should include code in your program to check for a user-defined function abend
and to roll back the unit of work that contains the user-defined function invocation.
Although trigger activations count in the levels of SQL statement nesting, the
previous restrictions on SQL statements do not apply to SQL statements that are
executed in the trigger body. For example, suppose that trigger TR1 is defined on
table T1:
CREATE TRIGGER TR1
AFTER INSERT ON T1
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
UPDATE T1 SET C1=1;
END
Now suppose that you execute this SQL statement at level 1 of nesting:
INSERT INTO T1 VALUES(...);
Although the UPDATE statement in the trigger body is at level 2 of nesting and
modifies the same table that the triggering statement updates, DB2 can execute the
INSERT statement successfully.
The access path that DB2 chooses for a predicate determines whether a
user-defined function in that predicate is executed. To ensure that DB2 executes the
external action for each row of the result set, put the user-defined function
invocation in the SELECT list.
The results can differ even more, depending on the order in which DB2 retrieves
the rows from the table. Suppose that an ascending index is defined on column C2.
Then DB2 retrieves row 3 first, row 1 second, and row 2 third. This means that row
1 satisfies the predicate WHERE COUNTER()=2. The value of COUNTER in the
select list is again 1, so the result of the query in this case is:
COUNTER() C1 C2
--------- -- --
1 1 b
A similar situation occurs with scrollable cursors and nondeterministic functions. The
result of a nondeterministic user-defined function can be different each time you
execute the user-defined function. If the select list of a scrollable cursor contains a
nondeterministic user-defined function, and you use that cursor to retrieve the same
row multiple times, the results can differ each time you retrieve the row.
For more information on LOB data, see Chapter 14, “Programming for large objects
(LOBs),” on page 281.
After you define distinct types and columns of those types, you can use those data
types in the same way you use built-in types. You can use the data types in
assignments, comparisons, function invocations, and stored procedure calls.
However, when you assign one column value to another or compare two column
values, those values must be of the same distinct type. For example, you must
assign a column value of type VIDEO to a column of type VIDEO, and you can
compare a column value of type AUDIO only to a column of type AUDIO. When you
assign a host variable value to a column with a distinct type, you can use any host
data type that is compatible with the source data type of the distinct type. For
example, to receive an AUDIO or VIDEO value, you can define a host variable like
this:
SQL TYPE IS BLOB (1M) HVAV;
For example, if you have defined a user-defined function to convert U.S. dollars to
euro currency, you do not want anyone to use this same user-defined function to
convert Japanese Yen to euros because the U.S. dollars to euros function returns
the wrong amount. Suppose you define three distinct types:
| CREATE DISTINCT TYPE US_DOLLAR AS DECIMAL(9,2);
| CREATE DISTINCT TYPE EURO AS DECIMAL(9,2);
| CREATE DISTINCT TYPE JAPANESE_YEN AS DECIMAL(9,2);
DB2 does not let you compare data of a distinct type directly to data of its source
type. However, you can compare a distinct type to its source type by using a cast
function.
For example, suppose you want to know which products sold more than
$100,000.00 in the US in the month of July in 2003 (7/03). Because you cannot
compare data of type US_DOLLAR with instances of data of the source type of
US_DOLLAR (DECIMAL) directly, you must use a cast function to cast data from
DECIMAL to US_DOLLAR or from US_DOLLAR to DECIMAL. Whenever you
create a distinct type, DB2 creates two cast functions, one to cast from the source
type to the distinct type and the other to cast from the distinct type to the source
type. For distinct type US_DOLLAR, DB2 creates a cast function called DECIMAL
and a cast function called US_DOLLAR. When you compare an object of type
US_DOLLAR to an object of type DECIMAL, you can use one of those cast
functions to make the data types identical for the comparison. Suppose table
US_SALES is defined like this:
CREATE TABLE US_SALES
(PRODUCT_ITEM INTEGER,
MONTH INTEGER CHECK (MONTH BETWEEN 1 AND 12),
YEAR INTEGER CHECK (YEAR > 1990),
TOTAL US_DOLLAR);
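For example, a query for the scenario above might look like this sketch, which casts the DECIMAL constant to US_DOLLAR before the comparison (the predicate values are taken from the scenario):

```sql
SELECT PRODUCT_ITEM
  FROM US_SALES
  WHERE TOTAL > US_DOLLAR(100000.00)
    AND MONTH = 7
    AND YEAR = 2003;
```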
The casting satisfies the requirement that the compared data types are identical.
You cannot use host variables in statements that you prepare for dynamic
execution. As explained in “Using parameter markers with PREPARE and
EXECUTE” on page 548, you can substitute parameter markers for host variables
when you prepare a statement, and then use host variables when you execute the
statement.
If you use a parameter marker in a predicate of a query, and the column to which
you compare the value represented by the parameter marker is of a distinct type,
you must cast the parameter marker to the distinct type, or cast the column to its
source type.
For example, suppose that distinct type CNUM is defined like this:
| CREATE DISTINCT TYPE CNUM AS INTEGER;
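Assuming a CUSTOMER table in which column CUST_NUM has distinct type CNUM, one option is to cast the column to its source type, so that the comparison with the parameter marker is between INTEGER values:

```sql
SELECT FIRST_NAME, LAST_NAME, PHONE_NUM FROM CUSTOMER
  WHERE CAST(CUST_NUM AS INTEGER) = ?
```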
Alternatively, you can cast the parameter marker to the distinct type:
SELECT FIRST_NAME, LAST_NAME, PHONE_NUM FROM CUSTOMER
WHERE CUST_NUM = CAST (? AS CNUM)
If you need to assign a value of one distinct type to a column of another distinct
type, a function must exist that converts the value from one type to another.
Because DB2 provides cast functions only between distinct types and their source
types, you must write the function to convert from one distinct type to another.
You need to insert values from the TOTAL column in JAPAN_SALES into the
TOTAL column of JAPAN_SALES_03. Because INSERT statements follow
assignment rules, DB2 does not let you insert the values directly from one column
to the other.
You can assign a column value of a distinct type to a host variable if you can assign
a column value of the distinct type’s source type to the host variable. In the
following example, you can assign SIZECOL1 and SIZECOL2, which have distinct
type SIZE, to host variables of type double and short because the source type of
SIZE, which is INTEGER, can be assigned to host variables of type double or short.
EXEC SQL BEGIN DECLARE SECTION;
double hv1;
short hv2;
EXEC SQL END DECLARE SECTION;
CREATE DISTINCT TYPE SIZE AS INTEGER;
CREATE TABLE TABLE1 (SIZECOL1 SIZE, SIZECOL2 SIZE);
.
.
.
SELECT SIZECOL1, SIZECOL2
INTO :hv1, :hv2
FROM TABLE1;
In this example, values of host variable hv2 can be assigned to columns SIZECOL1
and SIZECOL2, because C data type short is equivalent to DB2 data type
SMALLINT, and SMALLINT is promotable to data type INTEGER. However, values
of hv1 cannot be assigned to SIZECOL1 and SIZECOL2, because C data type
double, which is equivalent to DB2 data type DOUBLE, is not promotable to data
type INTEGER.
EXEC SQL BEGIN DECLARE SECTION;
double hv1;
short hv2;
EXEC SQL END DECLARE SECTION;
CREATE DISTINCT TYPE SIZE AS INTEGER;
CREATE TABLE TABLE1 (SIZECOL1 SIZE, SIZECOL2 SIZE);
.
.
.
INSERT INTO TABLE1
VALUES (:hv1,:hv1); /* Invalid statement */
INSERT INTO TABLE1
VALUES (:hv2,:hv2); /* Valid statement */
Because the result type of both US_DOLLAR functions is US_DOLLAR, you have
satisfied the requirement that the distinct types of the combined columns are the
same.
The HOUR function takes only the TIME or TIMESTAMP data type as an argument,
so you need a sourced function that is based on the HOUR function that accepts
the FLIGHT_TIME data type. You might declare a function like this:
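(The following is a sketch; it assumes that FLIGHT_TIME is a distinct type that is based on the TIME data type.)

```sql
CREATE FUNCTION HOUR(FLIGHT_TIME)
  RETURNS INTEGER
  SOURCE SYSIBM.HOUR(TIME);
```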
Example: Using an infix operator with distinct type arguments: Suppose you
want to add two values of type US_DOLLAR. Before you can do this, you must
define a version of the + function that accepts values of type US_DOLLAR as
operands:
CREATE FUNCTION "+"(US_DOLLAR,US_DOLLAR)
RETURNS US_DOLLAR
SOURCE SYSIBM."+"(DECIMAL(9,2),DECIMAL(9,2));
Because the US_DOLLAR type is based on the DECIMAL(9,2) type, the source
function must be the version of + with arguments of type DECIMAL(9,2).
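With this sourced version of "+" in place, you can add two US_DOLLAR values directly. For example, using the US_SALES table shown earlier (the added constant is an assumption for illustration):

```sql
SELECT PRODUCT_ITEM, TOTAL + US_DOLLAR(5.00)
  FROM US_SALES;
```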
This means that EURO_TO_US accepts only the EURO type as input. Therefore, if
you want to call EURO_TO_US with a constant or host variable argument, you must
cast that argument to distinct type EURO:
SELECT * FROM US_SALES
WHERE TOTAL = EURO_TO_US(EURO(:H1));
SELECT * FROM US_SALES
WHERE TOTAL = EURO_TO_US(EURO(10000));
Suppose that you keep electronic mail documents that are sent to your company in
a DB2 table. The DB2 data type of an electronic mail document is a CLOB, but you
define it as a distinct type so that you can control the types of operations that are
performed on the electronic mail. The distinct type is defined like this:
CREATE DISTINCT TYPE E_MAIL AS CLOB(5M);
The table that contains the electronic mail documents is defined like this:
CREATE TABLE DOCUMENTS
(LAST_UPDATE_TIME TIMESTAMP,
DOC_ROWID ROWID NOT NULL GENERATED ALWAYS,
A_DOCUMENT E_MAIL);
Because the table contains a column with a source data type of CLOB, the table
requires an associated LOB table space, auxiliary table, and index on the auxiliary
table. Use statements like this to define the LOB table space, the auxiliary table,
and the index:
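A sketch of those statements follows; the names DOCTSLOB, DOC_AUX_TABLE, and A_IX_DOC are assumed names for illustration:

```sql
-- LOB table space to hold the E_MAIL column data
CREATE LOB TABLESPACE DOCTSLOB;
-- Auxiliary table that stores the LOB column of DOCUMENTS
CREATE AUXILIARY TABLE DOC_AUX_TABLE
  IN DOCTSLOB
  STORES DOCUMENTS COLUMN A_DOCUMENT;
-- Required index on the auxiliary table (no column list is specified)
CREATE UNIQUE INDEX A_IX_DOC ON DOC_AUX_TABLE;
```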
To populate the document table, you write code that executes an INSERT statement
to put the first part of a document in the table, and then executes multiple UPDATE
statements to concatenate the remaining parts of the document. For example:
EXEC SQL BEGIN DECLARE SECTION;
char hv_current_time[26];
SQL TYPE IS CLOB (1M) hv_doc;
EXEC SQL END DECLARE SECTION;
/* Determine the current time and put this value */
/* into host variable hv_current_time. */
/* Read up to 1 MB of document data from a file */
/*    into host variable hv_doc.                  */
.
.
.
/* Insert the time value and the first 1 MB of */
/* document data into the table. */
EXEC SQL INSERT INTO DOCUMENTS
VALUES(:hv_current_time, DEFAULT, E_MAIL(:hv_doc));
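A subsequent UPDATE statement might look like the following sketch, which appends the next piece of document data that has been read into hv_doc. Because the concatenation operator applies to the source type rather than to the distinct type, the stored value is cast to CLOB, concatenated, and cast back to E_MAIL:

```sql
EXEC SQL UPDATE DOCUMENTS
  SET A_DOCUMENT = E_MAIL(CAST(A_DOCUMENT AS CLOB(5M)) || :hv_doc)
  WHERE LAST_UPDATE_TIME = :hv_current_time;
```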
Now that the data is in the table, you can execute queries to learn more about the
documents. For example, you can execute this query to determine which
documents contain the word "performance":
SELECT SENDER(A_DOCUMENT), SENDING_DATE(A_DOCUMENT),
SUBJECT(A_DOCUMENT)
FROM DOCUMENTS
WHERE CONTAINS(A_DOCUMENT,'performance') = 1;
Because the electronic mail documents can be very large, you might want to use
LOB locators to manipulate the document data instead of fetching all of a document
into a host variable. You can use a LOB locator on any distinct type that is defined
on one of the LOB types. The following example shows how you can cast a LOB
locator as a distinct type, and then use the result in a user-defined function that
takes a distinct type as an argument:
EXEC SQL BEGIN DECLARE SECTION;
long hv_len;
char hv_subject[200];
SQL TYPE IS CLOB_LOCATOR hv_email_locator;
EXEC SQL END DECLARE SECTION;
.
.
.
/* Select a document into a CLOB locator. */
EXEC SQL SELECT A_DOCUMENT, SUBJECT(A_DOCUMENT)
INTO :hv_email_locator, :hv_subject
FROM DOCUMENTS
WHERE LAST_UPDATE_TIME = :hv_current_time;
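To complete the example, a sketch of using the locator with a user-defined function that takes the distinct type: the CLOB locator is cast to E_MAIL, and the result of the SUBJECT function is assigned to a host variable:

```sql
/* Extract the subject by casting the locator to E_MAIL. */
EXEC SQL SET :hv_subject = SUBJECT(E_MAIL(:hv_email_locator));
```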
For specific information about accomplishing the steps in program preparation, see
Chapter 21, “Preparing an application program to run,” on page 453.
Figure 135 on page 364 illustrates the program preparation process when you use
the DB2 precompiler. After you process SQL statements in your source program
| using the DB2 precompiler, you create a load module, possibly one or more
packages, and an application plan. Creating a load module involves compiling the
| modified source code that is produced by the precompiler into an object program,
| and link-editing the object program to create a load module. Creating a package or
an application plan, a process unique to DB2, involves binding one or more
| DBRMs, which are created by the DB2 precompiler, using the BIND PACKAGE or
| BIND PLAN commands.
Figure 136 on page 365 illustrates the program preparation process when you use
| an SQL statement coprocessor. The process is similar to the process used with the
| DB2 precompiler, except that the SQL statement coprocessor does not create
| modified source for your application program.
A few options, however, can affect the way that you write your program. For
example, you need to know if you are using NOFOR or STDSQL(YES) before you
begin coding.
Before you begin writing your program, review the list of options in Table 63 on
page 462. You can specify any of those options whether you use the DB2
precompiler or an SQL statement coprocessor. However, the SQL statement
coprocessor might ignore certain options because there are host language compiler
options that provide the same information.
Binding or rebinding a package or plan in use: Packages and plans are locked
when you bind or run them. Packages that run under a plan are not locked until the
plan uses them. If you run a plan and some packages in the package list never run,
those packages are never locked.
You cannot bind or rebind a package or a plan while it is running. However, you can
bind a different version of a package that is running.
Options for binding and rebinding: Several of the options of BIND PACKAGE
and BIND PLAN can affect your program design. For example, you can use a bind
option to ensure that a package or plan can run only from a particular CICS
connection or a particular IMS region—you do not need to enforce this in your
code. Several other options are discussed at length in later chapters, particularly
the ones that affect your program’s use of locks, such as the ISOLATION option.
Before you finish reading this chapter, you might want to review those options in
Chapter 2 of DB2 Command Reference.
Input to binding the plan can include DBRMs only, a package list only, or a
combination of the two. When choosing one of those alternatives for your
application, consider the impact of rebinding; see “Planning for changes to your
application” on page 368.
Binding all DBRMs to a plan is suitable for small applications that are unlikely to
change or that require all resources to be acquired when the plan is allocated rather
than when your program first uses them.
Advantages of packages
You must decide how to use packages based on your application design and your
operational objectives. The following are advantages of using packages:
Ease of maintenance: When you use packages, you do not need to bind the entire
plan again when you change one SQL statement. You need to bind only the
package that is associated with the changed SQL statement.
Flexibility in using bind options: The options of BIND PLAN apply to all DBRMs
that are bound directly to the plan. The options of BIND PACKAGE apply only to the
single DBRM that is bound to that package. The package options need not all be
the same as the plan options, and they need not be the same as the options for
other packages that are used by the same plan.
Flexibility in using name qualifiers: You can use a bind option to name a qualifier
for the unqualified object names in SQL statements in a plan or package. By using
packages, you can use different qualifiers for SQL statements in different parts of
your application. By rebinding, you can redirect your SQL statements, for example,
from a test table to a production table.
A change to your program probably invalidates one or more of your packages and
perhaps your entire plan. For some changes, you must bind a new object; for
others, rebinding is sufficient.
v To bind a new plan or package, other than a trigger package, use the
subcommand BIND PLAN or BIND PACKAGE with the option
ACTION(REPLACE).
To bind a new trigger package, recreate the trigger associated with the trigger
package.
v To rebind an existing plan or package, other than a trigger package, use the
REBIND subcommand.
To rebind a trigger package, use the REBIND TRIGGER PACKAGE subcommand.
Table 43 tells which action particular types of change require. For more information
about trigger packages, see “Working with trigger packages” on page 371.
If you want to change the bind options in effect when the plan or package runs,
review the descriptions of those bind options in Part 3 of DB2 Command Reference.
Some options of BIND are not available on REBIND.
A plan or package can also become invalid for reasons that do not depend on
operations in your program (such as when an index is dropped that is used as an
access path by one of your queries). In those cases, DB2 might rebind the plan or
package automatically, the next time it is used. (For details about automatic
rebinding, see “Automatic rebinding” on page 371.)
Table 43. Changes requiring BIND or REBIND
Change made: Drop a table, index, or other object, and recreate the object.
Minimum action necessary: If a table with a trigger is dropped, recreate the
trigger if you recreate the table. Otherwise, no change is required; automatic
rebind is attempted at the next run.
Change made: Revoke an authorization to use an object.
Minimum action necessary: None required; automatic rebind is attempted at the
next run. Automatic rebind fails if authorization is still not available; then
you must issue REBIND for the package or plan.
Change made: Run RUNSTATS to update catalog statistics.
Minimum action necessary: Issue REBIND for the package or plan to possibly
change the access path that DB2 uses.
Change made: Add an index to a table.
Minimum action necessary: Issue REBIND for the package or plan to use the
index.
Dropping objects
If you drop an object that a package depends on, the package might become
invalid for the following reasons:
v If the package is not appended to any running plan, the package becomes
invalid.
v If the package is appended to a running plan, and the drop occurs within that
plan, the package becomes invalid.
However, if the package is appended to a running plan, and the drop occurs
outside of that plan, the object is not dropped, and the package does not become
invalid.
In all cases, the plan does not become invalid unless it has a DBRM that references
the dropped object. If the package or plan becomes invalid, automatic rebind occurs
the next time the package or plan is allocated.
Rebinding a package
Table 44 clarifies which packages are bound, depending on how you specify
collection-id (coll-id), package-id (pkg-id), and version-id (ver-id) on the REBIND
PACKAGE subcommand. For syntax and descriptions of this subcommand, see
Part 3 of DB2 Command Reference.
REBIND PACKAGE does not apply to packages for which you do not have the
BIND privilege. An asterisk (*) used as an identifier for collections, packages, or
versions does not apply to packages at remote sites.
Table 44. Behavior of REBIND PACKAGE specification. "All" means all collections,
packages, or versions at the local DB2 server for which the authorization ID that
issues the command has the BIND privilege.
Input                    Collections affected   Packages affected   Versions affected
Example: The following example shows the options for rebinding a package at the
remote location. The location name is SNTERSA. The collection is GROUP1, the
package ID is PROGA, and the version ID is V1. The connection types shown in
the REBIND subcommand replace connection types that are specified on the
original BIND subcommand. For information about the REBIND subcommand
options, see DB2 Command Reference.
REBIND PACKAGE(SNTERSA.GROUP1.PROGA.(V1)) ENABLE(CICS,REMOTE)
You can use the asterisk on the REBIND subcommand for local packages, but not
for packages at remote sites. Any of the following commands rebinds all versions of
all packages in all collections, at the local DB2 system, for which you have the
BIND privilege.
REBIND PACKAGE (*)
REBIND PACKAGE (*.*)
REBIND PACKAGE (*.*.(*))
Either of the following commands rebinds all versions of all packages in the local
collection LEDGER for which you have the BIND privilege.
REBIND PACKAGE (LEDGER.*)
REBIND PACKAGE (LEDGER.*.(*))
Either of the following commands rebinds the empty string version of the package
DEBIT in all collections, at the local DB2 system, for which you have the BIND
privilege.
REBIND PACKAGE (*.DEBIT)
REBIND PACKAGE (*.DEBIT.())
Rebinding a plan
When you rebind a plan, use the PKLIST keyword to replace any previously
specified package list. Omit the PKLIST keyword to use the previous package list
for rebinding. Use the NOPKLIST keyword to delete any package list that was
specified when the plan was previously bound.
One situation in which this technique is useful is to complete a rebind operation that
has terminated due to lack of resources. A rebind for many objects, such as
REBIND PACKAGE (*) for an ID with SYSADM authority, terminates if a needed
resource becomes unavailable. As a result, some objects are successfully rebound
and others are not. If you repeat the subcommand, DB2 attempts to rebind all the
objects again. But if you generate a rebind subcommand for each object that was
not rebound, and issue those subcommands, DB2 does not repeat any work that
was already done and is not likely to run out of resources.
For a description of the technique and several examples of its use, see Appendix F,
“REBIND subcommands for lists of plans or packages,” on page 1003.
As with any other package, DB2 marks a trigger package invalid when you drop a
table, index, or view on which the trigger package depends. DB2 executes an
automatic rebind the next time the trigger is activated. However, if the automatic
rebind fails, DB2 marks the trigger package as inoperative.
Unlike other packages, a trigger package is freed if you drop the table on which the
trigger is defined, so you can recreate the trigger package only by recreating the
table and the trigger.
You can use the subcommand REBIND TRIGGER PACKAGE to rebind a trigger
package that DB2 has marked as inoperative. You can also use REBIND TRIGGER
PACKAGE to change the option values with which DB2 originally bound the trigger
package. The default values for the options that you can change are:
v CURRENTDATA(YES)
v EXPLAIN(YES)
v FLAG(I)
v ISOLATION(RR)
v IMMEDWRITE(NO)
v RELEASE(COMMIT)
When you run REBIND TRIGGER PACKAGE, you can change only the values of
options CURRENTDATA, EXPLAIN, FLAG, IMMEDWRITE, ISOLATION, and
RELEASE.
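For example, the following subcommand sketch (the collection ID TRGCOLL and package ID TRGPKG are assumed names) rebinds a trigger package and changes its isolation level:

```
REBIND TRIGGER PACKAGE (TRGCOLL.TRGPKG) ISOLATION(CS)
```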
Automatic rebinding
Automatic rebind might occur if an authorized user invokes a plan or package when
the attributes of the data on which the plan or package depends change, or if the
environment in which the package executes changes. Whether the automatic rebind
occurs depends on the value of the field AUTO BIND on installation panel
DSNTIPO. The options used for an automatic rebind are the options used during
the most recent bind process.
In the following cases, DB2 might automatically rebind a plan or package that has
not been marked as invalid:
| v A plan or package that is bound on a release of DB2 that is more recent than the
| release in which it is being run. This can happen in a data sharing environment,
| or it can happen after a DB2 subsystem has fallen back to a previous release of
| DB2.
| v A plan or package that was bound prior to DB2 Version 2 Release 3. Plans and
| packages that are bound prior to Version 2 Release 3 will be automatically
| rebound when they are run on the current release of DB2.
v A plan or package that has a location dependency and runs at a location other
than the one at which it was bound. This can happen when members of a data
sharing group are defined with location names, and a package runs on a different
member from the one on which it was bound.
The SQLCA is not available during automatic rebind. Therefore, if you encounter
lock contention during an automatic rebind, DSNT501I messages cannot
accompany any DSNT376I messages that you receive. To see the matching
DSNT501I messages, you must issue the subcommand REBIND PLAN or REBIND
PACKAGE.
After the basic recommendations, the chapter covers some of the major techniques
that DB2 uses to control concurrency:
v Transaction locks mainly control access by SQL statements. Those locks are
the ones over which you have the most control.
– “Aspects of transaction locks” on page 384 describes the various types of
transaction locks that DB2 uses and how they interact.
– “Lock tuning” on page 390 describes what you can change to control locking.
Your choices include:
- “Bind options” on page 390
- “Isolation overriding with SQL statements” on page 403
- “The statement LOCK TABLE” on page 404
Under those headings, lock (with no qualifier) refers to transaction lock.
v Claims and drains control access by DB2 utilities and commands. For
information about them, see DB2 Administration Guide.
v Physical locks are of concern only if you are using DB2 data sharing. For
information about that, see DB2 Data Sharing: Planning and Administration.
To prevent those situations from occurring unless they are specifically allowed, DB2
might use locks to control concurrency.
What do locks do? A lock associates a DB2 resource with an application process
in a way that affects how other processes can access the same resource. The
process associated with the resource is said to “hold” or “own” the lock. DB2 uses
locks to ensure that no process accesses data that has been changed, but not yet
committed, by another process.
What do you do about locks? To preserve data integrity, your application process
acquires locks implicitly, that is, under DB2 control. It is not necessary for a process
to request a lock explicitly to conceal uncommitted data. Therefore, sometimes you
need not do anything about DB2 locks. Nevertheless, processes acquire, or avoid
acquiring, locks based on certain general parameters. You can make better use of
your resources and improve concurrency by understanding the effects of those
parameters.
Suspension
Definition: An application process is suspended when it requests a lock that is
already held by another application process and cannot be shared. The suspended
process temporarily stops running.
Order of precedence for lock requests: Incoming lock requests are queued.
Requests for lock promotion, and requests for a lock by an application process that
already holds a lock on the same object, precede requests for locks by new
applications. Within those groups, the request order is “first in, first out”.
Example: In an inventory control application, two users attempt to reduce
the quantity on hand of the same item at the same time. The two lock requests are
queued. The second request in the queue is suspended and waits until the first
request releases its lock.
Timeout
Definition: An application process is said to time out when it is terminated because
it has been suspended for longer than a preset interval.
IMS
If you are using IMS, and a timeout occurs, the following actions take place:
v In a DL/I batch application, the application process abnormally terminates
with a completion code of 04E and a reason code of 00D44033 or
00D44050.
v In any IMS environment except DL/I batch:
– DB2 performs a rollback operation on behalf of your application process
to undo all DB2 updates that occurred during the current unit of work.
– For a non-message driven BMP, IMS issues a rollback operation on
behalf of your application. If this operation is successful, IMS returns
control to your application, and the application receives SQLCODE -911.
If the operation is unsuccessful, IMS issues user abend code 0777, and
the application does not receive an SQLCODE.
– For an MPP, IFP, or message driven BMP, IMS issues user abend code
0777, rolls back all uncommitted changes, and reschedules the
transaction. The application does not receive an SQLCODE.
COMMIT and ROLLBACK operations do not time out. The command STOP
DATABASE, however, may time out and send messages to the console, but it will
retry up to 15 times.
Deadlock
Definition: A deadlock occurs when two or more application processes each hold
locks on resources that the others need and without which they cannot proceed.
Example: Figure 137 on page 378 illustrates a deadlock between two transactions.
[Figure 137 (not reproduced): a deadlock in which jobs EMPLJCHG and PROJNCHG
each hold an exclusive page lock (page B of table M and page A of table N) and
each is suspended waiting for the lock the other holds.]
Notes:
1. Jobs EMPLJCHG and PROJNCHG are two transactions. Job EMPLJCHG
accesses table M, and acquires an exclusive lock for page B, which contains
record 000300.
2. Job PROJNCHG accesses table N, and acquires an exclusive lock for page A,
which contains record 000010.
3. Job EMPLJCHG requests a lock for page A of table N while still holding the lock
on page B of table M. The job is suspended, because job PROJNCHG is
holding an exclusive lock on page A.
4. Job PROJNCHG requests a lock for page B of table M while still holding the
lock on page A of table N. The job is suspended, because job EMPLJCHG is
holding an exclusive lock on page B. The situation is a deadlock.
Effects: After a preset time interval (the value of DEADLOCK TIME), DB2 can roll
back the current unit of work for one of the processes or request a process to
terminate. That frees the locks and allows the remaining processes to continue. If
statistics trace class 3 is active, DB2 writes a trace record with IFCID 0172. Reason
| code 00C90088 is returned in the SQLERRD(3) field of the SQLCA. Alternatively,
| you can use the GET DIAGNOSTICS statement to check the reason code. (The
codes that describe DB2’s exact response depend on the operating environment;
for details, see Part 5 of DB2 Application Programming and SQL Guide.)
If you are using IMS, and a deadlock occurs, the following actions take place:
v In a DL/I batch application, the application process abnormally terminates
with a completion code of 04E and a reason code of 00D44033 or
00D44050.
v In any IMS environment except DL/I batch:
– DB2 performs a rollback operation on behalf of your application process
to undo all DB2 updates that occurred during the current unit of work.
– For a non-message driven BMP, IMS issues a rollback operation on
behalf of your application. If this operation is successful, IMS returns
control to your application, and the application receives SQLCODE -911.
If the operation is unsuccessful, IMS issues user abend code 0777, and
the application does not receive an SQLCODE.
– For an MPP, IFP, or message driven BMP, IMS issues user abend code
0777, rolls back all uncommitted changes, and reschedules the
transaction. The application does not receive an SQLCODE.
CICS
If you are using CICS and a deadlock occurs, the CICS attachment facility
decides whether or not to roll back one of the application processes, based on
the value of the ROLBE or ROLBI parameter. If your application process is
chosen for rollback, it receives one of two SQLCODEs in the SQLCA:
-911 A SYNCPOINT command with the ROLLBACK option was
issued on behalf of your application process. All updates
(CICS commands and DL/I calls, as well as SQL statements)
that occurred during the current unit of work have been
undone. (SQLSTATE '40001')
-913 A SYNCPOINT command with the ROLLBACK option was not
issued. DB2 rolls back only the incomplete SQL statement that
encountered the deadlock or timed out. CICS does not roll
back any resources. Your application process should either
issue a SYNCPOINT command with the ROLLBACK option
itself or terminate. (SQLSTATE '57033')
Consider using the DSNTIAC subroutine to check the SQLCODE and display
the SQLCA. Your application must take appropriate actions before resuming.
Plan for batch inserts: If your application does sequential batch insertions,
excessive contention on the space map pages for the table space can occur. This
problem is especially apparent in data sharing, where contention on the space map
means the added overhead of page P-lock negotiation. For these types of
applications, consider using the MEMBER CLUSTER option of CREATE
TABLESPACE. This option causes DB2 to disregard the clustering index (or implicit
clustering index) when assigning space for the SQL INSERT statement. For more
information about using this option in data sharing, see Chapter 6 of DB2 Data
Sharing: Planning and Administration. For the syntax, see Chapter 5 of DB2 SQL
Reference.
Use LOCKSIZE ANY until you have reason not to: LOCKSIZE ANY is the default
for CREATE TABLESPACE. It allows DB2 to choose the lock size, and DB2 usually
chooses LOCKSIZE PAGE and LOCKMAX SYSTEM for non-LOB table spaces. For
LOB table spaces, it chooses LOCKSIZE LOB and LOCKMAX SYSTEM. You
should use LOCKSIZE TABLESPACE or LOCKSIZE TABLE only for read-only table
spaces or tables, or when concurrent access to the object is not needed. Before
you choose LOCKSIZE ROW, you should estimate whether there will be an
increase in overhead for locking and weigh that against the increase in concurrency.
Examine small tables: For small tables with high concurrency requirements,
estimate the number of pages in the data and in the index. If the index entries are
short or they have many duplicates, then the entire index can be one root page and
a few leaf pages. In this case, spread out your data to improve concurrency, or
consider it a reason to use row locks.
Partition the data: Large tables can be partitioned to take advantage of parallelism
for online queries, batch jobs, and utilities. When batch jobs are run in parallel and
each job goes after different partitions, lock contention is reduced. In addition, in a
data sharing environment, data sharing overhead is reduced when applications that
are running on different members go after different partitions.
| However, the use of data-partitioned secondary indexes does not always improve
| the performance of queries. For example, for a query with a predicate that
| references only the columns of a data-partitioned secondary index, DB2 must probe
| each partition of the index for values that satisfy the predicate if index access is
| chosen as the access path. Therefore, take into account data access patterns and
| maintenance practices when deciding to use a data-partitioned secondary index.
Fewer rows of data per page: By using the MAXROWS clause of CREATE or
ALTER TABLESPACE, you can specify the maximum number of rows that can be
on a page. For example, if you use MAXROWS 1, each row occupies a whole
page, and you confine a page lock to a single row. Consider this option if you have
a reason to avoid using row locking, such as in a data sharing environment where
row locking overhead can be greater.
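For example, this sketch (the qualified table space name is an assumed name) limits each page of an existing table space to a single row:

```sql
ALTER TABLESPACE DSN8D81A.DSN8S81E MAXROWS 1;
```

For the full syntax and the effect of MAXROWS on existing data, see the ALTER TABLESPACE description in DB2 SQL Reference.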
| Taking commit points frequently in a long running unit of recovery (UR) has the
| following benefits at the possible cost of more CPU usage and log write I/Os:
v Reduces lock contention, especially in a data sharing environment
v Improves the effectiveness of lock avoidance, especially in a data sharing
environment
v Reduces the elapsed time for DB2 system restart following a system failure
v Reduces the elapsed time for a unit of recovery to rollback following an
application failure or an explicit rollback request by the application
v Provides more opportunity for utilities, such as online REORG, to break in
Consider using the UR CHECK FREQ field or the UR LOG WRITE CHECK field of
installation panel DSNTIPN to help you identify those applications that are not
committing frequently. UR CHECK FREQ, which identifies when too many
checkpoints have occurred without a UR issuing a commit, is helpful in monitoring
overall system activity. UR LOG WRITE CHECK enables you to detect applications
that might write too many log records between commit points, potentially creating a
lengthy recovery situation for critical tables.
Close cursors: If you define a cursor using the WITH HOLD option, the locks it
needs can be held past a commit point. Use the CLOSE CURSOR statement as
soon as possible in your program to cause those locks to be released and the
resources they hold to be freed at the first commit point that follows the CLOSE
CURSOR statement. Whether page or row locks are held for WITH HOLD cursors
is controlled by the RELEASE LOCKS parameter on installation panel DSNTIP4.
Closing cursors is particularly important in a distributed environment.
Free locators: If you have executed the HOLD LOCATOR statement, the LOB
locator holds locks on LOBs past commit points. Use the FREE LOCATOR
statement to release these locks.
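For example, an embedded statement like the following releases the locks that are held for the listed locators (a sketch; the host variable names are illustrative):

```sql
EXEC SQL FREE LOCATOR :HVLOB1, :HVLOB2;
```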
Bind plans with ACQUIRE(USE): ACQUIRE(USE), which indicates that DB2 will
acquire table and table space locks when the objects are first used and not when
the plan is allocated, is the best choice for concurrency. Packages are always
bound with ACQUIRE(USE), by default. ACQUIRE(ALLOCATE) can provide better
protection against timeouts. Consider ACQUIRE(ALLOCATE) for applications that
need gross locks instead of intent locks or that run with other applications that may
request gross locks instead of intent locks. Acquiring the locks at plan allocation
also prevents any one transaction in the application from incurring the cost of
acquiring the table and table space locks. If you need ACQUIRE(ALLOCATE), you
might want to bind all DBRMs directly to the plan.
For information about intent and gross locks, see “The mode of a lock” on page
386.
For more information about the ISOLATION option, see “The ISOLATION option” on
page 394.
For updatable dynamic scrollable cursors and ISOLATION(CS), DB2 holds row or
page locks on the base table (DB2 does not use a temporary global table). The
most recently fetched row or page from the base table remains locked to maintain
data integrity for a positioned update or delete.
For information on how to make an agent part of a global transaction for RRSAF
applications, see Chapter 31, “Programming for the Resource Recovery Services
attachment facility (RRSAF),” on page 831.
| The use of sequences can avoid the lock contention problems that can result when
| applications implement their own sequences, such as in a one-row table that
| contains a sequence number that each transaction must increment. With DB2
| sequences, many users can access and increment the sequence concurrently
| without waiting. DB2 does not wait for a transaction that has incremented a
| sequence to commit before allowing another transaction to increment the sequence
| again.
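For example, a sequence can replace a one-row counter table. The following sketch (the sequence name, table name, and host variable are illustrative) lets concurrent transactions draw numbers without serializing on a lock:

```sql
CREATE SEQUENCE ORDER_SEQ
  AS INTEGER
  START WITH 1
  INCREMENT BY 1
  CACHE 20;

INSERT INTO ORDER_TABLE (ORDERNO, CUSTNO)
  VALUES (NEXT VALUE FOR ORDER_SEQ, :custno);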
Knowing the aspects helps you understand why a process suspends or times out or
why two processes deadlock.
As Figure 138 on page 385 suggests, row locks and page locks occupy an equal
place in the hierarchy of lock sizes.
(Figure 138 shows three lock hierarchies: for a segmented table space, a table
space lock with table locks below it and row or page locks at the bottom; for a
nonsegmented table space, a table space lock with row or page locks directly
below it; and for a LOB table space, a LOB table space lock with LOB locks
below it.)
Effects
For maximum concurrency, locks on a small amount of data held for a short
duration are better than locks on a large amount of data held for a long duration.
However, acquiring a lock requires processor time, and holding a lock requires
storage; thus, acquiring and holding one table space lock is more economical than
acquiring and holding many page locks. Consider that trade-off to meet your
performance and concurrency objectives.
Duration of partition, table, and table space locks: Partition, table, and table
space locks can be acquired when a plan is first allocated, or you can delay
acquiring them until the resource they lock is first used. They can be released at
the next commit point or be held until the program terminates.
On the other hand, LOB table space locks are always acquired when needed and
released at a commit or held until the program terminates. See “LOB locks” on
page 408 for information about locking LOBs and LOB table spaces.
Duration of page and row locks: If a page or row is locked, DB2 acquires the lock
only when it is needed. When the lock is released depends on many factors, but it
is rarely held beyond the next commit point.
For information about controlling the duration of locks, see “Bind options” on page
390 for information about the ACQUIRE and RELEASE, ISOLATION, and
CURRENTDATA bind options.
The possible modes for page and row locks and the modes for partition, table, and
table space locks are listed in “Modes of page and row locks” on page 387 and
“Modes of table, partition, and table space locks” on page 387. See “LOB locks” on
page 408 for more information about modes for LOB locks and locks on LOB table
spaces.
Example: An SQL statement locates John Smith in a table of customer data and
changes his address. The statement locks the entire table space in mode IX and
the specific row that it changes in mode X.
Definition: Locks of some modes do not shut out all other users. Assume that
application process A holds a lock on a table space that process B also wants to
access. DB2 requests, on behalf of B, a lock of some particular mode. If the mode
of A’s lock permits B’s request, the two locks (or modes) are said to be compatible.
Effects of incompatibility: If the two locks are not compatible, B cannot proceed.
It must wait until A releases its lock. (And, in fact, it must wait until all existing
incompatible locks are released.)
Compatible lock modes: Compatibility for page and row locks is easy to define.
Table 45 shows whether page locks of any two modes, or row locks of any two
modes, are compatible (Yes) or not (No). No question of compatibility of a page lock
with a row lock can arise, because a table space cannot use both page and row
locks.
Table 45. Compatibility of page lock and row lock modes
Lock Mode S U X
S Yes Yes No
U Yes No No
X No No No
Compatibility for table space locks is slightly more complex. Table 46 on page 389
shows whether or not table space locks of any two modes are compatible.
The underlying data page or row locks are acquired to serialize the reading and
updating of index entries to ensure the data is logically consistent, meaning that the
data is committed and not subject to rollback or abort. The data locks can be held
for a long duration such as until commit. However, the page latches are only held
for a short duration while the transaction is accessing the page. Because the index
pages are not locked, hot spot insert scenarios (which involve several transactions
trying to insert different entries into the same index page at the same time) do not
cause contention problems in the index.
Lock tuning
This section describes what you can change to affect how a particular application
uses transaction locks, under:
v “Bind options”
v “Isolation overriding with SQL statements” on page 403
v “The statement LOCK TABLE” on page 404
Bind options
These options determine when an application process acquires and releases its
locks and to what extent it isolates its actions from possible effects of other
processes acting concurrently.
Partition locks: Partition locks follow the same rules as table space locks, and
locks on all partitions are held for the same duration. Thus, if one package is using
RELEASE(COMMIT) and another is using RELEASE(DEALLOCATE), all partition
locks use RELEASE(DEALLOCATE).
The RELEASE option and dynamic statement caching: Generally, the RELEASE
option has no effect on dynamic SQL statements with one exception. When you use
the bind options RELEASE(DEALLOCATE) and KEEPDYNAMIC(YES), and your
subsystem is installed with YES for field CACHE DYNAMIC SQL on installation
panel DSNTIP4, DB2 retains prepared SELECT, INSERT, UPDATE, and DELETE
statements in memory past commit points. For this reason, DB2 can honor the
RELEASE(DEALLOCATE) option for these dynamic statements. The locks are held
until deallocation, or until the commit after the prepared statement is freed from
memory, in the following situations:
v The application issues a PREPARE statement with the same statement identifier.
v The statement is removed from memory because it has not been used.
v An object that the statement is dependent on is dropped or altered, or a privilege
needed by the statement is revoked.
v RUNSTATS is run against an object that the statement is dependent on.
For partitioned table spaces, lock demotion occurs for each partition for which there
is a lock.
Defaults: The defaults differ for different types of bind operations, as shown in
Table 47 on page 392.
The RELEASE option and DDL operations for remote requesters: When you
perform DDL operations on behalf of remote requesters and
RELEASE(DEALLOCATE) is in effect, be aware of the following condition. When a
package that is bound with RELEASE(DEALLOCATE) accesses data at a server, it
might prevent other remote requesters from performing CREATE, ALTER, DROP,
GRANT, or REVOKE operations at the server.
To allow those operations to complete, you can use the command STOP DDF
MODE(SUSPEND). The command suspends server threads and terminates their
locks so that DDL operations from remote requesters can complete. When these
operations complete, you can use the command START DDF to resume the
suspended server threads. However, even after the command STOP DDF
MODE(SUSPEND) completes successfully, database resources might be held if
DB2 is performing any activity other than inbound DB2 processing. You might have
to use the command CANCEL THREAD to terminate other processing and thereby
free the database resources.
Restriction: This combination is not allowed for BIND PACKAGE. Use this
combination if processing efficiency is more important than concurrency. It is a good
choice for batch jobs that would release table and table space locks only to
reacquire them almost immediately. It might even improve concurrency, by allowing
batch jobs to finish sooner. Generally, do not use this combination if your
application contains many SQL statements that are often not executed.
IMS
A CHKP or SYNC call (for single-mode transactions), a GU call to the I/O
PCB, or a ROLL or ROLB call is completed
CICS
A SYNCPOINT command is issued.
Exception: If the cursor is defined WITH HOLD, table or table space locks
necessary to maintain cursor position are held past the commit point. (See “The
effect of WITH HOLD for a cursor” on page 402 for more information.)
Default: The default differs for different types of bind operations, as shown in
Table 48.
Table 48. The default ISOLATION values for different bind operations
Operation Default value
BIND PLAN ISOLATION(RR)
BIND PACKAGE The value used by the plan that includes the package
in its package list
REBIND PLAN or PACKAGE The existing value for the plan or package being
rebound
For more detailed examples, see DB2 Application Programming and SQL Guide.
Regardless of the isolation level, uncommitted claims on DB2 objects can inhibit the
execution of DB2 utilities or commands.
ISOLATION (CS)
Allows maximum concurrency with data integrity. However, after the process
leaves a row or page, another process can change the data. With
CURRENTDATA(NO), the process doesn’t have to leave a row or page to
allow another process to change the data. If the first process returns to
read the same row or page, the data is not necessarily the same. Consider
these consequences of that possibility:
v For table spaces created with LOCKSIZE ROW, PAGE, or ANY, a
change can occur even while executing a single SQL statement, if the
statement reads the same row more than once.
For packages and plans that contain updatable static scrollable cursors,
ISOLATION(CS) lets DB2 use optimistic concurrency control. DB2 can use
optimistic concurrency control to shorten the amount of time that locks are
held in the following situations:
v Between consecutive fetch operations
v Between fetch operations and subsequent positioned update or delete
operations
Figure 139 and Figure 140 on page 396 show processing of positioned
update and delete operations with static scrollable cursors without optimistic
concurrency control and with optimistic concurrency control.
Figure 139. Positioned updates and deletes without optimistic concurrency control
ISOLATION (UR)
Allows the application to read while acquiring few locks, at the risk of
reading uncommitted data. UR isolation applies only to read-only
operations: SELECT, SELECT INTO, or FETCH from a read-only result
table.
Reading uncommitted data introduces an element of uncertainty.
Example: An application tracks the movement of work from station to
station along an assembly line. As items move from one station to another,
the application subtracts from the count of items at the first station and
adds to the count of items at the second. Assume you want to query the
count of items at all the stations, while the application is running
concurrently.
What can happen if your query reads data that the application has changed
but has not committed?
– If the application subtracts an amount from one record before adding it
to another, the query could miss the amount entirely.
– If the application adds first and then subtracts, the query could add the
amount twice.
If those situations can occur and are unacceptable, do not use UR isolation.
When can you use uncommitted read (UR)? You can probably use UR
isolation in cases like the following ones:
v When errors cannot occur.
Example: A reference table, like a table of descriptions of parts by part
number. It is rarely updated, and reading an uncommitted update is
probably no more damaging than reading the table 5 seconds earlier. Go
ahead and read it with ISOLATION(UR).
Example: The employee table of Spiffy Computer, our hypothetical user.
For security reasons, updates can be made to the table only by members
of a single department. And that department is also the only one that can
query the entire table. It is easy to restrict queries to times when no
updates are being made and then run with UR isolation.
v When an error is acceptable.
Example: Spiffy wants to do some statistical analysis on employee data.
A typical question is, “What is the average salary by sex within education
level?” Because reading an occasional uncommitted record cannot affect
the averages much, UR isolation can be used.
v When the data already contains inconsistent information.
Example: Spiffy gets sales leads from various sources. The data is often
inconsistent or wrong, and end users of the data are accustomed to
dealing with that. Inconsistent access to a table of data on sales leads
does not add to the problem.
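For instance, the statistical analysis described above might be run with a query like the following sketch, which reads the sample employee table with UR isolation (the column names follow the sample database):

```sql
SELECT SEX, EDLEVEL, AVG(SALARY)
  FROM DSN8810.EMP
  GROUP BY SEX, EDLEVEL
  WITH UR;
```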
Figure 141. How an application using RS isolation acquires locks when no lock avoidance
techniques are used. Locks L2 and L4 are held until the application commits. The other locks
aren’t held.
Applications using read stability can leave rows or pages locked for long
periods, especially in a distributed environment.
Figure 142. How an application using RR isolation acquires locks. All locks are held until the
application commits.
Applications that use repeatable read can leave rows or pages locked for
longer periods, especially in a distributed environment, and they can claim
more logical partitions than similar applications using cursor stability.
They are also subject to being drained more often by utility operations.
Because so many locks can be taken, lock escalation might take place.
Frequent commits release the locks and can help avoid lock escalation.
With repeatable read, lock promotion occurs for table space scan to prevent
the insertion of rows that might qualify for the predicate. (If access is via
index, DB2 locks the key range. If access is via table space scans, DB2
locks the table, partition, or table space.)
Local access: Locally, CURRENTDATA(YES) means that the data upon which
the cursor is positioned cannot change while the cursor is positioned on it. If the
cursor is positioned on data in a local base table or index, then the data returned
with the cursor is current with the contents of that table or index. If the cursor is
positioned on data in a work file, the data returned with the cursor is current only
with the contents of the work file; it is not necessarily current with the contents of
the underlying table or index.
Figure 143. How an application using CS isolation with CURRENTDATA(YES) acquires locks.
This figure shows access to the base table. The L2 and L4 locks are released after DB2
moves to the next row or page. When the application commits, the last lock is released.
As with work files, if a cursor uses query parallelism, data is not necessarily current
with the contents of the table or index, regardless of whether a work file is used.
Therefore, for work file access or for parallelism on read-only queries, the
CURRENTDATA option has no effect.
If you are using parallelism but want to maintain currency with the data, you have
the following options:
v Disable parallelism (use SET CURRENT DEGREE = ’1’ or bind with DEGREE(1)).
v Use isolation RR or RS (parallelism can still be used).
v Use the LOCK TABLE statement (parallelism can still be used).
To take the best advantage of this method of avoiding locks, make sure all
applications that are accessing data concurrently issue COMMITs frequently.
Figure 144 on page 401 shows how DB2 can avoid taking locks and Table 54 on
page 401 summarizes the factors that influence lock avoidance.
Figure 144. Best case of avoiding locks using CS isolation with CURRENTDATA(NO). This
figure shows access to the base table. If DB2 must take a lock, then locks are released when
DB2 moves to the next row or page, or when the application commits (the same as
CURRENTDATA(YES)).
Table 54. Lock avoidance factors. “Returned data” means data that satisfies the predicate.
“Rejected data” is that which does not satisfy the predicate.

                                            Avoid locks on  Avoid locks on
Isolation  CURRENTDATA  Cursor type        returned data?  rejected data?
UR         N/A          Read-only          N/A             N/A
CS         YES          Read-only          No              Yes
                        Updatable
                        Ambiguous
           NO           Read-only          Yes             Yes
                        Updatable          No
                        Ambiguous          Yes
RS         N/A          Read-only          No              Yes (see note 1)
                        Updatable
                        Ambiguous
RR         N/A          Read-only          No              No
                        Updatable
                        Ambiguous
| Note: 1. For RS, locks are avoided on rejected data only when multi-row fetch is
| used and when stage 1 predicates fail.
For example, the plan value for CURRENTDATA has no effect on the packages
executing under that plan. If you do not specify a CURRENTDATA option explicitly
when you bind a package, the default is CURRENTDATA(YES).
The rules are slightly different for the bind options RELEASE and ISOLATION. The
values of those two options are set when the lock on the resource is acquired and
usually stay in effect until the lock is released. But a conflict can occur if a
statement that is bound with one pair of values requests a lock on a resource that
is already locked by a statement that is bound with a different pair of values. DB2
resolves the conflict by resetting each option with the available value that causes
the lock to be held for the greatest duration.
Table 55 shows how conflicts between isolation levels are resolved. The first column
is the existing isolation level, and the remaining columns show what happens when
another isolation level is requested by a new application process.
Table 55. Resolving isolation conflicts
UR CS RS RR
UR n/a CS RS RR
CS CS n/a RS RR
RS RS RS n/a RR
RR RR RR RR n/a
For locks and claims that are needed for cursor position, the following exceptions
exist for special cases:
Page and row locks: If your installation specifies NO on the RELEASE LOCKS
field of installation panel DSNTIP4, as described in DB2 Administration Guide, a
page or row lock is held past the commit point. This page or row lock is not
necessary for cursor position, but the NO option is provided for compatibility with
applications that might rely on this lock. However, an X or U lock is demoted to an S lock at that
time. (Because changes have been committed, exclusive control is no longer
needed.) After the commit point, the lock is released at the next commit point,
provided that no cursor is still positioned on that page or row.
Table, table space, and DBD locks: All necessary locks are held past the commit
point. After that, they are released according to the RELEASE option under which
they were acquired: for COMMIT, at the next commit point after the cursor is closed;
for DEALLOCATE, when the application is deallocated.
Claims: All claims, for any claim class, are held past the commit point. They are
released at the next commit point after all held cursors have moved off the object or
have been closed.
finds the maximum, minimum, and average bonus in the sample employee table.
The statement is executed with uncommitted read isolation, regardless of the value
of ISOLATION with which the plan or package containing the statement is bound.
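Such a statement might look like the following sketch; the WITH UR clause overrides the ISOLATION value of the plan or package, and the host variable names are illustrative:

```sql
SELECT MAX(BONUS), MIN(BONUS), AVG(BONUS)
  INTO :MAXBONUS, :MINBONUS, :AVGBONUS
  FROM DSN8810.EMP
  WITH UR;
```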
| USE AND KEEP ... LOCKS options of the WITH clause: If you use the WITH RR
| or WITH RS clause, you can use the USE AND KEEP EXCLUSIVE LOCKS, USE
| AND KEEP UPDATE LOCKS, or USE AND KEEP SHARE LOCKS option in SELECT
| and SELECT INTO statements.
| Example: To use these options, specify them as shown in the following example:
| SELECT ...
| WITH RS USE AND KEEP UPDATE LOCKS;
| By using one of these options, you tell DB2 to acquire and hold a specific mode of
| lock on all the qualified pages or rows. Table 56 on page 404 shows which mode of
| lock is held on rows or pages when you specify the SELECT using the WITH RS or
| WITH RR isolation clause.
| With read stability (RS) isolation, a row or page that is rejected during stage 2
| processing might still have a lock held on it, even though it is not returned to the
| application.
| With repeatable read (RR) isolation, DB2 acquires locks on all pages or rows that
| fall within the range of the selection expression.
| All locks are held until the application commits. Although this option can reduce
| concurrency, it can prevent some types of deadlocks and can better serialize
| access to data.
Executing the statement requests a lock immediately, unless a suitable lock exists
already. The bind option RELEASE determines when locks acquired by LOCK
TABLE or LOCK TABLE with the PART option are released.
You can use LOCK TABLE on any table, including auxiliary tables of LOB table
spaces. See “LOCK TABLE statement” on page 411 for information about locking
auxiliary tables.
Caution when using LOCK TABLE with simple table spaces: The statement
locks all tables in a simple table space, even though you name only one table. No
other process can update the table space for the duration of the lock. If the lock is
in exclusive mode, no other process can read the table space, unless that process
is running with UR isolation.
Additional examples of LOCK TABLE: You might want to lock a table or partition
that is normally shared for any of the following reasons:
Taking a “snapshot”
If you want to access an entire table throughout a unit of work as it
was at a particular moment, you must lock out concurrent changes.
If other processes can access the table, use LOCK TABLE IN
SHARE MODE. (RR isolation is not enough; it locks out changes
only from rows or pages you have already accessed.)
Avoiding overhead
If you want to update a large part of a table, it can be more efficient
to prevent concurrent access than to lock each page as it is
updated and unlock it when it is committed. Use LOCK TABLE IN
EXCLUSIVE MODE.
Preventing timeouts
Your application has a high priority and must not risk timeouts from
contention with other application processes. Depending on whether
your application updates or not, use either LOCK TABLE IN EXCLUSIVE
MODE or LOCK TABLE IN SHARE MODE.
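The three cases above might be coded as follows (a sketch; the table name follows the sample database, and the partition number is illustrative):

```sql
-- Snapshot: other processes can read but not change the table
LOCK TABLE DSN8810.EMP IN SHARE MODE;

-- Bulk update or timeout avoidance for an updating application
LOCK TABLE DSN8810.EMP IN EXCLUSIVE MODE;

-- Lock only one partition of a partitioned table space
LOCK TABLE DSN8810.EMP PART 1 IN EXCLUSIVE MODE;
```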
Access paths
The access path used can affect the mode, size, and even the object of a lock. For
example, an UPDATE statement using a table space scan might need an X lock on
the entire table space.
If you use the EXPLAIN statement to investigate the access path chosen for an
SQL statement, then check the lock mode in column TSLOCKMODE of the
resulting PLAN_TABLE. If the table resides in a nonsegmented table space, or is
defined with LOCKSIZE TABLESPACE, the mode shown is that of the table space
lock. Otherwise, the mode is that of the table lock.
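For example, the following sketch obtains the lock mode for an UPDATE statement (the QUERYNO value and the statement itself are illustrative):

```sql
EXPLAIN PLAN SET QUERYNO = 100 FOR
  UPDATE DSN8810.EMP
    SET PHONENO = '4567'
    WHERE EMPNO = '000100';

SELECT QUERYNO, ACCESSTYPE, TSLOCKMODE
  FROM PLAN_TABLE
  WHERE QUERYNO = 100;
```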
IMS
A CHKP or SYNC call, or (for single-mode transactions) a GU call to the
I/O PCB
CICS
A SYNCPOINT command.
LOB locks
The locking activity for LOBs is described separately from transaction locks
because the purpose of LOB locks is different from that of regular transaction locks.
A lock that is taken on a LOB value in a LOB table space is called a LOB lock.
DB2 also obtains locks on the LOB table space and the LOB values stored in that
LOB table space, but those locks have the following primary purposes:
v To determine whether space from a deleted LOB can be reused by an inserted or
updated LOB
Storage for a deleted LOB is not reused until no more readers (including held
locators) are on the LOB and the delete operation has been committed.
v To prevent deallocating space for a LOB that is currently being read
A LOB can be deleted from one application’s point-of-view while a reader from
another application is reading the LOB. The reader continues reading the LOB
because all readers, including those readers that are using uncommitted read
isolation, acquire S-locks on LOBs to prevent the storage for the LOB they are
reading from being deallocated. That lock is held until commit. A held LOB
locator or a held cursor causes the LOB lock and LOB table space lock to be held
past commit.
Table 58 shows the relationship between the action that is occurring on the LOB
value and the associated LOB table space and LOB locks that are acquired.
Table 58. Locks that are acquired for operations on LOBs. This table does not account for
gross locks that can be taken because of LOCKSIZE TABLESPACE, the LOCK TABLE
statement, or lock escalation.
Action on LOB value           LOB table space lock  LOB lock     Comment
Read (including UR)           IS                    S            Prevents storage from being
                                                                 reused while the LOB is being
                                                                 read or while locators are
                                                                 referencing the LOB
Insert                        IX                    X            Prevents other processes from
                                                                 seeing a partial LOB
Delete                        IS                    S            To hold space in case the
                                                                 delete is rolled back. (The X
                                                                 is on the base table row or
                                                                 page.) Storage is not reusable
                                                                 until the delete is committed
                                                                 and no other readers of the
                                                                 LOB exist.
Update                        IS->IX                Two LOB      Operation is a delete followed
                                                    locks: an    by an insert.
                                                    S-lock for
                                                    the delete
                                                    and an
                                                    X-lock for
                                                    the insert
Update the LOB to null        IS                    S            No insert, just a delete.
or zero-length
Update a null or              IX                    X            No delete, just an insert.
zero-length LOB to a
value
Duration of locks
Duration of locks on LOB table spaces
Locks on LOB table spaces are acquired when they are needed; that is, the
ACQUIRE option of BIND has no effect on when the table space lock on the LOB
table space is taken. The table space lock is released according to the value
specified on the RELEASE option of BIND (except when a cursor is defined WITH
HOLD or if a held LOB locator exists).
If a cursor is defined WITH HOLD, LOB locks are held through commit operations.
Because LOB locks are held until commit and because locks are put on each LOB
column in both a source table and a target table, it is possible that a statement
such as an INSERT with a fullselect that involves LOB columns can accumulate
many more locks than a similar statement that does not involve LOB columns. To
prevent system problems caused by too many locks, you can:
v Ensure that you have lock escalation enabled for the LOB table spaces that are
involved in the INSERT. In other words, make sure that LOCKMAX is non-zero
for those LOB table spaces.
v Alter the LOB table space to change the LOCKSIZE to TABLESPACE before
executing the INSERT with fullselect.
v Increase the LOCKMAX value on the table spaces involved and ensure that the
user lock limit is sufficient.
v Use LOCK TABLE statements to lock the LOB table spaces. (Locking the
auxiliary table that is contained in the LOB table space locks the LOB table
space.)
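For example, the second approach might be coded as follows before the INSERT runs (a sketch; the database and table space names are illustrative):

```sql
ALTER TABLESPACE DSN8D81L.DSN8S81L
  LOCKSIZE TABLESPACE;
```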
If your application intercepts abends, DB2 commits work because it is unaware that
an abend has occurred. If you want DB2 to roll back work automatically when an
abend occurs in your program, do not let the program or run-time environment
intercept the abend. If your program uses Language Environment, and you want
DB2 to roll back work automatically when an abend occurs in the program, specify
the run-time options ABTERMENC(ABEND) and TRAP(ON).
A unit of work is a logically distinct procedure containing steps that change the data.
If all the steps complete successfully, you want the data changes to become
permanent. But, if any of the steps fail, you want all modified data to return to the
original value before the procedure began.
For example, suppose two employees in the sample table DSN8810.EMP exchange
offices. You need to exchange their office phone numbers in the PHONENO
column. You would use two UPDATE statements to make each phone number
current. Both statements, taken together, are a unit of work. You want both
statements to complete successfully. For example, if only one statement is
successful, you want both phone numbers rolled back to their original values before
attempting another update.
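The office exchange might be coded as follows (a sketch; the employee numbers and phone numbers are illustrative):

```sql
UPDATE DSN8810.EMP SET PHONENO = '4578' WHERE EMPNO = '000010';
UPDATE DSN8810.EMP SET PHONENO = '3978' WHERE EMPNO = '000020';
COMMIT;  -- both changes become permanent together
```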
When a unit of work completes, all locks that were implicitly acquired by that unit of
work are released, allowing a new unit of work to begin.
The amount of processing time that is used by a unit of work in your program
affects the length of time DB2 prevents other users from accessing that locked
data. When several programs try to use the same data concurrently, each
program’s unit of work should be as short as possible to minimize the interference
between the programs.
This chapter describes the way a unit of work functions in various environments.
For more information about unit of work, see Chapter 1 of DB2 SQL Reference or
Part 4 (Volume 1) of DB2 Administration Guide.
Before you can connect to another DBMS, you must issue a COMMIT statement. If
the system fails at this point, DB2 cannot know that your transaction is complete. In
this case, as in the case of a failure during a one-phase commit operation for a
single subsystem, you must make your own provision for maintaining data integrity.
| You can provide an abend exit routine in your program. It must use tracking
| indicators to determine if an abend occurs during DB2 processing. If an abend does
| occur when DB2 has control, you must allow task termination to complete. DB2
| detects task termination and terminates the thread with the ABRT parameter. Do not
| re-run the program.
If your program abends or the system fails, DB2 backs out uncommitted data
changes. Changed data returns to its original condition without interfering with other
system activities.
| Allowing task termination to complete is the only action that you can take for
| abends that are caused by the CANCEL command or by DETACH. You cannot use
| additional SQL statements at this point. If you attempt to execute another SQL
| statement from the application program or its recovery routine, unexpected errors
| can occur.
Consider the inventory example, in which the quantity of items sold is subtracted
from the inventory file and then added to the reorder file. When both transactions
complete (and not before) and the data in the two files is consistent, the program
By using a SYNCPOINT command with the ROLLBACK option, you can back out
uncommitted data changes. For example, a program that updates a set of related
rows sometimes encounters an error after updating several of them. The program
can use the SYNCPOINT command with the ROLLBACK option to undo all of the
updates without giving up control.
The SQL COMMIT and ROLLBACK statements are not valid in a CICS
environment. You can coordinate DB2 with CICS functions that are used in
programs, so that DB2 and non-DB2 data are consistent.
If the system fails, DB2 backs out uncommitted changes to data. Changed data
returns to its original condition without interfering with other system activities.
Sometimes, DB2 data does not return to a consistent state immediately. DB2 does
not process indoubt data (data that is neither uncommitted nor committed) until the
CICS attachment facility is also restarted. To ensure that DB2 and CICS are
synchronized, restart both DB2 and the CICS attachment facility.
A commit point can occur in a program as the result of any one of the following four
events:
v The program terminates normally. Normal program termination is always a
commit point.
v The program issues a checkpoint call. Checkpoint calls are a program’s means
of explicitly indicating to IMS that it has reached a commit point in its processing.
v The program issues a SYNC call. The SYNC call is a Fast Path system service
call to request commit-point processing. You can use a SYNC call only in a
nonmessage-driven Fast Path program.
v For a program that processes messages as its input, a commit point can occur
when the program retrieves a new message. IMS considers a new message the
start of a new unit of work in the program. Commit points occur given the
following conditions:
– If you specify single-mode, a commit point in DB2 occurs each time the
program issues a call to retrieve a new message. Specifying single-mode can
simplify program recovery, because the program can restart from the most
recent retrieval call if it abends.
– If you specify multiple-mode, a commit point occurs when the program issues
a checkpoint call or when it terminates normally.
DB2 performs some processing for both single- and multiple-mode programs. When a
multiple-mode program issues a call to retrieve a new message, DB2 performs an
authorization check and closes all open cursors in the program.
If the program processes messages, IMS sends the output messages that the
application program produces to their final destinations. Until the program reaches a
commit point, IMS holds the program’s output messages at a temporary destination.
If the program abends, IMS discards those messages, so people at terminals and
other application programs do not receive inaccurate information from the
terminating application program.
The SQL COMMIT and ROLLBACK statements are not valid in an IMS
environment.
If the system fails, DB2 backs out uncommitted changes to data. Changed data
returns to its original state without interfering with other system activities.
Sometimes DB2 data does not return to a consistent state immediately. DB2 does
not process data in an indoubt state until you restart IMS. To ensure that DB2 and
IMS are synchronized, you must restart both DB2 and IMS.
Two calls that are available to IMS programs to simplify program recovery are the
symbolic checkpoint call and the restart call. Checkpoint and restart calls do not
save and restore the values of SQL special registers. Your program must restore
these special registers if their values are needed after the checkpoint.
Programs that issue symbolic checkpoint calls can specify as many as seven data
areas in the program that are to be restored at restart. DB2 always recovers to the
last checkpoint, and you must restart the program from that point.
However, message-driven BMPs must issue checkpoint calls rather than get-unique
calls to establish commit points, because they can restart from a checkpoint only. If
a program abends after issuing a get-unique call, IMS backs out the database
updates to the most recent commit point, which is the get-unique call.
The following factors affect the use of checkpoint calls in multiple-mode programs:
v How long it takes to back out and recover that unit of work. The program must
issue checkpoints frequently enough to make the program easy to back out and
recover.
v How long database resources are locked in DB2 and IMS
v How you want the output messages grouped. Checkpoint calls establish how a
multiple-mode program groups its output messages. Programs must issue
checkpoints frequently enough to avoid building up too many output messages.
Checkpoints also close all open cursors, which means that you must reopen the
cursors you want and re-establish positioning.
When you issue a DL/I CHKP call from an application program that uses DB2
databases, IMS processes the CHKP call for all DL/I databases, and DB2 commits
all the DB2 database resources. No checkpoint information is recorded for DB2
databases in the IMS log or the DB2 log. The application program must record
relevant information about DB2 databases for a checkpoint, if necessary.
One way to do this is to put such information in a data area that is included in the
DL/I CHKP call. Undesirable performance implications can be associated with
re-establishing position within a DB2 database as a result of the commit processing
that takes place because of a DL/I CHKP call. The fastest way to re-establish a
position in a DB2 database is to use an index on the target table, with a key that
matches one-to-one with every column in the SQL predicate.
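For example, a cursor that can reposition quickly after commit processing might be
declared over an indexed key, with the last key value saved in the checkpoint data
area (a sketch; the table, column, and host variable names are hypothetical):

```sql
EXEC SQL DECLARE C1 CURSOR FOR
  SELECT ORDERNO, ITEM, QTY
    FROM ORDER_TABLE
    WHERE ORDERNO > :LAST-ORDERNO
    ORDER BY ORDERNO;
```

After restart, the program opens the cursor with the saved key value, and DB2 can
use the index to return directly to the row that follows the last one processed.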
Another limitation of processing DB2 databases in a BMP program is that you can
restart the program only from the latest checkpoint and not from any checkpoint, as
in IMS.
Using ROLL
Issuing a ROLL call causes IMS to terminate the program with a user abend code
U0778. This terminates the program without a storage dump.
When you issue a ROLL call, the only option you supply is the call function, ROLL.
Using ROLB
The advantage of using ROLB is that IMS returns control to the program after
executing ROLB, allowing the program to continue processing. The options for
ROLB are:
v The call function, ROLB
v The name of the I/O PCB
Example: Rolling back to the most recently created savepoint: When the
ROLLBACK TO SAVEPOINT statement is executed in the following code, DB2 rolls
back work to savepoint B.
EXEC SQL SAVEPOINT A;
...
EXEC SQL SAVEPOINT B;
...
EXEC SQL ROLLBACK TO SAVEPOINT;
When savepoints are active, you cannot access remote sites using three-part
names or aliases for three-part names. You can, however, use DRDA access with
explicit CONNECT statements when savepoints are active. If you set a savepoint
before you execute a CONNECT statement, the scope of that savepoint is the local
site. If you set a savepoint after you execute the CONNECT statement, the scope
of that savepoint is the site to which you are connected.
Example: Setting a savepoint multiple times: Suppose that the following actions
take place within a unit of work:
1. Application A sets savepoint S.
2. Application A calls stored procedure P.
3. Stored procedure P sets savepoint S.
4. Stored procedure P executes ROLLBACK TO SAVEPOINT S.
When DB2 executes ROLLBACK TO SAVEPOINT S, DB2 rolls back work to the
savepoint that was set in the stored procedure because that value is the most
recent value of savepoint S.
If you do not want a savepoint to have different values within a unit of work, you
can use the UNIQUE option in the SAVEPOINT statement. If an application
executes a SAVEPOINT statement for a savepoint that was previously defined as
unique, an SQL error occurs.
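For example, if the application in step 1 sets the savepoint with the UNIQUE
option, the SAVEPOINT statement in step 3 fails with an SQL error instead of
redefining S:

```sql
EXEC SQL SAVEPOINT S UNIQUE ON ROLLBACK RETAIN CURSORS;
```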
Savepoints are automatically released at the end of a unit of work. However, if you
no longer need a savepoint before the end of a transaction, you should execute the
SQL RELEASE SAVEPOINT statement. Releasing savepoints is essential if you
need to use three-part names to access remote locations.
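For example, to release savepoint S (and any savepoints that were set after it)
before the unit of work ends:

```sql
EXEC SQL RELEASE SAVEPOINT S;
```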
This chapter assumes that you are requesting services from a remote DBMS. That
DBMS is a server in that situation, and your local system is a requester or client.
Your application can be connected to many DBMSs at one time; the one that is
currently performing work is the current server. When the local system is performing
work, it also is called the current server.
A remote server can be truly remote in the physical sense, thousands of miles
away. But that is not necessary; it might even be another subsystem of the same
operating system that your local DBMS runs under. This chapter assumes that your
local DBMS is an instance of DB2 UDB for z/OS. A remote server might be an
instance of DB2 UDB for z/OS also, or it might be an instance of one of many other
products.
A DBMS, whether local or remote, is known to your DB2 system by its location
name. The location name of a remote DBMS is recorded in the communications
database. (For more information about location names or the communications
database, see Part 3 of DB2 Installation Guide.)
Example: You can write a query like this to access data at a remote server:
SELECT * FROM CHICAGO.DSN8810.EMP
  WHERE EMPNO = '0001000';
The mode of access depends on whether you bind your DBRMs into packages and
on the value of field DATABASE PROTOCOL in installation panel DSNTIP5 or the
value of bind option DBPROTOCOL. Bind option DBPROTOCOL overrides the
installation setting.
Example: You can also write statements like these to accomplish the same task:
EXEC SQL
  CONNECT TO CHICAGO;
EXEC SQL
  SELECT * FROM DSN8810.EMP
    WHERE EMPNO = '0001000';
Before you can execute the query at location CHICAGO, you must bind the
application as a remote package at the CHICAGO server. Before you can run the
| application, you must also bind a local package and a local plan with a package list
| that includes the local and remote package.
Example: You can call a stored procedure, which is a subroutine that can contain
many SQL statements. Your program executes this:
EXEC SQL
CONNECT TO ATLANTA;
EXEC SQL
CALL procedure_name (parameter_list);
The parameter list is a list of host variables that is passed to the stored procedure
and into which it returns the results of its execution. The stored procedure must
already exist at location ATLANTA.
Two methods of access: The preceding examples show two different methods for
accessing distributed data.
v The first example shows a statement that can be executed with DB2 private
protocol access or DRDA access.
If you bind the DBRM that contains the statement into a plan at the local DB2
and specify the bind option DBPROTOCOL(PRIVATE), you access the server
using DB2 private protocol access.
If you bind the DBRM that contains the statement using one of the following
methods, you access the server using DRDA access.
Method 1:
1. Bind the DBRM into a package at the local DB2 using the bind option
DBPROTOCOL(DRDA).
2. Bind the DBRM into a package at the remote location (CHICAGO).
3. Bind the packages into a plan using bind option DBPROTOCOL(DRDA).
Method 2:
1. Bind the DBRM into a package at the remote location.
2. Bind the remote package and the DBRM into a plan at the local site, using
the bind option DBPROTOCOL(DRDA).
v The previous examples show statements that are executed with DRDA access
only.
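The bind commands for Method 1 might look like the following sketch (the
collection, member, and plan names COLLA, PGM1, and PLANA are hypothetical):

```
BIND PACKAGE(COLLA) MEMBER(PGM1) DBPROTOCOL(DRDA)
BIND PACKAGE(CHICAGO.COLLA) MEMBER(PGM1)
BIND PLAN(PLANA) PKLIST(*.COLLA.PGM1) DBPROTOCOL(DRDA)
```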
If you update two or more DBMSs, you must consider how updates can be
coordinated, so that units of work at the two DBMSs are either both committed or
both rolled back. Be sure to read “Coordinating updates to two or more data
sources” on page 433.
You can use the resource limit facility at the server to govern distributed SQL
statements. Governing is by plan for DB2 private protocol access and by package
for DRDA access. See “Moving from DB2 private protocol access to DRDA access”
on page 449 for information about changes you need to make to your resource limit
facility tables when you move from DB2 private protocol access to DRDA access.
In a three-part table name, the first part denotes the location. The local DB2 makes
and breaks an implicit connection to a remote server as needed.
| When a three-part name is parsed and forwarded to a remote location, any special
| register settings are automatically propagated to the remote server, so that SQL
| statements are processed the same way regardless of the site at which they run.
The following overview shows how the application uses three-part names:
Read input values
Do for all locations
Read location name
Set up statement to prepare
Prepare statement
Execute statement
End loop
Commit
After the application obtains a location name, for example ’SAN_JOSE’, it next
creates the following character string:
INSERT INTO SAN_JOSE.DSN8810.PROJ VALUES (?,?,?,?,?,?,?,?)
The application assigns the character string to the variable INSERTX and then
executes these statements:
EXEC SQL
PREPARE STMT1 FROM :INSERTX;
EXEC SQL
EXECUTE STMT1 USING :PROJNO, :PROJNAME, :DEPTNO, :RESPEMP,
:PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ;
The host variables for Spiffy’s project table match the declaration for the sample
project table in “Project table (DSN8810.PROJ)” on page 904.
To keep the data consistent at all locations, the application commits the work only
when the loop has executed for all locations. Either every location has committed
the INSERT or, if a failure has prevented any location from inserting, all other
locations have rolled back the INSERT. (If a failure occurs during the commit
process, the entire unit of work can be indoubt.)
In this example, Spiffy’s application executes CONNECT for each server in turn,
and the server executes INSERT. In this case, the tables to be updated each have
the same name, although each table is defined at a different server. The application
executes the statements in a loop, with one iteration for each server.
The application connects to each new server by means of a host variable in the
CONNECT statement. CONNECT changes the special register CURRENT SERVER
to show the location of the new server. The values to insert in the table are
transmitted to a location as input host variables.
The following overview shows how the application uses explicit CONNECTs:
Read input values
Do for all locations
Read location name
Connect to location
Execute insert statement
End loop
Commit
Release all
For example, the application inserts a new location name into the variable
LOCATION_NAME and executes the following statements:
EXEC SQL
CONNECT TO :LOCATION_NAME;
EXEC SQL
INSERT INTO DSN8810.PROJ VALUES (:PROJNO, :PROJNAME, :DEPTNO, :RESPEMP,
:PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ);
To keep the data consistent at all locations, the application commits the work only
when the loop has executed for all locations. Either every location has committed
the INSERT or, if a failure has prevented any location from inserting, all other
locations have rolled back the INSERT. (If a failure occurs during the commit
process, the entire unit of work can be indoubt.)
The host variables for Spiffy’s project table match the declaration for the sample
project table in “Project table (DSN8810.PROJ)” on page 904. LOCATION_NAME is
a character-string variable of length 16.
| For example, suppose that an employee database is deployed across two sites:
| SVL_EMPLOYEE and SJ_EMPLOYEE. To access each site, insert a row for each
| site into SYSIBM.LOCATIONS. Both rows contain EMPLOYEE as the DBALIAS
| value. When an application issues a CONNECT TO SVL_EMPLOYEE statement,
| DB2 searches the SYSIBM.LOCATIONS table to retrieve the location and network
| attributes of the remote server.
| If the application uses fully qualified object names in its SQL statements, DB2
| sends the statements to the remote server without modification. For example,
| suppose that the application issues the statement SELECT * FROM
| SVL_EMPLOYEE.authid.table. DB2 sends the statement unmodified, but it
| accesses the remote server by using the EMPLOYEE alias. The remote server
| must identify itself as both SVL_EMPLOYEE and EMPLOYEE; otherwise, it rejects
| the SQL statement with a message indicating that the database is not found.
Releasing connections
When you connect to remote locations explicitly, you must also break those
connections explicitly. You have considerable flexibility in determining how long
connections remain open, so the RELEASE statement differs significantly from
CONNECT.
Example: RELEASE statement: Using the RELEASE statement, you can place
any of the following in the release-pending state.
v A specific connection that the next unit of work does not use:
EXEC SQL RELEASE SPIFFY1;
v The current SQL connection, whatever its location name:
EXEC SQL RELEASE CURRENT;
v All connections except the local connection:
EXEC SQL RELEASE ALL;
v All DB2 private protocol connections. If the first phase of your application
program uses DB2 private protocol access and the second phase uses DRDA
access, open DB2 private protocol connections from the first phase could cause
a CONNECT operation to fail in the second phase. To prevent that error, execute
the following statement before the commit operation that separates the two
phases:
EXEC SQL RELEASE ALL PRIVATE;
PRIVATE refers to DB2 private protocol connections, which exist only between
instances of DB2 UDB for z/OS.
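Note that RELEASE only places connections in the release-pending state; the
connections actually end at the next commit point. A typical sequence for the
two-phase scenario described above is:

```sql
EXEC SQL RELEASE ALL PRIVATE;
EXEC SQL COMMIT;
```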
Three-part names and multiple servers: If you use a three-part name, or an alias
that resolves to one, in a statement executed at a remote server by DRDA access,
and if the location name is not that of the server, then the method by which the
remote server accesses data at the named location depends on the value of
DBPROTOCOL. If the package at the first remote server is bound with
DBPROTOCOL(PRIVATE), DB2 uses DB2 private protocol access to access the
second remote server. If the package at the first remote server is bound with
DBPROTOCOL(DRDA), DB2 uses DRDA access to access the second remote
server. The following steps are recommended so that access to the second remote
server is by DRDA access:
1. Rebind the package at the first remote server with DBPROTOCOL(DRDA).
2. Bind the package that contains the three-part name at the second server.
However, you cannot perform the following series of actions, which includes a
backward reference to the declared temporary table:
EXEC SQL
  DECLARE GLOBAL TEMPORARY TABLE T1    /* Define the temporary table  */
    (CHARCOL CHAR(6) NOT NULL);        /* at the local site (ATLANTA) */
EXEC SQL CONNECT TO CHICAGO;           /* Connect to the remote site  */
EXEC SQL INSERT INTO ATLANTA.SESSION.T1
  (VALUES 'ABCDEF');                   /* Cannot access temp table    */
                                       /* from the remote site (backward reference) */
Savepoints: In a distributed environment, you can set savepoints only if you use
DRDA access with explicit CONNECT statements. If you set a savepoint and then
execute an SQL statement with a three-part name, an SQL error occurs.
For more information about savepoints, see “Using savepoints to undo selected
changes within a unit of work” on page 421.
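For example, an application can set a savepoint whose scope is the CHICAGO site
by connecting first (a sketch that follows the scope rules described above):

```sql
EXEC SQL CONNECT TO CHICAGO;
EXEC SQL SAVEPOINT A ON ROLLBACK RETAIN CURSORS;
EXEC SQL ROLLBACK TO SAVEPOINT A;
```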
DB2 and IMS, and DB2 and CICS, jointly implement a two-phase commit process.
You can update an IMS database and a DB2 table in the same unit of work. If a
system or communication failure occurs between committing the work on IMS and
on DB2, the two programs restore the two systems to a consistent point when
activity resumes.
For more information about the two-phase commit process, see Part 4 (Volume 1)
of DB2 Administration Guide.
To achieve the effect of coordinated updates with a restricted system, you must first
update one system and commit that work, and then update the second system and
commit its work. If a failure occurs after the first update is committed and before the
second update is committed, no automatic provision exists for bringing the two
systems back to a consistent point. Your program must perform that task.
Restricting to CONNECT (type 1): You can also restrict your program completely
to the rules for restricted systems, by using the type 1 rules for CONNECT. To put
those rules into effect for a package, use the precompiler option CONNECT(1). Be
careful not to use packages precompiled with CONNECT(1) and packages
precompiled with CONNECT(2) in the same package list. The first CONNECT
statement executed by your program determines which rules are in effect for the
entire execution: type 1 or type 2. An attempt to execute a later CONNECT
statement that is precompiled with the other type returns an error.
For more information about CONNECT (Type 1) and about managing connections
to other systems, see Chapter 1 of DB2 SQL Reference.
Use LOB locators instead of LOB host variables: If you need to store only a
portion of a LOB value at the client, or if your client program manipulates the LOB
data but does not need a copy of it, LOB locators are a good choice. When a client
program retrieves a LOB column from a server into a locator, DB2 transfers only the
4-byte locator value to the client, not the entire LOB value. For information about
how to use LOB locators in an application, see “Using LOB locators to save
storage” on page 288.
Use stored procedure result sets: When you return LOB data to a client program
from a stored procedure, use result sets, rather than passing the LOB data to the
client in parameters. Using result sets to return data causes less LOB
materialization and less movement of data among address spaces. For information
about how to write a stored procedure to return result sets, see “Writing a stored
procedure to return result sets to a DRDA client” on page 590. For information
about how to write a client program to receive result sets, see “Writing a DB2 UDB
for z/OS client program or SQL procedure to receive result sets” on page 648.
Set the CURRENT RULES special register to DB2: When a DB2 UDB for z/OS
server receives an OPEN request for a cursor, the server uses the value in the
CURRENT RULES special register to determine the type of host variables the
associated statement uses to retrieve LOB values. If you specify a value of DB2 for
CURRENT RULES before you perform a CONNECT, and the first FETCH for the
cursor uses a LOB locator to retrieve LOB column values, DB2 lets you use only
LOB locators for all subsequent FETCH statements for that column until you close
the cursor. If the first FETCH uses a host variable, DB2 lets you use only host
variables for all subsequent FETCH statements for that column until you close the
cursor. However, if you set the value of CURRENT RULES to STD, DB2 lets you
use the same open cursor to fetch a LOB column into either a LOB locator or a
host variable.
For example, an end user might want to browse through a large set of employee
records but want to look at pictures of only a few of those employees. At the server,
you set the CURRENT RULES special register to DB2. In the application, you
declare and open a cursor to select employee records. The application then fetches
all picture data into 4-byte LOB locators. Because DB2 knows that 4 bytes of LOB
data is returned for each FETCH, DB2 can fill the network buffers with locators for
many pictures. When a user wants to see a picture for a particular person, the
application can retrieve the picture from the server by assigning the value that is
referenced by the LOB locator to a LOB host variable:
SQL TYPE IS BLOB my_blob[1M];
SQL TYPE IS BLOB AS LOCATOR my_loc;
...
FETCH C1 INTO :my_loc;       /* Fetch BLOB into LOB locator  */
...
SET :my_blob = :my_loc;      /* Assign BLOB to host variable */
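The requester can establish the special register value before connecting, as the
description above requires (a sketch; LOCATION_NAME is a host variable as in the
earlier examples):

```sql
EXEC SQL SET CURRENT RULES = 'DB2';
EXEC SQL CONNECT TO :LOCATION_NAME;
```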
DEFER(PREPARE)
To improve performance for both static and dynamic SQL used in DB2 private
protocol access, and for dynamic SQL in DRDA access, consider specifying the
option DEFER(PREPARE) when you bind or rebind your plans or packages.
Remember that statically bound SQL statements in DB2 private protocol access are
processed dynamically. When a dynamic SQL statement accesses remote data, the
PREPARE and EXECUTE statements can be transmitted over the network together
and processed at the remote location. Responses to both statements can be sent
together back to the local subsystem, thus reducing traffic on the network. DB2
does not prepare the dynamic SQL statement until the statement executes. (The
exception to this is dynamic SELECT, which combines PREPARE and DESCRIBE,
regardless of whether the DEFER(PREPARE) option is in effect.)
All PREPARE messages for dynamic SQL statements that refer to a remote object
will be deferred until one of these events occurs:
v The statement executes
v The application requests a description of the results of the statement
When you use predictive governing, the SQL code returned to the requester if the
server exceeds a predictive governing warning threshold depends on the level of
DRDA at the requester. See “Writing an application to handle predictive governing”
on page 544 for more information.
For DB2 private protocol access, when a static SQL statement refers to a remote
object, the transparent PREPARE statement and the EXECUTE statements are
automatically combined and transmitted across the network together. The
PREPARE statement is deferred only if you specify the bind option
DEFER(PREPARE).
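For example, deferred preparation can be requested at bind time (a sketch; the
plan name PLANA is hypothetical):

```
BIND PLAN(PLANA) DEFER(PREPARE)
```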
PKLIST
The order in which you specify package collections in a package list can affect the
performance of your application program. When a local instance of DB2 attempts to
execute an SQL statement at a remote server, the local DB2 subsystem must
determine which package collection the SQL statement is in. DB2 must send a
message to the server, requesting that the server check each collection ID for the
SQL statement, until the statement is found or no more collection IDs are in the
package list. You can reduce the amount of network traffic, and thereby improve
performance, by reducing the number of package collections that each server must
search. The following examples show ways to reduce the collections to search:
v Reduce the number of packages per collection that must be searched. The
following example specifies only one package in each collection:
PKLIST(S1.COLLA.PGM1, S1.COLLB.PGM2)
v Reduce the number of package collections at each location that must be
searched. The following example specifies only one package collection at each
location:
PKLIST(S1.COLLA.*, S2.COLLB.*)
v Reduce the number of collections that are used for each application. The
following example specifies only one collection to search:
PKLIST(*.COLLA.*)
You can also specify the package collection that is associated with an SQL
statement in your application program. Execute the SQL statement SET CURRENT
PACKAGESET before you execute an SQL statement to tell DB2 which package
collection to search for the statement.
When you use DEFER(PREPARE) with DRDA access, the package containing the
statements whose preparation you want to defer must be the first qualifying entry in
the package search sequence that DB2 uses. (See “Identifying packages at run
time” on page 475 for more information.) For example, assume that the package list
for a plan contains two entries:
PKLIST(LOCB.COLLA.*, LOCB.COLLB.*)
If the intended package is in collection COLLB, ensure that DB2 searches that
collection first. You can do this by executing the SQL statement:
SET CURRENT PACKAGESET = ’COLLB’;
For NODEFER(PREPARE), the collections in the package list can be in any order,
but if the package is not found in the first qualifying PKLIST entry, the result is
significant network overhead for searching through the list.
REOPT(ALWAYS)
| When you specify REOPT(ALWAYS), DB2 determines access paths at both bind
time and run time for statements that contain one or more of the following variables:
v Host variables
v Parameter markers
v Special registers
At run time, DB2 uses the values in those variables to determine the access paths.
| If you specify the bind option REOPT(ALWAYS) or REOPT(ONCE), DB2 sets the
| bind option DEFER(PREPARE) automatically. However, when you specify
| REOPT(ONCE), DB2 determines the access path for a statement only once (at the
| first run time).
Because of performance costs when DB2 reoptimizes the access path at run time,
you should use one of the following bind options:
| v REOPT(ALWAYS) — use this option only on packages or plans that contain
| statements that perform poorly because of a bad access path.
| v REOPT(ONCE) — use this option when the following conditions are true:
| – You are using the dynamic statement cache.
| – You have plans or packages that contain dynamic SQL statements that
| perform poorly because of access path selection.
| – Your dynamic SQL statements are executed many times with possibly different
| input variables.
| v REOPT(NONE) — use this option when you bind a plan or package that contains
| statements that use DB2 private protocol access.
| If you specify REOPT(ALWAYS) when you bind a plan that contains statements that
use DB2 private protocol access to access remote data, DB2 prepares those
statements twice. See “How bind options REOPT(ALWAYS) and REOPT(ONCE)
| affect dynamic SQL” on page 567 for more information about REOPT(ALWAYS).
CURRENTDATA(NO)
Use this bind option to force block fetch for ambiguous queries. See “Use block
fetch” on page 440 for more information about block fetch.
KEEPDYNAMIC(YES)
Use this bind option to improve performance for queries that use cursors defined
WITH HOLD. With KEEPDYNAMIC(YES), DB2 automatically closes the cursor
when no more data exists for retrieval. The client does not need to send a network
message to tell DB2 to close the cursor. For more information about
KEEPDYNAMIC(YES), see “Keeping prepared statements after commit points” on
page 541.
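A cursor that benefits from KEEPDYNAMIC(YES) might be declared as follows (a
sketch using the sample employee table):

```sql
EXEC SQL DECLARE C1 CURSOR WITH HOLD FOR
  SELECT EMPNO, LASTNAME
    FROM DSN8810.EMP;
```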
For more information about block fetch, see Part 5 (Volume 2) of DB2
Administration Guide.
How to ensure block fetching: To use either type of block fetch, DB2 must
determine that the cursor is not used for updating or deleting. Indicate that the
cursor does not modify data by adding FOR FETCH ONLY or FOR READ ONLY to
the query in the DECLARE CURSOR statement. If you do not use FOR FETCH
ONLY or FOR READ ONLY, DB2 still uses block fetch for the query if any of the
following conditions are true:
v The cursor is a non-scrollable cursor, and the result table of the cursor is
read-only. (See Chapter 5 of DB2 SQL Reference for a description of read-only
tables.)
v The cursor is a scrollable cursor that is declared as INSENSITIVE, and the result
table of the cursor is read-only.
v The cursor is a scrollable cursor that is declared as SENSITIVE, the result table
of the cursor is read-only, and the value of bind option CURRENTDATA is NO.
v The result table of the cursor is not read-only, but the cursor is ambiguous, and
the value of bind option CURRENTDATA is NO. A cursor is ambiguous when any
of the following conditions are true:
– It is not defined with the clauses FOR FETCH ONLY, FOR READ ONLY, or
FOR UPDATE.
– It is not defined on a read-only result table.
– It is not the target of a WHERE CURRENT clause on an SQL UPDATE or
DELETE statement.
– It is in a plan or package that contains the SQL statements PREPARE or
EXECUTE IMMEDIATE.
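For example, adding FOR READ ONLY makes the cursor unambiguous, and
therefore eligible for block fetch regardless of the CURRENTDATA setting (a sketch
using the sample employee table):

```sql
EXEC SQL DECLARE C1 CURSOR FOR
  SELECT EMPNO, LASTNAME
    FROM DSN8810.EMP
  FOR READ ONLY;
```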
Table 61 summarizes the conditions under which a DB2 server uses block fetch for
a scrollable cursor when the cursor is used to retrieve result sets.
When a DB2 UDB for z/OS requester uses a scrollable cursor to retrieve data from
a DB2 UDB for z/OS server, the following conditions are true:
v The requester never requests more than 64 rows in a query block, even if more
rows fit in the query block. In addition, the requester never requests extra query
blocks. This is true even if the setting of field EXTRA BLOCKS REQ in the
DISTRIBUTED DATA FACILITY PANEL 2 installation panel on the requester
allows extra query blocks to be requested. If you want to limit the number of rows
that the server returns to fewer than 64, you can specify the FETCH FIRST n
ROWS ONLY clause when you declare the cursor.
v The requester discards rows of the result table if the application does not use
those rows. For example, if the application fetches row n and then fetches row
n+2, the requester discards row n+1. The application gets better performance for
a blocked scrollable cursor if it mostly scrolls forward, fetches most of the rows in
a query block, and avoids frequent switching between FETCH ABSOLUTE
statements with negative and positive values.
v If the scrollable cursor does not use block fetch, the server returns one row for
each FETCH statement.
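For example, a scrollable cursor that limits the result to fewer than 64 rows, as
described above, might be declared as follows (a sketch using the sample
employee table):

```sql
EXEC SQL DECLARE C2 INSENSITIVE SCROLL CURSOR FOR
  SELECT EMPNO, LASTNAME
    FROM DSN8810.EMP
  FETCH FIRST 20 ROWS ONLY;
```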
The number of rows that DB2 transmits on each network transmission depends on
the following factors:
v If n rows of the SQL result set fit within a single DRDA query block, a DB2 server
can send n rows to any DRDA client. In this case, DB2 sends n rows in each
network transmission, until the entire query result set is exhausted.
v If n rows of the SQL result set exceed a single DRDA query block, the number of
rows that are contained in each network transmission depends on the client’s
DRDA software level and configuration:
– If the client does not support extra query blocks, the DB2 server automatically
reduces the value of n to match the number of rows that fit within a DRDA
query block.
Specifying a large value for n in OPTIMIZE FOR n ROWS can increase the number
of DRDA query blocks that a DB2 server returns in each network transmission. This
function can significantly improve performance for applications that use DRDA
access to download large amounts of data. However, this same function can
degrade performance if you do not use it properly. The following examples
demonstrate the performance problems that can occur when you do not use
OPTIMIZE FOR n ROWS judiciously.
In Figure 146 on page 444, the DRDA client opens a cursor and fetches rows from
the cursor. At some point before all rows in the query result set are returned, the
application issues an SQL INSERT. DB2 uses normal DRDA blocking, which has
two advantages over the blocking that is used for OPTIMIZE FOR n ROWS:
v If the application issues an SQL statement other than FETCH (the example
shows an INSERT statement), the DRDA client can transmit the SQL statement
immediately, because the DRDA connection is not in use after the SQL OPEN.
v If the SQL application closes the cursor before fetching all the rows in the query
result set, the server fetches only the number of rows that fit in one query block,
which is 100 rows of the result set. Basically, the DRDA query block size places
an upper limit on the number of rows that are fetched unnecessarily.
In Figure 147 on page 445, the DRDA client opens a cursor and fetches rows from
the cursor using OPTIMIZE FOR n ROWS. Both the DRDA client and the DB2
server are configured to support multiple DRDA query blocks. At some time before
the end of the query result set, the application issues an SQL INSERT. Because
OPTIMIZE FOR n ROWS is being used, the DRDA connection is not available
when the SQL INSERT is issued because the connection is still being used to
receive the DRDA query blocks for 1000 rows of data. This causes two
performance problems:
v Application elapsed time can increase if the DRDA client waits for a large query
result set to be transmitted, before the DRDA connection can be used for other
SQL statements. Figure 147 on page 445 shows how an SQL INSERT statement
can be delayed because of a large query result set.
v If the application closes the cursor before fetching all the rows in the SQL result
set, the server might fetch a large number of rows unnecessarily.
For more information about OPTIMIZE FOR n ROWS, see “Minimizing overhead for
retrieving few rows: OPTIMIZE FOR n ROWS” on page 714.
| If you specify FETCH FIRST n ROWS ONLY and do not specify OPTIMIZE FOR n
| ROWS, the access path for the statement will use the value that is specified for
| FETCH FIRST n ROWS ONLY for optimization. However, DRDA will not consider
| the value when it determines network blocking.
| When you specify both the FETCH FIRST n ROWS ONLY clause and the
| OPTIMIZE FOR m ROWS clause in a statement, DB2 uses the value that you
| specify for OPTIMIZE FOR m ROWS, even if that value is larger than the value that
| you specify for the FETCH FIRST n ROWS ONLY clause.
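For example, in the following statement, DB2 uses 100, the OPTIMIZE FOR value, for network blocking, even though the query returns at most 10 rows (the query itself is only an illustration):

```sql
SELECT * FROM EMP
  OPTIMIZE FOR 100 ROWS
  FETCH FIRST 10 ROWS ONLY;
```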
Fast implicit close and FETCH FIRST n ROWS ONLY: Fast implicit close means
that during a distributed query, the DB2 server automatically closes the cursor after
it prefetches the nth row if you specify FETCH FIRST n ROWS ONLY, or when
there are no more rows to return. Fast implicit close can improve performance
because it saves an additional network transmission between the client and the
server.
DB2 uses fast implicit close when the following conditions are true:
v The query uses limited block fetch.
v The query retrieves no LOBs.
v The cursor is not a scrollable cursor.
v Either of the following conditions is true:
– The cursor is declared WITH HOLD, and the package or plan that contains
the cursor is bound with the KEEPDYNAMIC(YES) option.
– The cursor is not defined WITH HOLD.
When you use FETCH FIRST n ROWS ONLY, and DB2 does a fast implicit close,
the DB2 server closes the cursor after it prefetches the nth row, or when there are
no more rows to return.
For OPTIMIZE FOR n ROWS, when n is 1, 2, or 3, DB2 uses the value 16 (instead
of n) for network blocking and prefetches 16 rows. As a result, network usage is
more efficient even though DB2 uses the small value of n for query optimization.
Suppose that you need only one row of the result table. To avoid 15 unnecessary
prefetches, add the FETCH FIRST 1 ROW ONLY clause:
SELECT * FROM EMP
OPTIMIZE FOR 1 ROW
FETCH FIRST 1 ROW ONLY;
How to prevent block fetching: If your application requires data currency for a
cursor, you need to prevent block fetching for the data it points to. To prevent block
fetching for a distributed cursor, declare the cursor with the FOR UPDATE clause.
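For example, the following declaration prevents block fetch because the cursor can be used for positioned updates (the cursor name and column are illustrative):

```sql
DECLARE C2 CURSOR FOR
  SELECT EMPNO, SALARY
  FROM EMP
  FOR UPDATE OF SALARY;
```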
Converting mixed data: When ASCII MIXED data or Unicode MIXED data is
converted to EBCDIC MIXED, the converted string is longer than the source string.
An error occurs if that conversion is performed on a fixed-length input host variable.
The remedy is to use a varying-length string variable with a maximum length that is
sufficient to contain the expansion.
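For example, a COBOL program might receive the converted string in a varying-length host variable that allows room for expansion; the variable name and maximum length shown here are illustrative only:

```cobol
01  REMARKS-VAR.
    49  REMARKS-LEN   PIC S9(4) USAGE COMP.
    49  REMARKS-TEXT  PIC X(1500).
```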
Identifying the server at run time: The special register CURRENT SERVER
contains the location name of the system you are connected to. You can assign that
name to a host variable with a statement like this:
EXEC SQL SET :CS = CURRENT SERVER;
The encoding scheme in which DB2 returns data depends on two factors:
v The encoding scheme of the requesting system.
If the requester is ASCII or Unicode, the returned data is ASCII or Unicode. If the
requester is EBCDIC, the returned data is EBCDIC, even though it is stored at
the server as ASCII or Unicode. However, if the SELECT statement that is used
to retrieve the data contains an ORDER BY clause, the data displays in ASCII or
Unicode order.
v Whether the application program overrides the CCSID for the returned data. The
ways to do this are as follows:
– For static SQL
You can bind a plan or package with the ENCODING bind option to control
the CCSIDs for all static data in that plan or package. For example, if you
specify ENCODING(UNICODE) when you bind a package at a remote DB2
UDB for z/OS system, the data that is returned in host variables from the
remote system is encoded in the default Unicode CCSID for that system.
See Part 2 of DB2 Command Reference for more information about the
ENCODING bind options.
– For static or dynamic SQL
An application program can specify overriding CCSIDs for individual host
variables in DECLARE VARIABLE statements. See “Changing the coded
character set ID of host variables” on page 77 for information about how to
specify the CCSID for a host variable.
An application program that uses an SQLDA can specify an overriding CCSID
for the returned data in the SQLDA. When the application program executes a
FETCH statement, you receive the data in the CCSID that is specified in the
SQLDA. See “Changing the CCSID for retrieved data” on page 561 for
information about how to specify an overriding CCSID in an SQLDA.
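For example, a program might use a DECLARE VARIABLE statement to specify that data retrieved into a particular host variable is encoded in Unicode UTF-8 (CCSID 1208); the host variable name is illustrative:

```sql
EXEC SQL DECLARE :HV1 VARIABLE CCSID 1208;
```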
| Additionally, you must include a package at the server for any SQL statements that
| run at the server. See Appendix B of DB2 SQL Reference for information about
| which SQL statements are processed at the requester.
Before you can run DB2 applications of the first type, you must precompile,
compile, link-edit, and bind them. This chapter details the steps needed to prepare
and run this type of application program; see “Steps in program preparation” on
page 454.
You can control the steps in program preparation by using the following methods:
v “Using JCL procedures to prepare applications” on page 489
v “Using ISPF and DB2 Interactive (DB2I)” on page 495
For information about applications with ODBC calls that pass dynamic SQL
statements as arguments, see DB2 ODBC Guide and Reference.
For information about running REXX programs, which you do not prepare for
execution, see “Running a DB2 REXX application” on page 489.
For information about preparing and running Java programs, see DB2 Application
Programming Guide and Reference for Java.
Productivity hint: To avoid rework, first test your SQL statements using SPUFI,
and then compile your program without SQL statements and resolve all compiler
errors. Then proceed with the preparation and the DB2 precompile and bind steps.
| Notes:
| 1. The DBRM is created with NEWFUN(NO), which prevents the use of DB2 Version 8 functions.
| 2. The DBRM is created with NEWFUN(YES), although the program does not use any DB2 Version 8 functions.
| 3. The DBRM is created with NEWFUN(YES), and the program uses DB2 Version 8 functions.
|
For more information about the NEWFUN option, see Table 63 on page 462. For
information about DB2 Version 8 new-function mode, see DB2 Installation Guide.
| For C, C++, COBOL, or PL/I applications, you can use one of the following
techniques to process SQL statements:
v Use the DB2 precompiler before you compile your program.
You can use this technique with any version of C or C++, COBOL, or PL/I.
v Invoke the SQL statement coprocessor for the host language that you are using
as you compile your program:
| – For C or C++, invoke the SQL statement coprocessor by specifying the SQL
| compiler option. You need C/C++ for z/OS Version 1 Release 5 or later. See
| “Using the SQL statement coprocessor for C programs” on page 458 or
| “Using the SQL statement coprocessor for C++ programs” on page 459. For
| more information about using the C/C++ SQL statement coprocessor, see
| z/OS C/C++ User's Guide.
– For COBOL, invoke the SQL statement coprocessor by specifying the SQL
compiler option. You need Enterprise COBOL for z/OS and OS/390 Version 2
Release 2 or later to use this technique. See “Using the SQL statement
coprocessor for COBOL programs” on page 460. For more information about
using the COBOL SQL statement coprocessor, see IBM COBOL for MVS &
VM Programming Guide.
– For PL/I, invoke the SQL statement coprocessor by specifying the
PP(SQL(’option,...’)) compiler option. You need Enterprise PL/I for z/OS and
OS/390 Version 3 Release 1 or later to use this technique. See “Using the
SQL statement coprocessor for PL/I programs” on page 461. For more
information about using the PL/I SQL statement coprocessor, see IBM
Enterprise PL/I for z/OS and OS/390 Programming Guide.
In this section, references to the SQL statement processor apply to either the DB2
precompiler or the SQL statement coprocessor. References to the DB2 precompiler
apply specifically to the precompiler that is provided with DB2.
CICS
If the application contains CICS commands, you must translate the program
before you compile it. (See “Translating command-level statements in a CICS
program” on page 470.)
When you precompile your program, DB2 does not need to be active. The
precompiler does not validate the names of tables and columns that are used in
SQL statements. However, the precompiler checks table and column references
against SQL DECLARE TABLE statements in the program. Therefore, you should
use DCLGEN to obtain accurate SQL DECLARE TABLE statements.
Input to the precompiler: The primary input for the precompiler consists of
statements in the host programming language and embedded SQL statements.
Important
The size of a source program that DB2 can precompile is limited by the region
size and the virtual memory available to the precompiler. The maximum region
size and memory available to the DB2 precompiler are usually around 8 MB, but
these amounts vary with each system installation.
You can use the SQL INCLUDE statement to get secondary input from the include
library, SYSLIB. The SQL INCLUDE statement reads input from the specified
member of SYSLIB until it reaches the end of the member.
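For example, the following statement reads the member DECLTAB (a hypothetical member that might contain DCLGEN output) from SYSLIB at precompile time:

```sql
EXEC SQL INCLUDE DECLTAB;
```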
Another preprocessor, such as the PL/I macro preprocessor, can generate source
statements for the precompiler. Any preprocessor that runs before the precompiler
must be able to pass on SQL statements. Similarly, other preprocessors can
process the source code, after you precompile and before you compile or
assemble.
There are limits on the forms of source statements that can pass through the
precompiler. For example, constants, comments, and other source syntax that are
not accepted by the host compilers (such as a missing right brace in C) can
interfere with precompiler source scanning and cause errors. You might want to run
the host compiler before the precompiler to find the source statements that are
unacceptable to the host compiler. At this point you can ignore the compiler error
messages for SQL statements. After the source statements are free of unacceptable
compiler errors, you can then perform the normal DB2 program preparation process
for that host language.
Output from the precompiler: The following sections describe various kinds of
output from the precompiler.
Listing output: The output data set, SYSPRINT, used to print output from the
| precompiler, has an LRECL of 133 and a RECFM of FBA. This data set uses the
| CCSID of the source program. Statement numbers in the precompiler listing
display as they appear in the source program. However, DB2 stores statement
numbers greater than 32767 as 0 in the DBRM.
The DB2 precompiler writes the following information in the SYSPRINT data set:
Modified source statements: The DB2 precompiler writes the source statements
that it processes to SYSCIN, the input data set to the compiler or assembler. This
data set must have attributes RECFM F or FB, and LRECL 80. The modified source
code contains calls to the DB2 language interface. The SQL statements that the
| calls replace appear as comments. This data set uses the CCSID of the source
| program.
Database request modules: The major output from the precompiler is a database
request module (DBRM). That data set contains the SQL statements and host
variable information extracted from the source program, along with information that
identifies the program and ties the DBRM to the translated source statements. It
becomes the input to the bind process.
The data set requires space to hold all the SQL statements plus space for each
host variable name and some header information. The header information alone
requires approximately two records for each DBRM, 20 bytes for each SQL record,
and 6 bytes for each host variable.
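Using these figures, you can sketch a rough estimate of the header space; the following function is only an illustration of the arithmetic, not a DB2-supplied formula:

```python
def dbrm_header_bytes(sql_statements, host_variables, record_length=80):
    # Approximately two records per DBRM, 20 bytes per SQL record,
    # and 6 bytes per host variable name.
    return 2 * record_length + 20 * sql_statements + 6 * host_variables

# A program with 50 SQL statements and 30 host variables needs roughly
# this much header space, in addition to the SQL statement text itself.
print(dbrm_header_bytes(50, 30))
```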
For an exact format of the DBRM, see the DBRM mapping macros, DSNXDBRM
| and DSNXNBRM in library prefix.SDSNMACS. The DCB attributes of the data set
are RECFM FB, LRECL 80. The precompiler sets the characteristics. You can use
IEBCOPY, IEHPROGM, TSO commands COPY and DELETE, or other PDS
management tools for maintaining these data sets.
| In a DBRM, the SQL statements and the list of host variable names use the
| following character encoding schemes:
| v EBCDIC, for the result of a DB2 Version 8 precompilation with NEWFUN NO or a
| precompilation in an earlier release of DB2
| v Unicode UTF-8, for the result of a DB2 Version 8 precompilation with NEWFUN
| YES
All other character fields in a DBRM use EBCDIC. The current release marker
(DBRMMRIC) in the header of a DBRM is marked according to the release of the
precompiler, regardless of the value of NEWFUN. In a Version 8 precompilation, the
DBRM dependency marker (DBRMPDRM) in the header of a DBRM is marked for
Version 8 if the value of NEWFUN is YES; otherwise, it is not marked for Version 8.
To use the SQL statement coprocessor, you need to do the following things:
v Specify the following options when you compile your program:
– SQL
The SQL compiler option indicates that you want the compiler to invoke the
SQL statement coprocessor. Specify a list of SQL processing options in
parentheses after the SQL keyword. The list is enclosed in single or double
quotation marks. Table 63 on page 462 lists the options that you can specify.
For example, suppose that you want to process SQL statements as you
compile a C program. In your program, the apostrophe is the string delimiter
in SQL statements, and the SQL statements conform to DB2 rules. This
means that you need to specify the APOSTSQL and STDSQL(NO) options.
Therefore, you need to include this option in your compile step:
SQL("APOSTSQL STDSQL(NO)")
– LIMITS(FIXEDBIN(63), FIXEDDEC(31))
These options are required for LOB support.
– SIZE(nnnnnn)
You might need to increase the SIZE value (nnnnnn) so that the user region is
large enough for the SQL statement coprocessor. Do not specify SIZE(MAX).
v Include DD statements for the following data sets in the JCL for your compile
step:
– DB2 load library (prefix.SDSNLOAD)
| To use the SQL statement coprocessor, you need to do the following things:
| v Specify the following options when you compile your program:
| – SQL
| The SQL compiler option indicates that you want the compiler to invoke the
| SQL statement coprocessor. Specify a list of SQL processing options in
| parentheses after the SQL keyword. The list is enclosed in single or double
| quotation marks. Table 63 on page 462 lists the options that you can specify.
| For example, suppose that you want to process SQL statements as you
| compile a C++ program. In your program, the apostrophe is the string
| delimiter in SQL statements, and the SQL statements conform to DB2 rules.
| This means that you need to specify the APOSTSQL and STDSQL(NO)
| options. Therefore, you need to include the following option in your compile
| step:
| SQL("APOSTSQL STDSQL(NO)")
| – The following options are required for LOB support:
| LIMITS(FIXEDBIN(63), FIXEDDEC(31))
| – You might need to increase the SIZE value so that the user region is large
| enough for the SQL statement coprocessor. To increase the SIZE value,
| specify the following option, where nnnnnn is the SIZE value that you want:
| SIZE(nnnnnn)
To use the SQL statement coprocessor, you need to do the following things:
v Specify the following options when you compile your program:
– SQL
The SQL compiler option indicates that you want the compiler to invoke the
SQL statement coprocessor. Specify a list of SQL processing options in
parentheses after the SQL keyword. The list is enclosed in single or double
quotation marks. Table 63 on page 462 lists the options that you can specify.
For example, suppose that you want to process SQL statements as you
compile a COBOL program. In your program, the apostrophe is the string
delimiter in SQL statements, and the SQL statements conform to DB2 rules.
This means that you need to specify the APOSTSQL and STDSQL(NO)
options. Therefore, you need to include this option in your compile step:
SQL("APOSTSQL STDSQL(NO)")
– LIB
You need to specify the LIB option when you specify the SQL option, whether
or not you have any COPY, BASIS, or REPLACE statements in your program.
– LIMITS(FIXEDBIN(63), FIXEDDEC(31))
These options are required for LOB support.
– SIZE(nnnnnn)
You might need to increase the SIZE value (nnnnnn) so that the user region is
large enough for the SQL statement coprocessor. Do not specify SIZE(MAX).
To use the SQL statement preprocessor, you need to do the following things:
v Specify the following options when you compile your program with Enterprise
PL/I for z/OS and OS/390 Version 3 Release 1 or later:
– PP(SQL(’option,...’))
This compiler option indicates that you want the compiler to invoke the SQL
statement preprocessor. Specify a list of SQL processing options in
parentheses after the SQL keyword. The list is enclosed in single or double
quotation marks. Separate options in the list by a comma, blank, or both.
Table 63 on page 462 lists the options that you can specify.
For example, suppose that you want to process SQL statements as you
compile a PL/I program. In your program, the DATE data types require USA
format, and the SQL statements conform to DB2 rules. This means that you
need to specify the DATE(USA) and STDSQL(NO) options. Therefore, you
need to include this option in your compile step:
PP(SQL(’DATE(USA), STDSQL(NO)’))
– LIMITS(FIXEDBIN(63), FIXEDDEC(31))
These options are required for LOB support.
– SIZE(nnnnnn)
You might need to increase the SIZE value so that the user region is large
enough for the SQL statement preprocessor. Do not specify SIZE(MAX).
v Include DD statements for the following data sets in the JCL for your compile
step:
– DB2 load library (prefix.SDSNLOAD)
If you are using the DB2 precompiler, you can specify SQL processing options in
one of the following ways:
v With DSNH operands
v With the PARM.PC option of the EXEC JCL statement
v In DB2I panels
If you are using the SQL statement coprocessor, you specify the SQL processing
options in the following way:
| v For C or C++, specify the options as the argument of the SQL compiler option.
v For COBOL, specify the options as the argument of the SQL compiler option.
v For PL/I, specify the options as the argument of the PP(SQL(’option,...’)) compiler
option.
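For example, a precompile step might pass options to the DB2 precompiler (DSNHPC) through the PARM field of the EXEC statement; the step name and option list here are illustrative only:

```jcl
//PC      EXEC PGM=DSNHPC,PARM='HOST(IBMCOB),APOSTSQL,STDSQL(NO)'
```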
DB2 assigns default values for any SQL processing options for which you do not
explicitly specify a value. Those defaults are the values that are specified in the
APPLICATION PROGRAMMING DEFAULTS installation panels.
Table of SQL processing options: Table 63 shows the options that you can
specify when you use the DB2 precompiler or an SQL statement coprocessor. The
table also includes abbreviations for those options. Not all options apply to all host
languages. For information about which options are ignored for a particular host
language, see Table 63.
Table 63 uses a vertical bar (|) to separate mutually exclusive options, and brackets
([ ]) to indicate that you can sometimes omit the enclosed option.
Table 63. SQL processing options
Option keyword Meaning
APOST1 Recognizes the apostrophe (’) as the string delimiter within host language
statements. The option is not available in all languages; see Table 65 on page 469.
APOST and QUOTE are mutually exclusive options. The default is in the field
STRING DELIMITER on Application Programming Defaults Panel 1 when DB2 is
installed. If STRING DELIMITER is the apostrophe (’), APOST is the default.
APOSTSQL1 Recognizes the apostrophe (’) as the string delimiter in SQL statements.
APOSTSQL and QUOTESQL are mutually exclusive options. The default is in the
field SQL STRING DELIMITER on Application Programming Defaults Panel 1 when
DB2 is installed. If SQL STRING DELIMITER is the apostrophe (’), APOSTSQL is the
default.
ATTACH(TSO|CAF|RRSAF) Specifies the attachment facility that the application uses to access DB2. TSO, CAF,
and RRSAF applications that load the attachment facility can use this option to
specify the correct attachment facility, instead of coding a dummy DSNHLI entry
point.
| CCSID(n) Specifies the numeric value n of the CCSID in which the source program is written.
| The default setting is the EBCDIC system CCSID as specified on the panel DSNTIPF
| during installation.
COMMA Recognizes the comma (,) as the decimal point indicator in decimal or floating point
literals in the following cases:
v For static SQL statements in COBOL programs
v For dynamic SQL statements, when the value of installation parameter DYNRULS
is NO and the package or plan that contains the SQL statements has
DYNAMICRULES bind, define, or invoke behavior.
COMMA and PERIOD are mutually exclusive options. The default (COMMA or
PERIOD) is chosen under DECIMAL POINT IS on Application Programming Defaults
Panel 1 when DB2 is installed.
CONNECT(2|1) Determines whether to apply type 1 or type 2 CONNECT statement rules.
CT™(2|1) CONNECT(2) Default: Apply rules for the CONNECT (Type 2) statement.
CONNECT(1) Apply rules for the CONNECT (Type 1) statement.
If you do not specify the CONNECT option when you precompile a program, the
rules of the CONNECT (Type 2) statement apply. See “Preparing a package for
DRDA access” on page 430 for more information about this option, and Chapter 5 of
DB2 SQL Reference for a comparison of CONNECT (Type 1) and CONNECT (Type
2).
DATE(ISO|USA|EUR|JIS|LOCAL) Specifies that date output should always return in a particular format, regardless of
the format that is specified as the location default. For a description of these formats,
see Chapter 2 of DB2 SQL Reference.
| The default format is determined by the installation defaults of the system where the
| program is bound, not by the installation defaults of the system where the program is
| precompiled.
You cannot use the LOCAL option unless you have a date exit routine.
DEC(15|31) Specifies the maximum precision for decimal arithmetic operations.
If the form Dpp.s is specified, pp must be either 15 or 31, and s, which represents
the minimum scale to be used for division, must be a number between 1 and 9.
FLAG(I|W|E|S)1 Suppresses diagnostic messages below the specified severity level (Informational,
Warning, Error, and Severe error for severity codes 0, 4, 8, and 12 respectively).
GRAPHIC Indicates the use of X’0E’ and X’0F’ in a string as control characters.
GRAPHIC and NOGRAPHIC are mutually exclusive options. The default (GRAPHIC
or NOGRAPHIC) is chosen under MIXED DATA on Application Programming Defaults
Panel 1 when DB2 is installed.
HOST1(ASM|C[(FOLD)]|CPP[(FOLD)]|IBMCOB|PLI|FORTRAN)
Defines the host language containing the SQL statements.
| Use IBMCOB for Enterprise COBOL for z/OS and OS/390. If you specify COBOL or
| COB2, a warning message is issued and the precompiler uses IBMCOB.
For C, specify:
v C if you do not want DB2 to fold lowercase letters in SBCS SQL ordinary
identifiers to uppercase
v C(FOLD) if you want DB2 to fold lowercase letters in SBCS SQL ordinary
identifiers to uppercase
If you omit the HOST option, the DB2 precompiler issues a level-4 diagnostic
message and uses the default value for this option.
This option also sets the language-dependent defaults; see Table 65 on page 469.
LEVEL[(aaaa)] Defines the level of a module, where aaaa is any alphanumeric value of up to seven
L characters. This option is not recommended for general use, and the DSNH CLIST
and the DB2I panels do not support it. For more information, see “Setting the
program level” on page 479.
For assembler, C, C++, Fortran, and PL/I, you can omit the suboption (aaaa). The
resulting consistency token is blank. For COBOL, you need to specify the suboption.
MARGINS1(m,n[,c]) Specifies what part of each source record contains host language or SQL
MAR statements; and, for assembler, where column continuations begin. The first option
(m) is the beginning column for statements. The second option (n) is the ending
column for statements. The third option (c) specifies for assembler where
continuations begin. Otherwise, the DB2 precompiler places a continuation indicator
in the column immediately following the ending column. Margin values can range
from 1 to 80.
Default values depend on the HOST option you specify; see Table 65 on page 469.
The DSNH CLIST and the DB2I panels do not support this option. In assembler, the
margin option must agree with the ICTL instruction, if presented in the source.
| NEWFUN(YES|NO) Indicates whether to accept the syntax for DB2 Version 8 functions.
| NEWFUN(NO) causes the precompiler to reject any syntax that DB2 Version 8
| introduces. A successful precompilation produces a DBRM that can be bound with
| any release of DB2, including DB2 Version 8.
| During migration of DB2 Version 8 from an earlier release, the default is NO. At the
| end of enabling-new-function mode, the default changes from NO to YES. If Version
| 8 is a new installation of DB2, the default is YES. For information about
| enabling-new-function mode during installation, see the DB2 Installation Guide.
NOFOR In static SQL, eliminates the need for the FOR UPDATE or FOR UPDATE OF clause
in DECLARE CURSOR statements. When you use NOFOR, your program can make
positioned updates to any columns that the program has DB2 authority to update.
When you do not use NOFOR, if you want to make positioned updates to any
columns that the program has DB2 authority to update, you need to specify FOR
UPDATE with no column list in your DECLARE CURSOR statements. The FOR
UPDATE clause with no column list applies to static or dynamic SQL statements.
Whether you use or do not use NOFOR, you can specify FOR UPDATE OF with a
column list to restrict updates to only the columns named in the clause and specify
the acquisition of update locks.
If the resulting DBRM is very large, you might need extra storage when you specify
NOFOR or use the FOR UPDATE clause with no column list.
NOGRAPHIC Indicates the use of X’0E’ and X’0F’ in a string, but not as control characters.
GRAPHIC and NOGRAPHIC are mutually exclusive options. The default (GRAPHIC
or NOGRAPHIC) is chosen under MIXED DATA on Application Programming Defaults
Panel 1 when DB2 is installed.
NOOPTIONS2 Suppresses the DB2 precompiler options listing.
NOOPTN
NOXREF2 Suppresses the DB2 precompiler cross-reference listing. This is the default.
NOX
ONEPASS1 Processes in one pass, to avoid the additional processing time for making two
ON passes. Declarations must appear before SQL references.
Default values depend on the HOST option specified; see Table 65 on page 469.
| PADNTSTR3 Indicates that output host variables that are NUL-terminated strings are padded with
| blanks with the NUL-terminator placed at the end of the string. This is the default.
PERIOD Recognizes the period (.) as the decimal point indicator in decimal or floating point
literals.
COMMA and PERIOD are mutually exclusive options. The default (COMMA or
PERIOD) is chosen under DECIMAL POINT IS on Application Programming Defaults
Panel 1 when DB2 is installed.
QUOTE1 Recognizes the quotation mark (″) as the string delimiter within host language
Q statements. This option applies only to COBOL.
SQL(ALL|DB2) Indicates whether the source contains SQL statements other than those recognized
by DB2 UDB for z/OS.
SQL(DB2), the default, means to interpret SQL statements and check syntax for use
by DB2 UDB for z/OS. SQL(DB2) is recommended when the database server is DB2
UDB for z/OS.
SQLFLAG1(IBM|STD[(ssname[,qualifier])])
Specifies the standard used to check the syntax of SQL statements. When
statements deviate from the standard, the SQL statement processor writes
informational messages (flags) to the output listing. The SQLFLAG option is
independent of other SQL statement processor options, including SQL and STDSQL.
However, if you have a COBOL program and you specify SQLFLAG, then you should
also specify APOSTSQL.
IBM checks SQL statements against the syntax of IBM SQL Version 1. You can also
use SAA® for this option, as in releases before Version 8.
STD checks SQL statements against the syntax of the entry level of the ANSI/ISO
SQL standard of 1992. You can also use 86 for this option, as in releases before
Version 8.
ssname requests semantics checking, using the specified DB2 subsystem name for
catalog access. If you do not specify ssname, the SQL statement processor checks
only the syntax.
qualifier specifies the qualifier used for flagging. If you specify a qualifier, you must
always specify the ssname first. If qualifier is not specified, the default is the
authorization ID of the process that started the SQL statement processor.
TIME(ISO|USA|EUR|JIS|LOCAL) Specifies that time output should always return in a particular format, regardless of
the format that is specified as the location default. For a description of these formats,
see Chapter 2 of DB2 SQL Reference.
| The default format is determined by the installation defaults of the system where the
| program is bound, not by the installation defaults of the system where the program is
| precompiled.
You cannot use the LOCAL option unless you have a time exit routine.
TWOPASS1 Processes in two passes, so that declarations need not precede references. Default
TW values depend on the HOST option specified; see Table 65 on page 469.
VERSION(aaaa|AUTO) Defines the version identifier of a package.
If you do not specify a version at precompile time, then an empty string is the default
version identifier. If you specify AUTO, the SQL statement processor uses the
consistency token to generate the version identifier. If the consistency token is a
timestamp, the timestamp is converted into ISO character format and used as the
version identifier. The timestamp used is based on the System/370™ Store Clock
value. For information about using VERSION, see “Identifying a package version” on
page 479.
XREF1 Includes a sorted cross-reference listing of symbols used in SQL statements in the
listing output.
Notes:
1. This option is ignored when the SQL statement coprocessor precompiles the application.
| 2. This option is always in effect when the SQL statement coprocessor precompiles the application.
3. This option applies only for a C or C++ application.
4. You can use STDSQL(86) as in prior releases of DB2. The SQL statement processor treats it the same as
STDSQL(YES).
5. Precompiler options do not affect ODBC behavior.
Defaults for options of the SQL statement processor: Some SQL statement
processor options have defaults based on values specified on the Application
Programming Defaults panels. Table 64 on page 469 shows those options and
defaults:
Some SQL statement processor options have default values based on the host
language. Some options do not apply to some languages. Table 65 shows the
language-dependent options and defaults.
Table 65. Language-dependent DB2 precompiler options and defaults
HOST value Defaults
ASM APOST1, APOSTSQL1, PERIOD1, TWOPASS, MARGINS(1,71,16)
C or CPP APOST1, APOSTSQL1, PERIOD1, ONEPASS, MARGINS(1,72)
IBMCOB QUOTE2, QUOTESQL2, PERIOD, ONEPASS1, MARGINS(8,72)1
FORTRAN APOST1, APOSTSQL1, PERIOD1, ONEPASS1, MARGINS(1,72)1
PLI APOST1, APOSTSQL1, PERIOD1, ONEPASS, MARGINS(2,72)
Notes:
1. Forced for this language; no alternative allowed.
2. The default is chosen on Application Programming Defaults Panel 1 when DB2 is installed. The IBM-supplied
installation defaults for string delimiters are QUOTE (host language delimiter) and QUOTESQL (SQL escape
character). The installer can replace the IBM-supplied defaults with other defaults. The precompiler options you
specify override any defaults in effect.
If your source program is in COBOL, you must specify a string delimiter that is
the same for the DB2 precompiler, COBOL compiler, and CICS translator. The
defaults for the DB2 precompiler and COBOL compiler are not compatible with
the default for the CICS translator.
If the SQL statements in your source program refer to host variables that are
addressed by a pointer stored in the CICS TWA, you must make the host variables
addressable to the TWA before you execute those statements. For example, a
COBOL application can issue the following statement to establish
addressability to the TWA:
EXEC CICS ADDRESS
TWA (address-of-twa-area)
END-EXEC
You can append JCL from a job created by the DB2 Program Preparation
panels to the CICS translator JCL to prepare an application program. To run
the prepared program under CICS, you might need to define programs and
transactions to CICS. Your system programmer must make the appropriate
CICS resource or table entries. For information on the required resource
entries, see Part 2 of DB2 Installation Guide and CICS Transaction Server for
z/OS Resource Definition Guide.
If you use an SQL statement coprocessor, you process SQL statements as you
compile your program. You must use JCL procedures when you use the SQL
statement coprocessor.
The purpose of the link edit step is to produce an executable load module. To
enable your application to interface with the DB2 subsystem, you must use a
link-edit procedure that builds a load module that satisfies these requirements:
For a program that uses 31-bit addressing, link-edit the program with the
AMODE=31 and RMODE=ANY options.
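As a sketch (the step name and data set names are placeholders, not from this guide), a link-edit step with these attributes might look like the following:

```jcl
//LKED     EXEC PGM=IEWL,PARM='AMODE=31,RMODE=ANY'
//SYSLIB   DD DISP=SHR,DSN=prefix.SDSNLOAD
//SYSLMOD  DD DISP=SHR,DSN=USER.RUNLIB.LOAD(PROGA)
//SYSLIN   DD DSN=&&OBJECT,DISP=(OLD,DELETE)
```

The AMODE=31 and RMODE=ANY values in the PARM field set the addressing and residency attributes of the load module.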
CICS
Include the DB2 CICS language interface module (DSNCLI).
You can link DSNCLI with your program in either 24-bit or 31-bit addressing
mode (AMODE=24 or AMODE=31). If your application program runs in 31-bit addressing
mode, you should link-edit the DSNCLI stub to your application with the
attributes AMODE=31 and RMODE=ANY so that your application can run
above the 16M line. For more information on compiling and link-editing CICS
application programs, see the appropriate CICS manual.
You also need the CICS EXEC interface module appropriate for the
programming language. CICS requires that this module be the first control
section (CSECT) in the final load module.
The size of the executable load module that is produced by the link-edit step varies
depending on the values that the SQL statement processor inserts into the source
code of the program.
For more information about compiling and link-editing, see “Using JCL procedures
to prepare applications” on page 489.
For more information about link-editing attributes, see the appropriate z/OS
manuals. For details on DSNH, see Part 3 of DB2 Command Reference.
Exception
You do not need to bind a DBRM if the only SQL statement in the program is
SET CURRENT PACKAGESET.
Because you do not need a plan or package to execute the SET CURRENT
PACKAGESET statement, the ENCODING bind option does not affect the SET
CURRENT PACKAGESET statement. An application that needs to provide a host
variable value in an encoding scheme other than the system default encoding
scheme must use the DECLARE VARIABLE statement to specify the encoding
scheme of the host variable.
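For example (the host variable name and CCSID are illustrative), a program could declare that a host variable contains UTF-8 data:

```sql
EXEC SQL
DECLARE :MYSTRING VARIABLE CCSID 1208;
```

DB2 then converts the value of :MYSTRING from CCSID 1208 rather than from the system default encoding scheme.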
From a DB2 requester, you can run a plan by naming it in the RUN subcommand,
but you cannot run a package directly. You must include the package in a plan and
then run the plan.
To bind a package at a remote DB2 system, you must have all the privileges or
authority there that you would need to bind the package on your local system. To
bind a package at another type of a system, such as DB2 Server for VSE & VM,
you need any privileges that system requires to execute its SQL statements and
use its data objects.
The bind process for a remote package is the same as for a local package, except
that the local communications database must be able to recognize the location
name you use as resolving to a remote location. To bind the DBRM PROGA at the
location PARIS, in the collection GROUP1, use:
BIND PACKAGE(PARIS.GROUP1)
MEMBER(PROGA)
Then, include the remote package in the package list of a local plan, say PLANB,
by using:
BIND PLAN (PLANB)
PKLIST(PARIS.GROUP1.PROGA)
The ENCODING bind option has the following effect on a remote application:
v If you bind a package locally, which is recommended, and you specify the
ENCODING bind option for the local package, the ENCODING bind option for the
local package applies to the remote application.
v If you do not bind a package locally, and you specify the ENCODING bind option
for the plan, the ENCODING bind option for the plan applies to the remote
application.
v If you do not specify an ENCODING bind option for the package or plan at the
local site, the value of APPLICATION ENCODING that was specified on
installation panel DSNTIPF at the local site applies to the remote application.
If you specify the option EXPLAIN(YES) and you do not specify the option
SQLERROR(CONTINUE), then PLAN_TABLE must exist at the location specified
on the BIND or REBIND subcommand. This location could also be the default
location.
If you bind with the option COPY, the COPY privilege must exist locally. DB2
performs authorization checking, reads and updates the catalog, and creates the
package in the directory at the remote site. DB2 reads the catalog records related
to the copied package at the local site. If the local site is installed with time or date
format LOCAL, and a package is created at a remote site using the COPY option,
the COPY option causes DB2 at the remote site to convert values returned from the
remote site in ISO format, unless an SQL statement specifies a different format.
After you bind a package, you can rebind, free, or bind it with the REPLACE option
using either a local or a remote bind.
Turning an existing plan into packages to run remotely: If you have used DB2
before, you might have an existing application that you want to run at a remote
location, using DRDA access. To do that, you need to rebind the DBRMs in the
current plan as packages at the remote location. You also need a new plan that
includes those remote packages in its package list.
When you now run the existing application at your local DB2, using the new
application plan, these things happen:
v You connect immediately to the remote location named in the
CURRENTSERVER option.
Binding DBRMs directly to a plan: A plan can contain DBRMs bound directly to
it. To bind three DBRMs—PROGA, PROGB, and PROGC—directly to plan PLANW,
use:
BIND PLAN(PLANW)
MEMBER(PROGA,PROGB,PROGC)
You can include as many DBRMs in a plan as you wish. However, if you use a
large number of DBRMs in a plan (more than 500, for example), you could have
trouble maintaining the plan. To ease maintenance, you can bind each DBRM
separately as a package, specifying the same collection for all packages bound,
and then bind a plan specifying that collection in the plan’s package list. If the
design of the application prevents this method, see if your system administrator can
increase the size of the EDM pool to be at least 10 times the size of either the
largest database descriptor (DBD) or the plan, whichever is greater.
To bind DBRMs directly to the plan, and also include packages in the package list,
use both MEMBER and PKLIST. The following example includes:
v The DBRMs PROG1 and PROG2
v All the packages in a collection called TEST2
v The packages PROGA and PROGC in the collection GROUP1
MEMBER(PROG1,PROG2)
PKLIST(TEST2.*,GROUP1.PROGA,GROUP1.PROGC)
You must specify MEMBER, PKLIST, or both options. The plan that results consists
of one of the following:
v Programs associated with DBRMs in the MEMBER list only
v Programs associated with packages and collections identified in PKLIST only
v A combination of the specifications on MEMBER and PKLIST
(Usually, the consistency token is in an internal DB2 format. You can override that
token if you want: see “Setting the program level” on page 479.)
Identifying the location: When your program executes an SQL statement, DB2
uses the value in the CURRENT SERVER special register to determine the location
of the necessary package or DBRM. If the current server is your local DB2
subsystem and it does not have a location name, the value is blank.
You can change the value of CURRENT SERVER by using the SQL CONNECT
statement in your program. If you do not use CONNECT, the value of CURRENT
SERVER is the location name of your local DB2 subsystem (or blank, if your DB2
| has no location name).
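For example, an application could switch the value of CURRENT SERVER to the remote location PARIS (the location name used earlier in this chapter) before executing SQL there:

```sql
EXEC SQL
CONNECT TO PARIS;
```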
| Identifying the collection: You can use the special register CURRENT
| PACKAGE PATH or CURRENT PACKAGESET (if CURRENT PACKAGE PATH is
| not set) to specify the collections that are to be used for package resolution. The
| CURRENT PACKAGESET special register contains the name of a single collection,
| and the CURRENT PACKAGE PATH special register contains a list of collection
| names.
| If you do not set these registers, they are blank when your application begins to run
| and remain blank. In this case, DB2 searches the available collections, using
| methods described in “Specifying the package list for the PKLIST option of BIND
| PLAN.”
| However, explicitly specifying the intended collection by using the special registers
| can avoid a potentially costly search through a package list with many qualifying
| entries. In addition, DB2 uses the values in these special registers for applications
| that do not run under a plan. How DB2 uses these special registers is described in
| “Using the special registers” on page 477.
| When you call a stored procedure, the special register CURRENT PACKAGESET
| contains the value that you specified for the COLLID parameter when you defined
| the stored procedure. When the stored procedure returns control to the calling
| program, DB2 restores this register to the value that it contained before the call.
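As an illustration (the procedure, module, and collection names are hypothetical), an external stored procedure definition might specify the collection like this:

```sql
CREATE PROCEDURE SYSPROC.MYPROC(IN V1 INTEGER)
  LANGUAGE COBOL
  EXTERNAL NAME MYMODULE
  PARAMETER STYLE GENERAL
  COLLID COLL5;
```

While MYPROC runs, CURRENT PACKAGESET contains COLL5; when the procedure returns, DB2 restores the register's previous value.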
Specifying the package list for the PKLIST option of BIND PLAN: The order in
which you specify packages in a package list can affect run-time performance.
Searching for the specific package involves searching the DB2 directory, which can
be costly. When you use collection-id.* with the PKLIST keyword, you should
specify first the collections in which DB2 is most likely to find a package.
Then you execute program PROG1. DB2 does the following package search:
1. Checks to see if program PROG1 is bound as part of the plan
2. Searches for COLL1.PROG1.timestamp
If the order of search is not important: In many cases, DB2’s order of search is
not important to you and does not affect performance. For an application that runs
only at your local DB2, you can name every package differently and include them
all in the same collection. The package list on your BIND PLAN subcommand can
read:
PKLIST (collection.*)
You can add packages to the collection even after binding the plan. DB2 lets you
bind packages having the same package name into the same collection only if their
version IDs are different.
If your application uses DRDA access, you must bind some packages at remote
locations. Use the same collection name at each location, and identify your package
list as:
PKLIST (*.collection.*)
If you use an asterisk for part of a name in a package list, DB2 checks the
authorization for the package to which the name resolves at run time. To avoid the
checking at run time in the preceding example, you can grant EXECUTE authority
| for the entire collection to the owner of the plan before you bind the plan.
| Using the special registers: If you set the special register CURRENT PACKAGE
| PATH or CURRENT PACKAGESET, DB2 skips the check for programs that are part
| of a plan and uses the values in these registers for package resolution.
| If you set CURRENT PACKAGESET and not CURRENT PACKAGE PATH, DB2
| uses the value of CURRENT PACKAGESET as the collection for package
| resolution. For example, if CURRENT PACKAGESET contains COLL5, then DB2
| uses COLL5.PROG1.timestamp for the package search.
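To continue that example, the application could set the collection explicitly:

```sql
EXEC SQL
SET CURRENT PACKAGESET = 'COLL5';
```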
| When CURRENT PACKAGE PATH is set, the server that receives the request
| ignores the collection that is specified by the request and instead uses the value of
| CURRENT PACKAGE PATH at the server to resolve the package. Specifying a
| collection list with the CURRENT PACKAGE PATH special register can avoid the
| need to issue multiple SET CURRENT PACKAGESET statements to switch
| collections for the package search.
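For example (the collection names are illustrative), a single statement can establish the entire collection search list:

```sql
EXEC SQL
SET CURRENT PACKAGE PATH = 'COLL1', 'COLL2', 'COLL5';
```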
| SET CURRENT PACKAGE PATH
| SELECT ... FROM T1 ...
| The collections in PACKAGE PATH determine which package is invoked.
| Notes:
| 1. When CURRENT PACKAGE PATH is set at the requester (and not at the remote server),
| DB2 passes one collection at a time from the list of collections to the remote server until
| a package is found or until the end of the list. Each time a package is not found at the
| server, DB2 returns an error to the requester. The requester then sends the next
| collection in the list to the remote server.
You can do that with many versions of the program, without having to rebind the
application plan. Neither do you have to rename the plan or change any RUN
subcommands that use it.
Table 68 shows the dynamic SQL attribute values for each type of dynamic SQL
behavior.
Table 68. Definitions of dynamic SQL statement behaviors
Authorization ID
Bind behavior: Plan or package owner
Run behavior: Current SQLID
Define behavior: User-defined function or stored procedure owner
Invoke behavior: Authorization ID of invoker1
Default qualifier for unqualified objects
Bind behavior: Bind OWNER or QUALIFIER value
Run behavior: Current SQLID
Define behavior: User-defined function or stored procedure owner
Invoke behavior: Authorization ID of invoker
CURRENT SQLID2
Bind behavior: Not applicable
Run behavior: Applies
Define behavior: Not applicable
Invoke behavior: Not applicable
Source for application programming options
Bind behavior: Determined by DSNHDECP parameter DYNRULS3
Run behavior: Install panel DSNTIP4
Define behavior: Determined by DSNHDECP parameter DYNRULS3
Invoke behavior: Determined by DSNHDECP parameter DYNRULS3
Can execute GRANT, REVOKE, CREATE, ALTER, DROP, RENAME?
Bind behavior: No
Run behavior: Yes
Define behavior: No
Invoke behavior: No
Notes:
1. If the invoker is the primary authorization ID of the process or the CURRENT SQLID value, secondary
authorization IDs will also be checked if they are needed for the required authorization. Otherwise, only one ID, the
ID of the invoker, is checked for the required authorization.
2. DB2 uses the value of CURRENT SQLID as the authorization ID for dynamic SQL statements only for plans and
packages that have run behavior. For the other dynamic SQL behaviors, DB2 uses the authorization ID that is
associated with each dynamic SQL behavior, as shown in this table.
The value to which CURRENT SQLID is initialized is independent of the dynamic SQL behavior. For stand-alone
programs, CURRENT SQLID is initialized to the primary authorization ID. See Table 41 on page 324 and Table 77
on page 586 for information about initialization of CURRENT SQLID for user-defined functions and stored
procedures.
You can execute the SET CURRENT SQLID statement to change the value of CURRENT SQLID for packages
with any dynamic SQL behavior, but DB2 uses the CURRENT SQLID value only for plans and packages with run
behavior.
3. The value of DSNHDECP parameter DYNRULS, which you specify in field USE FOR DYNAMICRULES in
| installation panel DSNTIP4, determines whether DB2 uses the SQL statement processing options or the
application programming defaults for dynamic SQL statements. See “Options for SQL statement processing” on
page 462 for more information.
Determining the optimal authorization cache size: When DB2 determines that
you have the EXECUTE privilege on a plan, package collection, stored procedure,
or user-defined function, DB2 can cache your authorization ID. When you run the
plan, package, stored procedure, or user-defined function, DB2 can check your
authorization more quickly.
Determining the authorization cache size for plans: The CACHESIZE option
(optional) allows you to specify the size of the cache to acquire for the plan. DB2
uses this cache for caching the authorization IDs of those users that are running a
| plan. An authorization ID can take up to 128 bytes of storage. DB2 uses the
CACHESIZE value to determine the amount of storage to acquire for the
authorization cache. DB2 acquires storage from the EDM storage pool. The default
CACHESIZE value is 1024 or the size set at installation time.
The size of the cache you specify depends on the number of individual
authorization IDs actively using the plan. Required overhead takes 32 bytes, and
each authorization ID takes up 8 bytes of storage. The minimum cache size is 256
bytes (enough for 28 entries and overhead information) and the maximum is 4096
bytes (enough for 508 entries and overhead information). You should specify size in
multiples of 256 bytes; otherwise, the specified value rounds up to the next highest
value that is a multiple of 256.
If you run the plan infrequently, or if authority to run the plan is granted to PUBLIC,
you might want to turn off caching for the plan so that DB2 does not use
unnecessary storage. To do this, specify a value of 0 for the CACHESIZE option.
Any plan that you run repeatedly is a good candidate for tuning using the
CACHESIZE option. Also, if you have a plan that a large number of users run
concurrently, you might want to use a larger CACHESIZE.
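As a sketch (the plan and program names are hypothetical), the arithmetic above works out as follows: 32 bytes of overhead plus 8 bytes per ID means that a CACHESIZE of 1024 holds (1024 - 32) / 8 = 124 authorization IDs.

```
BIND PLAN(PLANHOT)  MEMBER(PROGHOT)  CACHESIZE(1024)
BIND PLAN(PLANRARE) MEMBER(PROGRARE) CACHESIZE(0)
```

The first subcommand sizes the cache for a heavily used plan; the second turns caching off for a plan that runs infrequently or whose execute authority is granted to PUBLIC.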
See DB2 Installation Guide for more information about setting the size of the
package authorization cache.
See DB2 Installation Guide for more information about setting the size of the routine
authorization cache.
Specifying the SQL rules: Not only does SQLRULES specify the rules under
which a type 2 CONNECT statement executes, but it also sets the initial value of
the special register CURRENT RULES when the database server is the local DB2.
When the server is not the local DB2, the initial value of CURRENT RULES is DB2.
After binding a plan, you can change the value in CURRENT RULES in an
application program using the statement SET CURRENT RULES.
CURRENT RULES determines the SQL rules, DB2 or SQL standard, that apply to
SQL behavior at run time. For example, the value in CURRENT RULES affects the
behavior of defining check constraints using the statement ALTER TABLE on a
populated table:
v If CURRENT RULES has a value of STD and no existing rows in the table
violate the check constraint, DB2 adds the constraint to the table definition.
Otherwise, an error occurs and DB2 does not add the check constraint to the
table definition.
If the table contains data and is already in a check pending status, the ALTER
TABLE statement fails.
v If CURRENT RULES has a value of DB2, DB2 adds the constraint to the table
definition, defers the enforcing of the check constraints, and places the table
space or partition in check pending status.
You can use the statement SET CURRENT RULES to control the action that the
statement ALTER TABLE takes. Assuming that the value of CURRENT RULES is
initially STD, the following SQL statements change the SQL rules to DB2, add a
check constraint, defer validation of that constraint and place the table in check
pending status, and restore the rules to STD.
EXEC SQL
SET CURRENT RULES = ’DB2’;
EXEC SQL
ALTER TABLE DSN8810.EMP
ADD CONSTRAINT C1 CHECK (BONUS <= 1000.0);
EXEC SQL
SET CURRENT RULES = ’STD’;
See “Using check constraints” on page 243 for information about check constraints.
You can also use CURRENT RULES as the argument of a search-condition, for
example:
SELECT * FROM SAMPTBL WHERE COL1 = CURRENT RULES;
CICS
You can use packages and dynamic plan selection together, but when you
dynamically switch plans, the following conditions must exist:
v All special registers, including CURRENT PACKAGESET, must contain their
initial values.
v The value in the CURRENT DEGREE special register cannot have changed
during the current transaction.
The benefit of using dynamic plan selection and packages together is that you
can convert individual programs in an application containing many programs
and plans, one at a time, to use a combination of plans and packages. This
reduces the number of plans per application, and having fewer plans reduces
the effort needed to maintain the dynamic plan exit.
You could create packages and plans using the following bind statements:
BIND PACKAGE(PKGB) MEMBER(PKGB)
BIND PLAN(MAIN) MEMBER(MAIN,PLANA) PKLIST(*.PKGB.*)
BIND PLAN(PLANC) MEMBER(PLANC)
The following scenario illustrates thread association for a task that runs
program MAIN:
Sequence of SQL Statements
Events
1. EXEC CICS START TRANSID(MAIN)
TRANSID(MAIN) executes program MAIN.
2. EXEC SQL SELECT...
Program MAIN issues an SQL SELECT statement. The default
dynamic plan exit selects plan MAIN.
3. EXEC CICS LINK PROGRAM(PROGA)
You can use the DSN command processor implicitly during program development
for functions such as:
v Using the declarations generator (DCLGEN)
v Running the BIND, REBIND, and FREE subcommands on DB2 plans and
packages for your program
The DSN command processor runs with the TSO terminal monitor program (TMP).
Because the TMP runs in either foreground or background, DSN applications run
interactively or as batch jobs.
The DSN command processor can provide these services to a program that runs
under it:
v Automatic connection to DB2
v Attention key support
v Translation of return codes into error messages
Limitations of the DSN command processor: When using DSN services, your
application runs under the control of DSN. Because TSO executes the ATTACH
macro to start DSN, and DSN executes the ATTACH macro to start a part of itself,
your application gains control two task levels below that of TSO.
If these limitations are too severe, consider having your application use the call
attachment facility or Resource Recovery Services attachment facility. For more
information about these attachment facilities, see Chapter 30, “Programming for the
call attachment facility (CAF),” on page 799 and Chapter 31, “Programming for the
Resource Recovery Services attachment facility (RRSAF),” on page 831.
DSN return code processing: At the end of a DSN session, register 15 contains
the highest value placed there by any DSN subcommand used in the session or by
any program run by the RUN subcommand. Your run-time environment might format
that value as a return code. The value does not, however, originate in DSN.
The following example shows how to start a TSO foreground application. The name
of the application is SAMPPGM, and ssid is the system ID:
TSO Prompt: READY
Enter: DSN SYSTEM(ssid)
DSN Prompt: DSN
Enter: RUN PROGRAM(SAMPPGM) -
PLAN(SAMPLAN) -
LIB(SAMPPROJ.SAMPLIB) -
PARMS(’/D01 D02 D03’)
.
.
.
(Here the program runs and might prompt you for input)
DSN Prompt: DSN
Enter: END
TSO Prompt: READY
This sequence also works in ISPF option 6. You can package this sequence in a
CLIST. DB2 does not support access to multiple DB2 subsystems from a single
address space.
The slash (/) indicates that you are passing parameters. For some languages, you
pass parameters and run-time options in the form PARMS(’parameters/run-time-options’).
In those environments, an example of the PARMS keyword might be:
PARMS (’D01, D02, D03/’)
Check your host language publications for the correct form of the PARMS option.
Figure 148. JCL for running a DB2 application under the TSO terminal monitor program
v The JOB option identifies this as a job card. The USER option specifies the DB2
authorization ID of the user.
v The EXEC statement calls the TSO Terminal Monitor Program (TMP).
v The STEPLIB statement specifies the library in which the DSN Command
Processor load modules and the default application programming defaults
module, DSNHDECP, reside. It can also reference the libraries in which user
applications, exit routines, and the customized DSNHDECP module reside. The
customized DSNHDECP module is created during installation. If you do not
specify a library containing the customized DSNHDECP, DB2 uses the default
DSNHDECP.
v Subsequent DD statements define additional files needed by your program.
v The DSN command connects the application to a particular DB2 subsystem.
v The RUN subcommand specifies the name of the application program to run.
v The PLAN keyword specifies plan name.
v The LIB keyword specifies the library the application should access.
v The PARMS keyword passes parameters to the run-time processor and the
application program.
v END ends the DSN command processor.
Usage notes:
v Keep DSN job steps short.
The following CLIST calls a DB2 application program named MYPROG. The DB2
subsystem name or group attachment name should replace ssid.
IMS
To run a message-driven program
First, be sure you can respond to the program’s interactive requests for data
and that you can recognize the expected results. Then, enter the transaction
code associated with the program. Users of the transaction code must be
authorized to run the program.
First, ensure that the corresponding entries in the SNT and RACF* control
areas allow run authorization for your application. The system administrator is
responsible for these functions; see Part 3 (Volume 1) of DB2 Administration
Guide for more information.
Also, be sure to define to CICS the transaction code assigned to your program
and the program itself.
Issue the NEWCOPY command if CICS has not been reinitialized since the
program was last bound and compiled.
In a batch environment, you might use statements like these to invoke procedure
REXXPROG:
//RUNREXX EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSEXEC DD DISP=SHR,DSN=SYSADM.REXX.EXEC
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%REXXPROG parameters
The SYSEXEC data set contains your REXX application, and the SYSTSIN data set
contains the command that you use to invoke the application.
This section describes how to use JCL procedures to prepare a program. For
information about using the DSNH CLIST, the TSO DSN command processor, or
JCL procedures added to your SYS1.PROCLIB, see Part 3 of DB2 Command
Reference.
Notes:
1. You must customize these programs to invoke the procedures listed in this table. For
information about how to do that, see Part 2 of DB2 Installation Guide.
2. This procedure demonstrates how you can prepare an object-oriented program that
consists of two data sets or members, both of which contain SQL.
If you use the PL/I macro processor, you must not use the PL/I *PROCESS
statement in the source to pass options to the PL/I compiler. You can specify the
needed options on the PARM.PLI= parameter of the EXEC statement in the
DSNHPLI procedure.
member must be DSNELI, except for FORTRAN, in which case member must
be DSNHFT.
CICS
//LKED.SYSIN DD *
INCLUDE SYSLIB(DSNCLI)
/*
For more information on required CICS modules, see “Step 2: Compile (or
assemble) and link-edit the application” on page 471.
To call the precompiler, specify DSNHPC as the entry point name. You can pass
three address options to the precompiler; the following sections describe their
formats. The options are addresses of:
v A precompiler option list
v A list of alternate ddnames for the data sets that the precompiler uses
v A page number to use for the first page of the compiler listing on SYSPRINT.
The precompiler adds 1 to the last page number used in the precompiler listing and
puts this value into the page-number field before returning control to the calling
routine. Thus, if you call the precompiler again, page numbering is continuous.
CICS
Instead of using the DB2 Program Preparation panels to prepare your CICS program, you can tailor
CICS-supplied JCL procedures to do that. To tailor a CICS procedure, you need to add some steps
and change some DD statements. Make changes as needed to do the following:
v Process the program with the DB2 precompiler.
v Bind the application plan. You can do this any time after you precompile the program. You can
bind the program either online by the DB2I panels or as a batch step in this or another z/OS job.
v Include a DD statement in the linkage editor step to access the DB2 load library.
v Be sure the linkage editor control statements contain an INCLUDE statement for the DB2
language interface module.
The following example illustrates the necessary changes. This example assumes the use of a VS
COBOL II or COBOL/370 program. For any other programming language, change the CICS
procedure name and the DB2 precompiler options.
//TESTC01 JOB
//*
//*********************************************************
//* DB2 PRECOMPILE THE COBOL PROGRAM
//*********************************************************
(1) //PC EXEC PGM=DSNHPC,
(1) // PARM=’HOST(COB2),XREF,SOURCE,FLAG(I),APOST’
(1) //STEPLIB DD DISP=SHR,DSN=prefix.SDSNEXIT
(1) // DD DISP=SHR,DSN=prefix.SDSNLOAD
(1) //DBRMLIB DD DISP=OLD,DSN=USER.DBRMLIB.DATA(TESTC01)
(1) //SYSCIN DD DSN=&&DSNHOUT,DISP=(MOD,PASS),UNIT=SYSDA,
(1) // SPACE=(800,(500,500))
(1) //SYSLIB DD DISP=SHR,DSN=USER.SRCLIB.DATA
(1) //SYSPRINT DD SYSOUT=*
(1) //SYSTERM DD SYSOUT=*
(1) //SYSUDUMP DD SYSOUT=*
(1) //SYSUT1 DD SPACE=(800,(500,500),,,ROUND),UNIT=SYSDA
(1) //SYSUT2 DD SPACE=(800,(500,500),,,ROUND),UNIT=SYSDA
(1) //SYSIN DD DISP=SHR,DSN=USER.SRCLIB.DATA(TESTC01)
(1) //*
For more information about the procedure DFHEITVL, other CICS procedures, or CICS requirements
for application programs, see the appropriate CICS manual.
If you are preparing a particularly large or complex application, you can use one of
the last two techniques mentioned previously. For example, if your program requires
four of your own link-edit include libraries, you cannot prepare the program with
DB2I, because DB2I limits the number of include libraries to three, plus language,
IMS or CICS, and DB2 libraries. Therefore, you would need another preparation
method. Programs using the call attachment facility can use either of the last two
techniques mentioned previously. Be careful to use the correct language
interface.
You must precompile the contents of each data set or member separately, but the
prelinker must receive all of the compiler output together.
This section describes the panels that are associated with DB2 Program
Preparation.
Important: If your C++ or IBM COBOL for z/OS program satisfies both of the
following conditions, you must use a JCL procedure to prepare it:
v The program consists of more than one data set or member.
v More than one data set or member contains SQL statements.
See “Using JCL to prepare a program with object-oriented extensions” for more
information.
DB2I help
The online help facility enables you to read information about how to use DB2I in
| an online DB2 book from a DB2I panel. It contains detailed information about the
| fields of each of the DB2 Program Preparation panels.
For instructions on setting up DB2 online help, see the discussion of setting up DB2
online help in Part 2 of DB2 Installation Guide.
If your site makes use of CD-ROM updates, you can make the updated books
accessible from DB2I. Select Option 10 on the DB2I Defaults panel and enter the
new book data set names. You must have write access to prefix.SDSNCLST to
perform this function.
| To access DB2I HELP, press the PF key that is associated with the HELP function.
| The default PF key for HELP is PF 1; however, your location might have assigned a
| different PF key for HELP.
Figure 149. Initiating program preparation through DB2I. Specify Program Preparation on the
DB2I Primary Option Menu.
| The DB2I help system describes each of the fields that are listed on the DB2I
| Primary Options Menu.
| Table 71 describes each of the panels you will need to use to prepare an
| application. The DB2I help contains detailed descriptions of each panel.
| Table 71. DB2I panels used for program preparation
| Panel name Panel description
| DB2 Program Preparation The DB2 Program Preparation panel lets you choose specific
| program preparation functions to perform. For the functions you
| choose, you can also display the associated panels to specify
| options for performing those functions.
| This panel also lets you change the DB2I default values and
| perform other precompile and prelink functions.
| DB2I Defaults Panel 1 DB2I Defaults Panel 1 lets you change many of the system
| defaults that are set at DB2 installation time.
| DB2I Defaults Panel 2 DB2I Defaults Panel 2 lets you change your default job
| statement and set additional COBOL options.
| Precompile The Precompile panel lets you specify values for precompile
| functions.
| You can reach this panel directly from the DB2I Primary Option
| Menu, or from the DB2 Program Preparation panel. If you
| reach this panel from the Program Preparation panel, many of
| the fields contain values from the Primary and Precompile
| panels.
| Bind Plan The Bind Plan panel lets you change options when you bind an
| application plan.
| You can reach this panel directly from the DB2I Primary Option
| Menu, or as a part of the program preparation process. This
| panel also follows the Bind Package panels.
| Defaults for Bind or Rebind Package or Plan panels
|     These panels let you change the defaults for BIND or REBIND PACKAGE or
|     PLAN.
| System Connection Types panel
|     The System Connection Types panel lets you specify a system connection
|     type.
| Program Preparation: Compile, Prelink, Link, and Run panel
|     This panel lets you compile, prelink, and link-edit your program. It also
|     lets you do the PL/I MACRO PHASE for programs that require this option.
|     For TSO programs, the panel also lets you run programs.
CICS
Before you run an application, ensure that the corresponding entries in the
SNT and RACF control areas authorize your application to run. The system
administrator is responsible for these functions; see Part 3 (Volume 1) of DB2
Administration Guide for more information on the functions.
In addition, ensure that the program and its transaction code are defined in the
CICS CSD.
If your location has a separate DB2 system for testing, you can create the test
tables and views on the test system, then test your program thoroughly on that
system. This chapter assumes that you do all testing on a separate system, and
that the person who created the test tables and views has an authorization ID of
TEST. The table names are TEST.EMP, TEST.PROJ, and TEST.DEPT.
2. Determine the test tables and views you need to test your application.
Include a test table on your list when either of the following is true:
v The application modifies data in the table.
v You need to create a view based on a test table because your application
modifies the view’s data.
To continue the example, create these test tables:
v TEST.EMP, with the following format:
DEPTNO MGRNO
. .
. .
. .
Because the application does not change any data in the DSN8810.DEPT table,
you can base the view on the table itself (rather than on a test table). However,
a safer approach is to have a complete set of test tables and to test the
program thoroughly using only test data.
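A view that is based directly on DSN8810.DEPT might be created with a statement like the following sketch (the view name TEST.VDEPT is illustrative, not part of the sample set):

```sql
CREATE VIEW TEST.VDEPT (DEPTNO, MGRNO) AS
  SELECT DEPTNO, MGRNO
    FROM DSN8810.DEPT;
```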
Obtaining authorization
Before you can create a table, you need to be authorized to create tables and to
use the table space in which the table is to reside. You must also have authority to
bind and run programs you want to test. Your DBA can grant you the authorization
needed to create and access tables and to bind and run programs.
To create a view, you must have authorization for each table and view on which you
base the view. You then have the same privileges over the view that you have over
the tables and views on which you based the view. Before trying the examples,
have your DBA grant you the privileges to create new tables and views and to
access existing tables. Obtain the names of tables and views you are authorized to
access (as well as the privileges you have for each table) from your DBA. See
Chapter 2, “Working with tables and modifying data,” on page 19 for more
information about creating tables and views.
For details about each CREATE statement, see DB2 SQL Reference.
SQL statements executed under SPUFI operate on actual tables (in this case, the
tables you have created for testing). Consequently, before you access DB2 data:
v Make sure that all tables and views that your SQL statements refer to exist.
v If the tables or views do not exist, create them (or have your database
administrator create them). You can use SPUFI to issue the CREATE statements
used to create the tables and views you need for testing.
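For example, you might create a test table through SPUFI with a statement like the following sketch (the column names and attributes shown here are illustrative, not a prescribed layout):

```sql
CREATE TABLE TEST.EMP
  (EMPNO    CHAR(6)      NOT NULL,
   LASTNAME VARCHAR(15)  NOT NULL,
   DEPTNO   CHAR(3)              ,
   SALARY   DECIMAL(9,2)         );
```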
For more information about how to use SPUFI, see Chapter 5, “Executing SQL from
your terminal using SPUFI,” on page 59.
When your program encounters an error that does not result in an abend, it can
pass all the required error information to a standard error routine. Online programs
might also send an error message to the terminal.
For more information about the TEST command, see z/OS TSO/E Command
Reference.
ISPF Dialog Test is another option to help you in the task of debugging.
When your program encounters an error, it can pass all the required error
information to a standard error routine. Online programs can also send an error
message to the originating logical terminal.
An interactive program also can send a message to the master terminal operator
giving information about the program’s termination. To do that, the program places
the logical terminal name of the master terminal in an express PCB and issues one
or more ISRT calls.
Some sites run a BMP at the end of the day to list all the errors that occurred
during the day. If your location does this, you can send a message using an
express PCB that has its destination set for that BMP.
Batch Terminal Simulator (BTS): The Batch Terminal Simulator (BTS) allows you
to test IMS application programs. BTS traces application program DL/I calls and
SQL statements, and simulates data communication functions. It can make a TSO
terminal appear as an IMS terminal to the terminal operator, allowing the end user
to interact with the application as though it were online. The user can use any
Using CICS facilities, you can have a printed error record; you can also print the
SQLCA (and SQLDA) contents.
For more details about each of these topics, see CICS Transaction Server for z/OS
Application Programming Reference.
EDF intercepts the running application program at various points and displays
helpful information about the statement type, input and output variables, and any
error conditions after the statement executes. It also displays any screens that the
application program sends, making it possible to converse with the application
program during testing just as a user would on a production system.
EDF displays essential information before and after an SQL statement, while the
task is in EDF mode. This can be a significant aid in debugging CICS transaction
programs containing SQL statements. The SQL information that EDF displays is
helpful for debugging programs and for error analysis after an SQL error or warning.
Using this facility reduces the amount of work you need to do to write special error
handlers.
ENTER: CONTINUE
PF1 : UNDEFINED PF2 : UNDEFINED PF3 : UNDEFINED
PF4 : SUPPRESS DISPLAYS PF5 : WORKING STORAGE PF6 : USER DISPLAY
PF7 : SCROLL BACK PF8 : SCROLL FORWARD PF9 : STOP CONDITIONS
PF10: PREVIOUS DISPLAY PF11: UNDEFINED PF12: ABEND USER TASK
SQL statements containing input host variables: The IVAR (input host variables)
section and its attendant fields only appear when the executing statement contains
input host variables.
The host variables section includes the variables from predicates, the values used
for inserting or updating, and the text of dynamic SQL statements being prepared.
The address of the input variable is AT ’nnnnnnnn’.
EDF after execution: Figure 151 on page 507 shows an example of the first EDF
screen that is displayed after an SQL statement executes. The names of the key
information fields on this panel are in boldface.
ENTER: CONTINUE
PF1 : UNDEFINED PF2 : UNDEFINED PF3 : END EDF SESSION
PF4 : SUPPRESS DISPLAYS PF5 : WORKING STORAGE PF6 : USER DISPLAY
PF7 : SCROLL BACK PF8 : SCROLL FORWARD PF9 : STOP CONDITIONS
PF10: PREVIOUS DISPLAY PF11: UNDEFINED PF12: ABEND USER TASK
Plus signs (+) on the left of the screen indicate that you can see additional EDF
output by using PF keys to scroll the screen forward or back.
The OVAR (output host variables) section and its attendant fields only appear when
the executing statement returns output host variables.
Figure 152 on page 508 contains the rest of the EDF output for our example.
ENTER: CONTINUE
PF1 : UNDEFINED PF2 : UNDEFINED PF3 : END EDF SESSION
PF4 : SUPPRESS DISPLAYS PF5 : WORKING STORAGE PF6 : USER DISPLAY
PF7 : SCROLL BACK PF8 : SCROLL FORWARD PF9 : STOP CONDITIONS
PF10: PREVIOUS DISPLAY PF11: UNDEFINED PF12: ABEND USER TASK
The attachment facility automatically displays SQL information while in EDF
mode. (You can start EDF as outlined in the appropriate CICS application
programmer’s reference manual.) If the SQL information does not appear, contact
your installer and see Part 2 of DB2 Installation Guide.
IMS
– If you are using IMS, have you included the DL/I option statement in the
correct format?
– Have you included the region size parameter in the EXEC statement? Does it
specify a region size large enough for the storage required for the DB2
interface, the TSO, IMS, or CICS system, and your program?
– Have you included the names of all data sets (DB2 and non-DB2) that the
program requires?
v Your program.
You can also use dumps to help localize problems in your program. For example,
one of the more common error situations occurs when your program is running
and you receive a message that it abended. In this instance, your test procedure
might be to capture a TSO dump. To do so, you must allocate a SYSUDUMP or
SYSABEND dump data set before calling DB2. When you press the ENTER key
(after the error message and READY message), the system requests a dump.
You then need to FREE the dump data set.
The SYSTERM output provides a brief summary of the results from the precompiler,
all error messages that the precompiler generated, and the statement in error, when
possible. Sometimes, the error messages by themselves are not enough. In such
cases, you can use the line number provided in each error message to locate the
failing source statement.
DSNH104I E DSNHPARS LINE 32 COL 26 ILLEGAL SYMBOL "X" VALID SYMBOLS ARE:, FROM
SELECT VALUE INTO HIPPO X;
When you use the Program Preparation panels to prepare and run your program,
DB2 allocates SYSPRINT according to the TERM option that you specify (on line 12 of the
PROGRAM PREPARATION: COMPILE, PRELINK, LINK, AND RUN panel). As an
alternative, when you use the DSNH command procedure (CLIST), you can specify
PRINT(TERM) to obtain SYSPRINT output at your terminal, or you can specify
PRINT(qualifier) to place the SYSPRINT output into a data set named
authorizationid.qualifier.PCLIST. Assuming that you do not specify PRINT as
LEAVE, NONE, or TERM, DB2 issues a message when the precompiler finishes,
telling you where to find your precompiler listings. This helps you locate your
diagnostics quickly and easily.
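For example, a DSNH invocation such as the following sketch (the data set and qualifier names are hypothetical) places the precompiler listing in authorizationid.MYLIST.PCLIST:

```
DSNH INPUT('USER.SRCLIB.DATA(TESTC01)') PRINT(MYLIST)
```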
The SYSPRINT output can provide information about your precompiled source
module if you specify the options SOURCE and XREF when you start the DB2
precompiler.
...
SOURCE STATISTICS
SOURCE LINES READ: 15231
NUMBER OF SYMBOLS: 1282
SYMBOL TABLE BYTES EXCLUDING ATTRIBUTES: 64323
The restart capabilities for DB2 and IMS databases, as well as for sequential data
sets accessed through GSAM, are available through the IMS Checkpoint and
Restart facility.
DB2 allows access to both DB2 and DL/I data through the use of the following DB2
and IMS facilities:
v IMS synchronization calls, which commit and abend units of recovery
v The DB2 IMS attachment facility, which handles the two-phase commit protocol
and allows both systems to synchronize a unit of recovery during a restart after a
failure
v The IMS log, used to record the instant of commit.
In a data sharing environment, DL/I batch supports group attachment. You can
specify a group attachment name instead of a subsystem name in the SSN
Authorization
When the batch application tries to run the first SQL statement, DB2 checks
whether the authorization ID has the EXECUTE privilege for the plan. DB2 uses the
same ID for later authorization checks, and the same ID identifies records from the
accounting and performance traces.
The primary authorization ID is the value of the USER parameter on the job
statement, if that is available. It is the TSO logon name if the job is submitted.
Otherwise, it is the IMS PSB name. In that case, however, the ID must not begin
with the string “SYSADM” because this string causes the job to abend. The batch
job is rejected if you try to change the authorization ID in an exit routine.
Address spaces
A DL/I batch region is independent of both the IMS control region and the CICS
address space. The DL/I batch region loads the DL/I code into the application
region along with the application program.
Commits
Commit IMS batch applications frequently so that you do not use resources for an
extended time. If you need coordinated commits for recovery, see Part 4 (Volume 1)
of DB2 Administration Guide.
Checkpoint calls
Write your program with SQL statements and DL/I calls, and use checkpoint calls.
All checkpoints issued by a batch application program must be unique. The
frequency of checkpoints depends on the application design. At a checkpoint, DL/I
positioning is lost, DB2 cursors are closed (with the possible exception of cursors
defined as WITH HOLD), commit duration locks are freed (again with some
exceptions), and database changes are considered permanent to both IMS and
DB2.
You can also have IMS dynamically back out the updates within the same job. You
must specify the BKO parameter as ’Y’ and allocate the IMS log to DASD.
You could have a problem if the system fails after the program terminates, but
before the job step ends. If you do not have a checkpoint call before the program
ends, DB2 commits the unit of work without involving IMS. If the system fails before
DL/I commits the data, then the DB2 data is out of synchronization with the DL/I
changes. If the system fails during DB2 commit processing, the DB2 data could be
indoubt.
If you do not use an XRST call, then DB2 assumes that any checkpoint call issued
is a basic checkpoint.
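In COBOL, the restart and checkpoint calls might be issued as in the following abbreviated sketch. The parameter lists are shortened for illustration (the full symbolic checkpoint interface takes additional area-length and area pairs), and all data names here are hypothetical:

```cobol
    MOVE 'XRST' TO DLI-FUNC.
    CALL 'CBLTDLI' USING DLI-FUNC, IO-PCB, AREA-LEN, WORK-AREA.
*   (SQL statements and DL/I calls for one unit of work)
    MOVE 'CHKP' TO DLI-FUNC.
    CALL 'CBLTDLI' USING DLI-FUNC, IO-PCB, AREA-LEN, WORK-AREA.
```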
You can specify values for the following parameters using a DDITV02 data set or a
subsystem member:
SSN,LIT,ESMT,RTT,REO,CRC
You can specify values for the following parameters only in a DDITV02 data set:
CONNECTION_NAME,PLAN,PROG
If you use the DDITV02 data set and specify a subsystem member, the values in
the DDITV02 DD statement override the values in the specified subsystem member.
If you provide neither, DB2 abends the application program with system abend code
X'04E' and a unique reason code in register 15.
DDITV02 is the DD name for a data set that has DCB options of LRECL=80 and
RECFM=F or FB.
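Taken together, the parameters form a single positional record in the DDITV02 data set. The following sketch is purely illustrative; every value shown is hypothetical and installation-dependent:

```
DSN,SYS1,DSNMIN10,,R,-,TESTCONN,TESTPLAN,TESTPROG
```

Read positionally, that record would supply the SSN, LIT, ESMT, an omitted RTT, the REO and CRC values, and then the connection name, plan, and program.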
A subsystem member is a member in the IMS procedure library. Its name is derived
by concatenating the value of the SSM parameter to the value of the IMSID
parameter. You specify the SSM parameter and the IMSID parameter when you
invoke the DLIBATCH procedure, which starts the DL/I batch processing
environment.
If the application program uses the XRST call, and if coordinated recovery
is required on the XRST call, then REO is ignored. In that case, the
application program terminates abnormally if DB2 is not operational.
If a batch update job fails, you must use a separate job to restart the batch
job. The connection name used in the restart job must be the same as the
name used in the batch job that failed. Alternatively, if the default
connection name is used, the restart job must have the same job name as
the batch update job that failed.
You might want to save and print the data set, as the information is useful for
diagnostic purposes. You can use the IMS module, DFSERA10, to print the
variable-length data set records in both hexadecimal and character format.
Precompiling
When you add SQL statements to an application program, you must precompile the
application program and bind the resulting DBRM into a plan or package, as
described in Chapter 21, “Preparing an application program to run,” on page 453.
Binding
The owner of the plan or package must have all the privileges required to execute
the SQL statements embedded in it. Before a batch program can issue SQL
statements, a DB2 plan must exist.
You can specify the plan name to DB2 in one of the following ways:
v In the DDITV02 input data set.
v In subsystem member specification.
v By default; the plan name is then the application load module name specified in
DDITV02.
The plan name is passed to DB2 through the IMS attachment facility. If you do not
specify a plan name in DDITV02, and a resource translation table (RTT) does not
exist or the name is not in the RTT, DB2 uses the passed name as the plan name.
If the name exists in the RTT, the name translates to the plan that is specified in
the RTT.
Link-editing
DB2 has a language interface routine for each unique supported environment. DB2
requires the IMS language interface routine for DL/I batch. You must also
link-edit DFSLI000 with the application program.
You cannot restart a BMP application program in a DB2 DL/I batch environment.
The symbolic checkpoint records are not accessed, causing an IMS user abend
U0102.
To restart a batch job that terminated abnormally or prematurely, find the checkpoint
ID for the job on the z/OS system log or from the SYSOUT listing of the failing job.
Before you restart the job step, place the checkpoint ID in the CKPTID=value option
of the DLIBATCH procedure, then submit the job. If the default connection name is
used (that is, you did not specify the connection name option in the DDITV02 input
data set), the job name of the restart job must be the same as the failing job. Refer
to the following skeleton example, in which the last checkpoint ID value was
IVP80002:
//ISOCS04 JOB 3000,OJALA,MSGLEVEL=(1,1),NOTIFY=OJALA,
// MSGCLASS=T,CLASS=A
//* ******************************************************************
//*
//* THE FOLLOWING STEP RESTARTS COBOL PROGRAM IVP8CP22, WHICH UPDATES
//* BOTH DB2 AND DL/I DATABASES, FROM CKPTID=IVP80002.
//*
//* ******************************************************************
//RSTRT EXEC DLIBATCH,DBRC=Y,COND=EVEN,LOGT=SYSDA,
// MBR=DSNMTV01,PSB=IVP8CA,BKO=Y,IRLM=N,CKPTID=IVP80002
//G.STEPLIB DD
// DD
// DD DSN=prefix.SDSNLOAD,DISP=SHR
// DD DSN=prefix.RUNLIB.LOAD,DISP=SHR
// DD DSN=SYS1.COB2LIB,DISP=SHR
// DD DSN=IMS.PGMLIB,DISP=SHR
//* other program libraries
//* G.IEFRDER data set required
//G.STEPCAT DD DSN=IMSCAT,DISP=SHR
//* G.IMSLOGR data set required
//G.DDOTV02 DD DSN=&TEMP2,DISP=(NEW,PASS,DELETE),
// SPACE=(TRK,(1,1),RLSE),UNIT=SYSDA,
// DCB=(RECFM=VB,BLKSIZE=4096,LRECL=4092)
//G.DDITV02 DD *
During the commit process the application program checkpoint ID is passed to DB2.
If a failure occurs during the commit process, creating an indoubt work unit, DB2
remembers the checkpoint ID. You can use the following techniques to find the last
checkpoint ID:
v Look at the SYSOUT listing for the job step to find message DFS0540I, which
contains the checkpoint IDs issued. Use the last checkpoint ID listed.
v Look at the z/OS console log to find DFS0540I messages that contain the
checkpoint IDs issued for this batch program. Use the last checkpoint ID listed.
v Submit the IMS Batch Backout utility to back out the DL/I databases to the last
(default) checkpoint ID. When the batch backout finishes, message DFS395I
provides the last valid IMS checkpoint ID. Use this checkpoint ID on restart.
v When restarting DB2, the operator can issue the command -DISPLAY
THREAD(*) TYPE(INDOUBT) to obtain a possible indoubt unit of work
(connection name and checkpoint ID). Restarting the application program from
this checkpoint ID might work, because the checkpoint is recorded on the IMS
log; however, it could fail with an IMS user abend U0102 if IMS did not finish
logging the information before the failure. In that case, restart the application
program from the previous checkpoint ID.
If the failure occurs outside the indoubt period, DB2 automatically performs one of
two actions when it is restarted: it either backs out the unit of work to the prior
checkpoint, or it commits the data without any operator assistance. If the operator
then issues the following command, no unit-of-work information is displayed:
-DISPLAY THREAD(*) TYPE(INDOUBT)
For most DB2 users, static SQL, which is embedded in a host language program
and bound before the program runs, provides a straightforward, efficient path to
DB2 data. You can use static SQL when you know before run time what SQL
statements your application needs to execute.
Dynamic SQL prepares and executes the SQL statements within a program, while
the program is running. Four types of dynamic SQL are:
v Interactive SQL
A user enters SQL statements through SPUFI. DB2 prepares and executes those
statements as dynamic SQL statements.
v Embedded dynamic SQL
Your application puts the SQL source in host variables and includes PREPARE
and EXECUTE statements that tell DB2 to prepare and run the contents of those
host variables at run time. You must precompile and bind programs that include
embedded dynamic SQL.
v Deferred embedded SQL
Deferred embedded SQL statements are neither fully static nor fully dynamic.
Like static statements, deferred embedded SQL statements are embedded within
applications, but like dynamic statements, they are prepared at run time. DB2
processes deferred embedded SQL statements with bind-time rules. For
example, DB2 uses the authorization ID and qualifier determined at bind time as
the plan or package owner. Deferred embedded SQL statements are used for
DB2 private protocol access to remote data.
v Dynamic SQL executed through ODBC functions
Your application contains ODBC function calls that pass dynamic SQL statements
as arguments. You do not need to precompile and bind programs that use ODBC
function calls. See DB2 ODBC Guide and Reference for information about
ODBC.
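As a minimal sketch of embedded dynamic SQL in COBOL (the statement name, host variables, and statement text are all illustrative), the program places the SQL source in a host variable and then prepares and executes it:

```cobol
*   STMT-BUF is a VARCHAR host variable that holds the statement
*   text; EMPID supplies the parameter marker value at run time.
    MOVE 'DELETE FROM DSN8810.EMP WHERE EMPNO = ?'
        TO STMT-BUF-TEXT.
    EXEC SQL PREPARE DYNSTMT FROM :STMT-BUF END-EXEC.
    EXEC SQL EXECUTE DYNSTMT USING :EMPID END-EXEC.
```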
“Choosing between static and dynamic SQL” on page 536 suggests some reasons
for choosing either static or dynamic SQL.
The rest of this chapter shows you how to code dynamic SQL in applications that
contain three types of SQL statements:
v “Dynamic SQL for non-SELECT statements” on page 545. Those statements
include DELETE, INSERT, and UPDATE.
v “Dynamic SQL for fixed-list SELECT statements” on page 552. A SELECT
statement is fixed-list if you know in advance the number and type of data items
in each row of the result.
v “Dynamic SQL for varying-list SELECT statements” on page 554. A SELECT
statement is varying-list if you cannot know in advance how many data items to
allow for or what their data types are.
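The fixed-list case, for example, follows a prepare-open-fetch-close pattern that can be sketched as follows (the cursor, statement, and host variable names are illustrative):

```cobol
    EXEC SQL DECLARE C1 CURSOR FOR DYNSTMT END-EXEC.
    MOVE 'SELECT EMPNO, LASTNAME FROM DSN8810.EMP WHERE DEPTNO = ?'
        TO STMT-BUF-TEXT.
    EXEC SQL PREPARE DYNSTMT FROM :STMT-BUF END-EXEC.
    EXEC SQL OPEN C1 USING :DEPTID END-EXEC.
*   Fetch rows until SQLCODE is nonzero (+100 indicates end of data).
    PERFORM UNTIL SQLCODE NOT = 0
        EXEC SQL FETCH C1 INTO :EMPNO, :EMPNAME END-EXEC
    END-PERFORM.
    EXEC SQL CLOSE C1 END-EXEC.
```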
In the example below, the UPDATE statement can update the salary of any
employee. At bind time, you know that salaries must be updated, but you do not
know until run time whose salaries should be updated, and by how much.
01 IOAREA.
   02 EMPID       PIC X(06).
   02 NEW-SALARY  PIC S9(7)V9(2) COMP-3.
   .
   .
   .
(Other declarations)
READ CARDIN RECORD INTO IOAREA
    AT END MOVE ’N’ TO INPUT-SWITCH.
   .
   .
   .
(Other COBOL statements)
EXEC SQL
    UPDATE DSN8810.EMP
    SET SALARY = :NEW-SALARY
    WHERE EMPNO = :EMPID
END-EXEC.
The statement (UPDATE) does not change, nor does its basic structure, but the
input can change the results of the UPDATE statement.
One example of such a program is the DB2 Query Management Facility (DB2
QMF), which provides an alternative interface to DB2 that accepts almost any SQL
statement. SPUFI is another example; it accepts SQL statements from an input data
set, and then processes and executes them dynamically.
The time at which DB2 determines the access path depends on these factors:
v Whether the statement is executed statically or dynamically
v Whether the statement contains input host variables
| If you specify REOPT(NONE), DB2 determines the access path at bind time, just as
it does when there are no input variables.
| DB2 ignores REOPT(ONCE) for static SQL statements, because DB2 can cache
| only dynamic SQL statements.
| If you specify REOPT(ALWAYS), DB2 determines the access path at bind time and
again at run time, using the values in these types of input variables:
v Host variables
v Parameter markers
v Special registers
This means that DB2 must spend extra time determining the access path for
statements at run time, but if DB2 determines a significantly better access path
using those values, the improvement can outweigh the extra cost.
| Dynamic SQL statements with input host variables: When you bind applications
| that contain dynamic SQL statements with input host variables, use either the
| REOPT(ALWAYS) or REOPT(ONCE) option.
| Use REOPT(ALWAYS) when you are not using the dynamic statement cache. DB2
| determines the access path for statements at each EXECUTE or OPEN of the
| statement. This ensures the best access path for a statement, but using
| REOPT(ALWAYS) can increase the cost of frequently used dynamic SQL
| statements.
| Use REOPT(ONCE) when you are using the dynamic statement cache. DB2
| determines the access path for statements only at the first EXECUTE or OPEN
| of the statement. It saves that access path in the dynamic statement cache and
| uses it until the statement is invalidated or removed from the cache. This reuse of
| the access path reduces the cost of frequently used dynamic SQL statements that
| contain input host variables.
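As a sketch, a bind that pairs the dynamic statement cache with one-time reoptimization might look like this (the plan and member names are hypothetical):

```
BIND PLAN(TESTPLAN) MEMBER(TESTC01) ACTION(REPLACE) REOPT(ONCE)
```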
| You should code your PREPARE statements to minimize overhead. With both
| REOPT(ALWAYS) and REOPT(ONCE), DB2 prepares an SQL statement at the
| same time as it processes OPEN or EXECUTE for the statement. That is, DB2
| processes the statement as if you specify DEFER(PREPARE). However, in the
| following cases, DB2 prepares the statement twice:
| v If you execute the DESCRIBE statement before the PREPARE statement in your
| program
| v If you use the PREPARE statement with the INTO parameter
| For the first prepare, DB2 determines the access path without using input variable
| values. For the second prepare, DB2 uses the input variable values to determine
| the access path. This extra prepare can decrease performance.
| If you specify REOPT(ALWAYS), DB2 prepares the statement twice each time it is
| run.
| If you specify REOPT(ONCE), DB2 prepares the statement twice only when the
| statement has never been saved in the cache. If the statement has been prepared
| and saved in the cache, DB2 will use the saved version of the statement to
| complete the DESCRIBE statement.
For a statement that uses a cursor, you can avoid the double prepare by placing
the DESCRIBE statement after the OPEN statement in your program.
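That ordering can be sketched as follows (the statement and cursor names are illustrative):

```cobol
    EXEC SQL PREPARE DYNSTMT FROM :STMT-BUF END-EXEC.
    EXEC SQL OPEN C1 END-EXEC.
*   Issuing DESCRIBE after OPEN avoids the second prepare:
    EXEC SQL DESCRIBE DYNSTMT INTO :SQLDA END-EXEC.
```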
| DB2 can save prepared dynamic statements in a cache. The cache is a dynamic
| statement cache pool that all application processes can use to save and retrieve
prepared dynamic statements. After an SQL statement has been prepared and is
automatically saved in the cache, subsequent prepare requests for that same SQL
statement can avoid the costly preparation process by using the statement that is in
the cache. Statements that are saved in the cache can be shared among different
threads, plans, or packages.
Eligible statements: The following SQL statements can be saved in the cache:
SELECT
UPDATE
INSERT
DELETE
Distributed and local SQL statements are eligible to be saved. Prepared, dynamic
statements that use DB2 private protocol access are also eligible to be saved.
Restrictions: Even though static statements that use DB2 private protocol access
are dynamic at the remote site, those statements cannot be saved in the cache.
| Statements in plans or packages that are bound with REOPT(ALWAYS) cannot be
| saved in the cache. Statements in plans and packages that are bound with
| REOPT(ONCE) can be saved in the cache. See “How bind options
The following conditions must be met before DB2 can use statement P1 instead of
preparing statement S2:
v S1 and S2 must be identical. The statements must pass a character by character
comparison and must be the same length. If the PREPARE statement for either
statement contains an ATTRIBUTES clause, DB2 concatenates the values in the
ATTRIBUTES clause to the statement string before comparing the strings. That
is, if A1 is the set of attributes for S1 and A2 is the set of attributes for S2, DB2
compares S1||A1 to S2||A2.
If the statement strings are not identical, DB2 cannot use the statement in the
cache.
For example, assume that S1 and S2 are specified as follows, identical except for
blanks:
’UPDATE EMP SET SALARY=SALARY+50’
’UPDATE EMP SET SALARY = SALARY + 50’
Because the statement strings are not identical, DB2 cannot use P1 for S2. DB2
prepares S2 and saves the prepared version of S2 in the cache.
v The authorization ID that was used to prepare S1 must be used to prepare S2:
– When a plan or package has run behavior, the authorization ID is the current
SQLID value.
For secondary authorization IDs:
- The application process that searches the cache must have the same
secondary authorization ID list as the process that inserted the entry into
the cache or must have a superset of that list.
- If the process that originally prepared the statement and inserted it into the
cache used one of the privileges held by the primary authorization ID to
accomplish the prepare, that ID must either be part of the secondary
authorization ID list of the process searching the cache, or it must be the
primary authorization ID of that process.
– When a plan or package has bind behavior, the authorization ID is the plan
owner’s ID. For a DDF server thread, the authorization ID is the package
owner’s ID.
– When a package has define behavior, then the authorization ID is the
user-defined function or stored procedure owner.
– When a package has invoke behavior, then the authorization ID is the
authorization ID under which the statement that invoked the user-defined
function or stored procedure executed.
For an explanation of bind, run, define, and invoke behavior, see “Using
DYNAMICRULES to specify behavior of dynamic SQL statements” on page 479.
v When the plan or package that contains S2 is bound, the values of these bind
options must be the same as when the plan or package that contains S1 was
bound:
CURRENTDATA
DYNAMICRULES
ISOLATION
SQLRULES
QUALIFIER
v When S2 is prepared, the values of the following special registers must be the
same as when S1 was prepared:
CURRENT DEGREE
CURRENT RULES
CURRENT PRECISION
| CURRENT REFRESH AGE
| CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION
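The matching rule described above, in which DB2 compares S1||A1 to S2||A2, can be sketched in C. Here match_cached is an illustrative helper with assumed buffer sizes, not a DB2 interface:

```c
#include <stdio.h>
#include <string.h>

/* Sketch of the cache-matching rule: DB2 compares the statement string
   concatenated with its attribute string (S1||A1) to S2||A2. */
static int match_cached(const char *s1, const char *a1,
                        const char *s2, const char *a2)
{
    char k1[256], k2[256];
    snprintf(k1, sizeof k1, "%s%s", s1, a1 ? a1 : "");
    snprintf(k2, sizeof k2, "%s%s", s2, a2 ? a2 : "");
    return strcmp(k1, k2) == 0;   /* 1 = cached statement is reusable */
}
```

With identical statement strings but differing attributes, the concatenated keys differ, so the cached prepared statement is not reused.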
Figure 158. Writing dynamic SQL to use the bind option KEEPDYNAMIC(YES)
To understand how the KEEPDYNAMIC bind option works, you need to differentiate
between the executable form of a dynamic SQL statement, which is the prepared
statement, and the character string form of the statement, which is the statement
string.
Figure 159. Using KEEPDYNAMIC(YES) when the dynamic statement cache is not active
When the dynamic statement cache is active, and you run an application bound
with KEEPDYNAMIC(YES), DB2 retains a copy of both the prepared statement and
the statement string. The prepared statement is cached locally for the application
process. In general, the statement is globally cached in the EDM pool, to benefit
other application processes. If the application issues an OPEN, EXECUTE, or
DESCRIBE after a commit operation, the application process uses its local copy of
the prepared statement to avoid a prepare and a search of the cache. Figure 160
illustrates this process.
Figure 160. Using KEEPDYNAMIC(YES) when the dynamic statement cache is active
The local instance of the prepared SQL statement is kept in ssnmDBM1 storage
until one of the following occurs:
v The application process ends.
v A rollback operation occurs.
v The application issues an explicit PREPARE statement with the same statement
name.
If the application does issue a PREPARE for the same SQL statement name that
has a kept dynamic statement associated with it, the kept statement is discarded
and DB2 prepares the new statement.
v The statement is removed from memory because the statement has not been
used recently, and the number of kept dynamic SQL statements reaches the
subsystem default as set during installation.
The KEEPDYNAMIC option has performance implications for DRDA clients that
specify WITH HOLD on their cursors:
v If KEEPDYNAMIC(NO) is specified, a separate network message is required
when the DRDA client issues the SQL CLOSE for the cursor.
v If KEEPDYNAMIC(YES) is specified, the DB2 UDB for z/OS server automatically
closes the cursor when SQLCODE +100 is detected, which means that the client
does not have to send a separate message to close the held cursor. This
reduces network traffic for DRDA applications that use held cursors. It also
reduces the duration of locks that are associated with the held cursor.
Considerations for data sharing: If one member of a data sharing group has
enabled the cache but another has not, and an application is bound with
KEEPDYNAMIC(YES), DB2 must implicitly prepare the statement again if the
statement is assigned to a member without the cache. This can mean a slight
reduction in performance.
The governor controls only the dynamic SQL manipulative statements SELECT,
UPDATE, DELETE, and INSERT. Each dynamic SQL statement used in a program
is subject to the same limits. The limit can be a reactive governing limit or a
predictive governing limit. If the statement exceeds a reactive governing limit, the
statement receives an error SQL code. If the statement exceeds a predictive
governing limit, it receives a warning or error SQL code. “Writing an application to
handle predictive governing” on page 544 explains more about predictive governing
SQL codes.
Your system administrator can establish the limits for individual plans or packages,
for individual users, or for all users who do not have personal limits.
Follow the procedures defined by your location for adding, dropping, or modifying
entries in the resource limit specification table. For more information about the
resource limit specification tables, see Part 5 (Volume 2) of DB2 Administration
Guide.
If the failed statement involves an SQL cursor, the cursor’s position remains
unchanged. The application can then close that cursor; any other operation on
the cursor fails with the same SQL error code.
If the failed SQL statement does not involve a cursor, then all changes that the
statement made are undone before the error code returns to the application. The
application can either issue another SQL statement or commit all work done so far.
For information about setting up the resource limit facility for predictive governing,
see Part 5 (Volume 2) of DB2 Administration Guide.
Normally with deferred prepare, the PREPARE, OPEN, and first FETCH of the data
are returned to the requester. For a predictive governor warning of +495, you would
ideally like to have the option to choose beforehand whether you want the OPEN
and FETCH of the data to occur. For downlevel requesters, you do not have this
option.
If your application does defer prepare processing, the application receives the +495
at its usual time (OPEN or PREPARE). If you have parameter markers with
deferred prepare, you receive the +495 at OPEN time as you normally do.
However, an additional message is exchanged.
All SQL in REXX programs is dynamic SQL. For information about how to write
SQL REXX applications, see “Coding SQL statements in a REXX application” on
page 232.
Most of the examples in this section are in PL/I. “Using dynamic SQL in COBOL” on
page 568 shows techniques for using COBOL. Longer examples in the form of
complete programs are available in the sample applications:
DSNTEP2
Processes both SELECT and non-SELECT statements dynamically. (PL/I).
DSNTIAD
Processes only non-SELECT statements dynamically. (Assembler).
DSNTIAUL
Processes SELECT statements dynamically. (Assembler).
Library prefix.SDSNSAMP contains the sample programs. You can view the
programs online, or you can print them using ISPF, IEBPTPCH, or your own printing
program.
Recall that you must prepare (precompile and bind) static SQL statements before
you can use them. You cannot prepare dynamic SQL statements in advance. The
SQL statement EXECUTE IMMEDIATE causes an SQL statement to be prepared and
executed, dynamically, at run time.
For more information about declaring character-string host variables, see Chapter 9,
“Embedding SQL statements in host languages,” on page 129.
| The precompiler generates a structure that contains two elements, a 4-byte length
| field and a data field of the specified length. The names of these fields vary
| depending on the host language:
| v In PL/I, assembler, and Fortran, the names are variable_LENGTH and
| variable_DATA.
| v In COBOL, the names are variable-LENGTH and variable-DATA.
| v In C, the names are variable.LENGTH and variable.DATA.
| Example: Using a CLOB host variable: This excerpt is from a C program that
| copies an UPDATE statement into the host variable string1 and executes the
| statement:
| EXEC SQL BEGIN DECLARE SECTION;
| ...
| SQL TYPE IS CLOB(4k) string1;
| EXEC SQL END DECLARE SECTION;
| ...
| /* Copy a statement into the host variable string1. */
| strcpy(string1.data, "UPDATE DSN8810.EMP SET SALARY = SALARY * 1.1");
| string1.length = 44;
| EXEC SQL EXECUTE IMMEDIATE :string1;
| ...
If you know in advance that you will use only the DELETE statement and only the
table DSN8810.EMP, you can use the more efficient static SQL. Suppose further
that several different tables have rows that are identified by employee numbers, and
that users enter a table name as well as a list of employee numbers to delete.
Although variables can represent the employee numbers, they cannot represent the
table name, so you must construct and execute the entire statement dynamically.
Your program must now do these things differently:
v Use parameter markers instead of host variables
v Use the PREPARE statement
v Use EXECUTE instead of EXECUTE IMMEDIATE
You can indicate to DB2 that a parameter marker represents a host variable of a
certain data type by specifying the parameter marker as the argument of a CAST
function. When the statement executes, DB2 converts the host variable to the data
type in the CAST function. A parameter marker that you include in a CAST function
is called a typed parameter marker. A parameter marker without a CAST function is
called an untyped parameter marker.
Example using parameter markers: Suppose that you want to prepare this
statement:
DELETE FROM DSN8810.EMP WHERE EMPNO = ?;
You associate host variable :EMP with the parameter marker when you execute the
prepared statement. Suppose that S1 is the prepared statement. Then the
EXECUTE statement looks like this:
EXECUTE S1 USING :EMP;
Example using the PREPARE statement: Assume that the character host variable
:DSTRING has the value “DELETE FROM DSN8810.EMP WHERE EMPNO = ?”.
To prepare an SQL statement from that string and assign it the name S1, write:
EXEC SQL PREPARE S1 FROM :DSTRING;
The prepared statement still contains a parameter marker, for which you must
supply a value when the statement executes. After the statement is prepared, the
table name is fixed, but the parameter marker allows you to execute the same
statement many times with different values of the employee number.
You can now write an equivalent example for a dynamic SQL statement:
< Read a statement containing parameter markers into DSTRING.>
EXEC SQL PREPARE S1 FROM :DSTRING;
< Read a value for EMP from the list. >
DO UNTIL (EMP = 0);
EXEC SQL EXECUTE S1 USING :EMP;
< Read a value for EMP from the list. >
END;
The PREPARE statement prepares the SQL statement and calls it S1. The
EXECUTE statement executes S1 repeatedly, using different values for EMP.
| You might be entering the rows of data into different tables or entering different
| numbers of rows, so you want to construct the INSERT statement dynamically. This
| section describes the following methods to execute a multiple-row INSERT
| statement dynamically:
| v By using host variable arrays that contain the data to be inserted
| You must specify the FOR n ROWS clause on the EXECUTE statement.
| Preparing and executing the statement: The code to prepare and execute the
| INSERT statement looks like this:
| /* Copy the INSERT string into the host variable sqlstmt */
| strcpy(sqlstmt, "INSERT INTO DSN8810.ACT VALUES (CAST(? AS SMALLINT),");
| strcat(sqlstmt, " CAST(? AS CHAR(6)), CAST(? AS VARCHAR(20)))");
|
| /* Copy the INSERT attributes into the host variable attrvar */
| strcpy(attrvar, "FOR MULTIPLE ROWS");
|
| /* Prepare and execute my_insert using the host variable arrays */
| EXEC SQL PREPARE my_insert ATTRIBUTES :attrvar FROM :sqlstmt;
| EXEC SQL EXECUTE my_insert USING :hva1, :hva2, :hva3 FOR :num_rows ROWS;
| Each host variable in the USING clause of the EXECUTE statement represents an
| array of values for the corresponding column of the target of the INSERT statement.
| You can vary the number of rows, specified by num_rows in the example, without
| needing to prepare the INSERT statement again.
| You must specify the FOR n ROWS clause on the EXECUTE statement.
| Setting the fields in the SQLDA: Assume that your program includes the
| standard SQLDA structure declaration and declarations for the program variables
| that point to the SQLDA structure. Before the INSERT statement is prepared and
| executed, you must set the fields in the SQLDA structure for your INSERT
| statement. For C application programs, the code to set the fields looks like this:
| strcpy(sqldaptr->sqldaid,"SQLDA");
| sqldaptr->sqldabc = 192; /* number of bytes of storage allocated for the SQLDA */
| sqldaptr->sqln = 4; /* number of SQLVAR occurrences */
| sqldaptr->sqld = 4;
| varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0])); /* Point to first SQLVAR */
| varptr->sqltype = 500; /* data type SMALLINT */
| varptr->sqllen = 2;
| varptr->sqldata = (char *) hva1;
| varptr->sqlname.length = 8;
| memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x00\x00\x14", 8); /* bytes 5-8 hva size */
| varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]) + 1); /* Point to next SQLVAR */
| varptr->sqltype = 452; /* data type CHAR(6) */
| varptr->sqllen = 6;
| varptr->sqldata = (char *) hva2;
| varptr->sqlname.length = 8;
| memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x00\x00\x14", 8); /* bytes 5-8 hva size */
| varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]) + 2); /* Point to next SQLVAR */
| varptr->sqltype = 448; /* data type VARCHAR(20) */
| varptr->sqllen = 20;
| varptr->sqldata = (char *) hva3;
| varptr->sqlname.length = 8;
| memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x00\x00\x14", 8); /* bytes 5-8 hva size */
| Preparing and executing the statement: The code to prepare and execute the
| INSERT statement looks like this:
| /* Copy the INSERT string into the host variable sqlstmt */
| strcpy(sqlstmt, "INSERT INTO DSN8810.ACT VALUES (?, ?, ?)");
|
| /* Copy the INSERT attributes into the host variable attrvar */
| strcpy(attrvar, "FOR MULTIPLE ROWS");
|
| /* Prepare and execute my_insert using the descriptor */
| EXEC SQL PREPARE my_insert ATTRIBUTES :attrvar FROM :sqlstmt;
| EXEC SQL EXECUTE my_insert USING DESCRIPTOR :*sqldaptr FOR :num_rows ROWS;
| The host variable in the USING clause of the EXECUTE statement names the
| SQLDA that describes the parameter markers in the INSERT statement.
Before you execute DESCRIBE INPUT, you must allocate an SQLDA with enough
instances of SQLVAR to represent all parameter markers in the SQL statements
you want to describe.
After you execute DESCRIBE INPUT, you code the application in the same way as
any other application in which you execute a prepared statement using an SQLDA.
Example using the SQLDA: Suppose that you want to execute this statement
dynamically:
DELETE FROM DSN8810.EMP WHERE EMPNO = ?
The term “fixed-list” does not imply that you must know in advance how many rows
of data will be returned. However, you must know the number of columns and the
data types of those columns. A fixed-list SELECT statement returns a result table
that can contain any number of rows; your program looks at those rows one at a
time, using the FETCH statement. Each successive fetch returns the same number
of values as the last, and the values have the same data types each time.
Therefore, you can specify host variables as you do for static SQL.
An advantage of the fixed-list SELECT is that you can write it in any of the
programming languages that DB2 supports. Varying-list dynamic SELECT
statements require assembler, C, PL/I, and versions of COBOL other than OS/VS
COBOL.
For a sample program that is written in C and that illustrates dynamic SQL with
fixed-list SELECT statements, see Figure 239 on page 944.
Suppose that your program retrieves last names and phone numbers by
dynamically executing SELECT statements of this form:
SELECT LASTNAME, PHONENO FROM DSN8810.EMP
WHERE ... ;
The program reads the statements from a terminal, and the user determines the
WHERE clause.
ATTRVAR contains attributes that you want to add to the SELECT statement, such
as FETCH FIRST 10 ROWS ONLY or OPTIMIZE for 1 ROW. In general, if the
SELECT statement has attributes that conflict with the attributes in the PREPARE
statement, the attributes on the SELECT statement take precedence over the
attributes on the PREPARE statement. However, in this example, the SELECT
statement in DSTRING has no attributes specified, so DB2 uses the attributes in
ATTRVAR for the SELECT statement.
To execute STMT, your program must open the cursor, fetch rows from the result
table, and close the cursor. The following sections describe how to do those steps.
The key feature of this statement is the use of a list of host variables to receive the
values returned by FETCH. The list has a known number of items (in this case, two
items, :NAME and :PHONE) of known data types (both are character strings, of
lengths 15 and 4, respectively).
You can use this list in the FETCH statement only because you planned the
program to use only fixed-list SELECTs. Every row that cursor C1 points to must
contain exactly two character values of appropriate length. If the program is to
handle anything else, it must use the techniques described under “Dynamic SQL for
varying-list SELECT statements.”
Now, the program must find out whether the statement is a SELECT. If it is, the
program must also find out how many values are in each row, and what their data
types are. The information comes from an SQL descriptor area (SQLDA).
For a complete layout of the SQLDA and the descriptions given by INCLUDE
statements, see Appendix C of DB2 SQL Reference.
A program that admits SQL statements of every kind for dynamic execution has two
choices:
v Provide the largest SQLDA that it could ever need. The maximum number of
columns in a result table is 750, so an SQLDA for 750 columns occupies 33 016
bytes for a single SQLDA, 66 016 bytes for a double SQLDA, or 99 016 bytes for
a triple SQLDA. Most SELECT statements do not retrieve 750 columns, so the
program does not usually use most of that space.
v Provide a smaller SQLDA, with fewer occurrences of SQLVAR. From this the
program can find out whether the statement was a SELECT and, if it was, how
many columns are in its result table. If more columns are in the result than the
SQLDA can hold, DB2 returns no descriptions. When this happens, the program
must acquire storage for a second SQLDA that is long enough to hold the
column descriptions, and ask DB2 for the descriptions again. Although this
technique is more complicated to program than the first, it is more general.
How many columns should you allow? You must choose a number that is large
enough for most of your SELECT statements, but not too wasteful of space; 40 is
a good compromise. To illustrate what you must do for statements that return
more columns than allowed, the example in this discussion uses an SQLDA that
is allocated for at least 100 columns.
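The storage figures above follow from a 16-byte SQLDA header plus 44 bytes per SQLVAR occurrence, with double and triple SQLDAs using two and three SQLVAR sets per column. This illustrative helper (not part of any DB2 header) reproduces the arithmetic:

```c
/* SQLDA storage sizing: 16-byte header plus 44 bytes per SQLVAR
   occurrence. sqlvar_sets is 1, 2, or 3 for a single, double, or
   triple SQLDA. Illustrative helper only, not a DB2 API. */
static long sqlda_bytes(int columns, int sqlvar_sets)
{
    return 16L + 44L * (long)columns * (long)sqlvar_sets;
}
```

For example, a single SQLDA for the 750-column maximum is 16 + 750 × 44 = 33 016 bytes, and a single SQLDA allocated for 100 columns is 4 416 bytes.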
Equivalently, you can use the INTO clause in the PREPARE statement:
EXEC SQL
PREPARE STMT INTO :MINSQLDA FROM :DSTRING;
(If the statement does contain parameter markers, you must use an SQL descriptor
area; for instructions, see “Executing arbitrary statements with parameter markers”
on page 565.)
Figure 163 on page 559 shows an SQLDA that describes two columns that are not
LOB columns or distinct type columns. See “Describing tables with LOB and distinct
type columns” on page 563 for an example of describing a result table with LOB
columns or distinct type columns.
In Figure 164, the DESCRIBE statement inserted all the values except the first
occurrence of the number 200. The program inserted the number 200 before it
executed DESCRIBE to tell how many occurrences of SQLVAR to allow. If the result
table of the SELECT has more columns than this, the SQLVAR fields describe
nothing.
The first SQLVAR pertains to the first column of the result table (the WORKDEPT
column). SQLVAR element 1 describes a fixed-length character-string column that
does not allow null values (SQLTYPE=452); the length attribute is 3. For information about
SQLTYPE values, see Appendix C of DB2 SQL Reference.
Figure 165 on page 560 shows the SQLDA after your program acquires storage for
the column values and their indicators, and puts the addresses in the SQLDATA
fields of the SQLDA.
Figure 166 shows the SQLDA after your program executes a FETCH statement.
Figure 165 on page 560 shows the content of the descriptor area before the
program obtains any rows of the result table. Addresses of fields and indicator
variables are already in the SQLVAR.
You can set the default application encoding scheme for a plan or package by
specifying the value in the APPLICATION ENCODING field of the panel DEFAULTS
FOR BIND PACKAGE or DEFAULTS FOR BIND PLAN. The default application
encoding scheme for the DB2 subsystem is the value that was specified in the
APPLICATION ENCODING field of installation panel DSNTIPF.
If you want to retrieve the data in an encoding scheme and CCSID other than the
default values, you can use one of the following techniques:
v For dynamic SQL, set the CURRENT APPLICATION ENCODING SCHEME
special register before you execute the SELECT statements. For example, to set
the CCSID and encoding scheme for retrieved data to the default CCSID for
Unicode, execute this SQL statement:
EXEC SQL SET CURRENT APPLICATION ENCODING SCHEME = 'UNICODE';
The initial value of this special register is the application encoding scheme that is
determined by the BIND option.
v For static and dynamic SQL statements that use host variables and host variable
arrays, use the DECLARE VARIABLE statement to associate CCSIDs with the
host variables into which you retrieve the data. See “Changing the coded
character set ID of host variables” on page 77 for information about this
technique.
v For static and dynamic SQL statements that use a descriptor, set the CCSID for
the retrieved data in the SQLDA. The following text describes that technique.
To change the encoding scheme for SQL statements that use a descriptor, set up
the SQLDA, and then make these additional changes to the SQLDA:
1. Put the character + in the sixth byte of field SQLDAID.
2. For each SQLVAR entry:
v Set the length field of SQLNAME to 8.
v Set the first two bytes of the data field of SQLNAME to X'0000'.
v Set the third and fourth bytes of the data field of SQLNAME to the CCSID, in
hexadecimal, in which you want the results to display. You can specify any
CCSID that meets either of the following conditions:
– A row in catalog table SYSSTRINGS has a matching value for
OUTCCSID.
For example, suppose that the table that contains WORKDEPT and PHONENO is
defined with CCSID ASCII. To retrieve data for columns WORKDEPT and
PHONENO in ASCII CCSID 437 (X'01B5'), change the SQLDA as shown in
Figure 167.
Figure 167. SQL descriptor area for retrieving data in ASCII CCSID 437
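The SQLNAME changes in steps 1 and 2 above can be sketched in C. Here set_sqlname_ccsid is an illustrative helper, not a DB2 API; it packs a CCSID into an 8-byte SQLNAME data area as the text describes (bytes 1-2 set to X'0000', bytes 3-4 set to the CCSID):

```c
#include <string.h>

/* Pack a CCSID into the 8-byte SQLNAME data area: bytes 1-2 are
   X'0000', bytes 3-4 hold the CCSID, remaining bytes are zero. */
static void set_sqlname_ccsid(unsigned char data[8], unsigned short ccsid)
{
    memset(data, 0, 8);
    data[2] = (unsigned char)(ccsid >> 8);   /* high-order CCSID byte */
    data[3] = (unsigned char)(ccsid & 0xFF); /* low-order CCSID byte  */
}
```

For ASCII CCSID 437 (X'01B5'), bytes 3 and 4 become X'01' and X'B5'.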
In this case, SQLNAME contains nothing for a column with no label. If you prefer to
use labels wherever they exist, but column names where there are no labels, write
USING ANY. (Some columns, such as those derived from functions or expressions,
have neither name nor label; SQLNAME contains nothing for those columns.
However, if the column is the result of a UNION, SQLNAME contains the names of
the columns of the first operand of the UNION.)
You can also write USING BOTH to obtain the name and the label when both exist.
However, to obtain both, you need a second set of occurrences of SQLVAR in
FULSQLDA. The first set contains descriptions of all the columns using names; the
second set contains descriptions using labels. This means that you must allocate a
longer SQLDA for the second DESCRIBE statement ((16 + SQLD * 88 bytes)
instead of (16 + SQLD * 44)). You must also put double the number of columns
(SQLD * 2) in the SQLN field of the second SQLDA. Otherwise, if not enough
space is available, DESCRIBE does not enter descriptions of any of the columns.
To illustrate this, suppose that you want to execute this SELECT statement:
SELECT USER, A_DOC FROM DOCUMENTS;
The USER column cannot contain nulls and is of distinct type ID, defined like this:
CREATE DISTINCT TYPE SCHEMA1.ID AS CHAR(20);
The result table for this statement has two columns, but you need four SQLVAR
occurrences in your SQLDA because the result table contains a LOB type and a
distinct type. Suppose that you prepare and describe this statement into
FULSQLDA, which is large enough to hold four SQLVAR occurrences. FULSQLDA
looks like Figure 168.
Figure 168. SQL descriptor area after describing a CLOB and distinct type
The next steps are the same as for result tables without LOBs or distinct types:
1. Analyze each SQLVAR description to determine the maximum amount of space
you need for the column value.
For a LOB type, retrieve the length from the SQLLONGL field instead of the
SQLLEN field.
2. Derive the address of some storage area of the required size.
For a LOB data type, you also need a 4-byte storage area for the length of the
LOB data. You can allocate this 4-byte area at the beginning of the LOB data or
in a different location.
3. Put this address in the SQLDATA field.
For a LOB data type, if you allocated a separate area to hold the length of the
LOB data, put the address of the length field in SQLDATAL. If the length field is
at the beginning of the LOB data area, put 0 in SQLDATAL.
4. If the SQLTYPE field indicates that the value can be null, the program must also
put the address of an indicator variable in the SQLIND field.
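Steps 2 and 3 for a LOB column can be sketched as follows. The struct is a local mock for illustration only (not the actual SQLDA declaration), and bind_lob is a hypothetical helper; the field names follow the text (SQLDATA, SQLDATAL):

```c
#include <stdlib.h>

/* Mock of the per-column pointers the text describes for a LOB SQLVAR:
   SQLDATA addresses the value storage, SQLDATAL addresses a 4-byte
   length area (or is 0 when the length word sits at the start of the
   LOB data area). */
struct mock_lob_var {
    char *sqldata;           /* address of the LOB value storage        */
    unsigned int *sqldatal;  /* address of the 4-byte length area, or 0 */
};

static int bind_lob(struct mock_lob_var *v, size_t max_len, int separate_len)
{
    if (separate_len) {
        v->sqldatal = malloc(sizeof(unsigned int)); /* separate length area */
        v->sqldata  = malloc(max_len);
        return v->sqldata != 0 && v->sqldatal != 0;
    }
    /* Length word at the beginning of the LOB data area: SQLDATAL is 0. */
    v->sqldata  = malloc(sizeof(unsigned int) + max_len);
    v->sqldatal = 0;
    return v->sqldata != 0;
}
```

Either layout is valid; what matters is that SQLDATAL tells DB2 where the 4-byte length word lives.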
Figure 169 on page 564 shows the contents of FULSQLDA after you fill in pointers
to the storage locations.
Figure 170 shows the contents of FULSQLDA after you execute a FETCH
statement.
Figure 170. SQL descriptor area after executing FETCH on a table with CLOB and distinct
type columns
For cases when there are parameter markers, see “Executing arbitrary statements
with parameter markers” on page 565.
The key feature of this statement is the clause USING DESCRIPTOR :FULSQLDA.
That clause names an SQL descriptor area in which the occurrences of SQLVAR
point to other areas. Those other areas receive the values that FETCH returns. It is
possible to use that clause only because you previously set up FULSQLDA to look
like Figure 164 on page 559.
Figure 166 on page 560 shows the result of the FETCH. The data areas identified
in the SQLVAR fields receive the values from a single row of the result table.
Successive executions of the same FETCH statement put values from successive
rows of the result table into these same areas.
When COMMIT ends the unit of work containing OPEN, the statement in STMT
reverts to the unprepared state. Unless you defined the cursor using the WITH
HOLD option, you must prepare the statement again before you can reopen the
cursor.
In both cases, the number and types of host variables named must agree with the
number of parameter markers in STMT and the types of parameters they represent.
The first variable (VAR1 in the examples) must have the type expected for the first
parameter marker in the statement, the second variable must have the type
expected for the second marker, and so on. There must be at least as many
variables as parameter markers.
The structure of DPARM is the same as that of any other SQLDA. The number of
occurrences of SQLVAR can vary, as in previous examples. In this case, every
parameter marker must have one SQLVAR. Each occurrence of SQLVAR describes
one host variable that replaces one parameter marker at run time. DB2 replaces the
parameter markers when a non-SELECT statement executes or when a cursor is
opened for a SELECT statement.
You must fill in certain fields in DPARM before using EXECUTE or OPEN; you can
ignore the other fields.
Field Use when describing host variables for parameter markers
SQLDAID The seventh byte indicates whether more than one SQLVAR entry
is used for each parameter marker. If this byte is not blank, at least
one parameter marker represents a distinct type or LOB value, so
the SQLDA has more than one set of SQLVAR entries.
You do not set this field for a REXX SQLDA.
SQLDABC The length of the SQLDA, which is equal to SQLN * 44 + 16. You
do not set this field for a REXX SQLDA.
SQLN The number of occurrences of SQLVAR allocated for DPARM. You
do not set this field for a REXX SQLDA.
SQLD The number of occurrences of SQLVAR actually used. This number
must not be less than the number of parameter markers. In each
occurrence of SQLVAR, put information in the following fields:
SQLTYPE, SQLLEN, SQLDATA, SQLIND.
SQLTYPE The code for the type of variable, and whether it allows nulls.
SQLLEN The length of the host variable.
SQLDATA The address of the host variable.
For REXX, this field contains the value of the host variable.
SQLIND The address of an indicator variable, if needed.
For REXX, this field contains a negative number if the value in
SQLDATA is null.
SQLNAME Ignore.
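As an illustration of the fields just listed, the following sketch fills in one SQLVAR for the CHAR(6) parameter marker in the earlier DELETE example. The struct is a local mock for illustration, not the actual SQLDA declaration, and describe_empno is a hypothetical helper:

```c
/* Mock of the SQLVAR fields the text says you must set for each
   parameter marker: SQLTYPE, SQLLEN, SQLDATA, and SQLIND. */
struct mock_sqlvar {
    short sqltype;   /* type code; 452 = fixed-length CHAR, NOT NULL */
    short sqllen;    /* length of the host variable                  */
    char *sqldata;   /* address of the host variable                 */
    short *sqlind;   /* address of an indicator variable, if needed  */
};

/* Describe a CHAR(6) host variable (such as EMPNO) for one marker. */
static void describe_empno(struct mock_sqlvar *v, char *empno, short *ind)
{
    v->sqltype = 452;
    v->sqllen  = 6;
    v->sqldata = empno;
    v->sqlind  = ind;
}
```

At run time, DB2 reads these fields to substitute the host variable for the parameter marker when EXECUTE or OPEN runs.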
| When you specify the bind option REOPT(ONCE), DB2 optimizes the access path
| only once, at the first EXECUTE or OPEN, for SQL statements that contain host
| variables, parameter markers, or special registers. The option REOPT(ONCE) has
| the following effects on dynamic SQL statements:
| v When you specify the option REOPT(ONCE), DB2 automatically uses
| DEFER(PREPARE), which means that DB2 waits to prepare a statement until it
| encounters an OPEN or EXECUTE statement.
| v When DB2 prepares a statement using REOPT(ONCE), it saves the access path
| in the dynamic statement cache. This access path is used each time the
| statement is run, until the statement that is in the cache is invalidated (or
| removed from the cache) and needs to be rebound.
| v The DESCRIBE statement has the following effects on dynamic statements that
| are bound with REOPT(ONCE):
| – When you execute a DESCRIBE statement before an EXECUTE statement
| on a non-SELECT statement, DB2 prepares the statement twice if it is not
| already saved in the cache: Once for the DESCRIBE statement and once for
| the EXECUTE statement. DB2 uses the values of the input variables only
| during the second time the statement is prepared. It then saves the statement
| in the cache. If you execute a DESCRIBE statement before an EXECUTE
This chapter contains information that applies to all stored procedures and specific
information about stored procedures in languages other than Java. For information
about writing, preparing, and running Java stored procedures, see DB2 Application
Programming Guide and Reference for Java.
Consider using stored procedures for a client/server application that does at least
one of the following things:
v Executes multiple remote SQL statements.
Remote SQL statements can create many network send and receive operations,
which results in increased processor costs.
Stored procedures can encapsulate many of your application’s SQL statements
into a single message to the DB2 server, reducing network traffic to a single send
and receive operation for a series of SQL statements.
Locks on DB2 tables are not held across network transmissions, which reduces
contention for resources at the server.
v Accesses tables from a dynamic SQL environment where table privileges for the
application that is running are undesirable.
Stored procedures allow static SQL authorization from a dynamic environment.
v Accesses host variables for which you want to guarantee security and integrity.
Stored procedures remove SQL applications from the workstation, which prevents
workstation users from manipulating the contents of sensitive SQL statements
and host variables.
v Creates a result set of rows to return to the client application.
Figure 171 on page 570 and Figure 172 on page 570 illustrate the difference
between using stored procedures and not using stored procedures for processing in
a client/server environment.
Figure 172 shows processing with stored procedures. The same series of SQL
statements that are illustrated in Figure 171 uses a single send and receive
operation, reducing network traffic and the cost of processing these statements.
Figure 172 (diagram): on the z/OS system, the client issues a single EXEC SQL
CALL PROCX. DB2 schedules PROCX in the stored procedures region, where it
performs the SQL statements (EXEC SQL DECLARE C1, OPEN C1, UPDATE, and INSERT)
before returning to the client.
The system administrator needs to perform these tasks to prepare the DB2
subsystem to run stored procedures:
| v Move existing stored procedures to a WLM environment, or set up WLM
| environments for new stored procedures.
| You can run only existing stored procedures in a DB2-established stored
| procedure address space; the support for this type of address space is being
| deprecated. If you are currently using DB2-established address spaces, see
| “Moving stored procedures to a WLM-established environment (for system
| administrators)” on page 580 for information about what needs to be done.
v Define JCL procedures for the stored procedures address spaces.
Member DSNTIJMV of data set DSN810.SDSNSAMP contains sample JCL
procedures for starting WLM-established and DB2-established address spaces. If
you enter a WLM procedure name or a DB2 procedure name in installation panel
DSNTIPX, DB2 customizes a JCL procedure for you. See Part 2 of DB2
Installation Guide for details.
v For WLM-established address spaces, define WLM application environments for
groups of stored procedures and associate a JCL startup procedure with each
application environment.
See Part 5 (Volume 2) of DB2 Administration Guide for information about how to
do this.
v If you plan to execute stored procedures that use the ODBA interface to access
IMS databases, modify the startup procedures for the address spaces in which
those stored procedures will run in the following way:
– Add the data set name of the IMS data set that contains the ODBA callable
interface code (usually IMS.RESLIB) to the end of the STEPLIB
concatenation.
– After the STEPLIB DD statement, add a DFSRESLB DD statement that
names the IMS data set that contains the ODBA callable interface code.
v If you plan to execute LANGUAGE JAVA stored procedures, set up the JCL and
install the software prerequisites, as described in DB2 Application Programming
Guide and Reference for Java.
v Install Language Environment and the appropriate compilers.
See z/OS Language Environment Customization for information about installing
Language Environment.
See “Language requirements for the stored procedure and its caller” on page 581
for minimum compiler and Language Environment requirements.
The system administrator needs to perform these tasks for each stored procedure:
v Be sure that the library in which the stored procedure resides is in the STEPLIB
concatenation of the startup procedure for the stored procedures address space.
v Use the CREATE PROCEDURE statement to define the stored procedure to DB2
and ALTER PROCEDURE to modify the definition.
See “Defining your stored procedure to DB2” on page 575 for details.
v Perform security tasks for the stored procedure.
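The definition task can be sketched with a minimal CREATE PROCEDURE statement. The procedure name, parameter list, load module name, and WLM environment name in this sketch are assumptions for illustration:

```sql
CREATE PROCEDURE SYSPROC.UPDSAL          -- hypothetical procedure name
  (IN EMPNUMBR CHAR(10),                 -- employee to update
   INOUT SALARY DECIMAL(9,2))           -- salary in, updated salary out
  LANGUAGE COBOL
  EXTERNAL NAME UPDSAL                   -- load module in the STEPLIB concatenation
  PARAMETER STYLE GENERAL
  WLM ENVIRONMENT PAYROLL;
```

A later ALTER PROCEDURE statement can then modify this definition without recreating the procedure.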
Notes:
1. This value is invalid for a REXX stored procedure.
| 2. This value is ignored for a REXX stored procedure. Specifying PROGRAM TYPE SUB with REXX will not result in
| an error; however, a value of MAIN will be stored in the DB2 catalog and used at runtime.
3. This value is ignored for a REXX stored procedure.
| 4. DBINFO is valid only with PARAMETER STYLE SQL.
See “Linkage conventions” on page 623 for an example of coding the DBINFO
parameter list in a stored procedure.
Later, you need to make the following changes to the stored procedure definition:
v It selects data from DB2 tables but does not modify DB2 data.
v The parameters can have null values, and the stored procedure can return a
diagnostic string.
v The length of time the stored procedure runs should not be limited.
v If the stored procedure is called by another stored procedure or a user-defined
function, the stored procedure uses the WLM environment of the caller.
Execute this ALTER PROCEDURE statement to make the necessary changes:
ALTER PROCEDURE B
READS SQL DATA
ASUTIME NO LIMIT
| PARAMETER STYLE SQL
WLM ENVIRONMENT (PAYROLL,*);
The method that you use to perform these tasks depends on whether you are using
WLM-established or DB2-established address spaces.
There are two types of stored procedures: external stored procedures and SQL
procedures:
v External stored procedures are written in a host language. The source code for
an external stored procedure is separate from the definition for the stored
procedure. An external stored procedure is much like any other SQL application.
It can include static or dynamic SQL statements, IFI calls, and DB2 commands
issued through IFI.
v SQL procedures are written using SQL procedure statements, which are part of
a CREATE PROCEDURE statement.
This section discusses writing and preparing external stored procedures. “Writing
and preparing an SQL procedure” on page 597 discusses writing and preparing
SQL procedures.
The program that calls the stored procedure can be in any language that supports
the SQL CALL statement. ODBC applications can use an escape clause to pass a
stored procedure call to DB2.
If the stored procedure calls other programs that contain SQL statements, each of
those called programs must have a DB2 package. The owner of the package or
plan that contains the CALL statement must have EXECUTE authority for all
packages that the other programs use.
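For example, if the called programs' packages were bound into a hypothetical collection COLLA, the authority might be granted to a hypothetical owner ID as follows:

```sql
GRANT EXECUTE ON PACKAGE COLLA.*
  TO OWNERID;   -- hypothetical owner of the plan or package with the CALL
```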
When a stored procedure calls another program, DB2 determines which collection
the called program’s package belongs to in one of the following ways:
| v If the stored procedure definition contains COLLID collection-id, DB2 uses
| collection-id.
| v If the stored procedure executes SET CURRENT PACKAGE PATH and contains
| the NO COLLID option, the called program’s package comes from the list of
| collections in the CURRENT PACKAGE PATH special register. For example, if
| CURRENT PACKAGE PATH contains the list COLL1, COLL2, COLL3, COLL4,
| DB2 searches for the first package (in the order of the list) that exists in these
| collections.
| v If the stored procedure does not execute SET CURRENT PACKAGE PATH and
| instead executes SET CURRENT PACKAGESET, the called program’s package
| comes from the collection that is specified in the CURRENT PACKAGESET
| special register.
| v If the stored procedure does not execute SET CURRENT PACKAGE PATH or
| SET CURRENT PACKAGESET, and the stored procedure definition contains the
| NO COLLID option, DB2 uses the collection ID of the package that contains the
| SQL statement CALL.
| When control returns from the stored procedure, DB2 restores the value of the
| CURRENT PACKAGESET special register to the value it contained before the
| client program executed the SQL statement CALL.
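As a sketch, a stored procedure defined with NO COLLID might direct package resolution for the programs it calls with either special register; the collection names here are assumptions:

```sql
EXEC SQL SET CURRENT PACKAGE PATH = 'COLL1', 'COLL2', 'COLL3', 'COLL4';
-- or, to name a single collection:
EXEC SQL SET CURRENT PACKAGESET = 'COLL2';
```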
/******************************************************************/
/* This C subprogram is a stored procedure that uses linkage */
/* convention GENERAL and receives 3 parameters. */
/******************************************************************/
#pragma linkage(cfunc,fetchable)
#include <stdlib.h>
void cfunc(char p1[11],long *p2,short *p3)
{
/****************************************************************/
/* Declare variables used for SQL operations. These variables */
/* are local to the subprogram and must be copied to and from */
/* the parameter list for the stored procedure call. */
/****************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char parm1[11];
long int parm2;
short int parm3;
EXEC SQL END DECLARE SECTION;
/*************************************************************/
/* Receive input parameter values into local variables. */
/*************************************************************/
strcpy(parm1,p1);
parm2 = *p2;
parm3 = *p3;
/*************************************************************/
/* Perform operations on local variables. */
/*************************************************************/
.
.
.
/*************************************************************/
/* Set values to be passed back to the caller. */
/*************************************************************/
strcpy(parm1,"SETBYSP");
parm2 = 100;
parm3 = 200;
/*************************************************************/
/* Copy values to output parameters. */
/*************************************************************/
strcpy(p1,parm1);
*p2 = parm2;
*p3 = parm3;
}
Figure 176 on page 585 shows an example of coding a C++ stored procedure as a
subprogram. Except for the extern "C" linkage specification, which prevents C++
name mangling of the entry point name, the subprogram is coded like the C
example:
/******************************************************************/
/* This C++ subprogram is a stored procedure that uses linkage */
/* convention GENERAL and receives 3 parameters. */
/******************************************************************/
#pragma linkage(cppfunc,fetchable)
#include <stdlib.h>
extern "C" void cppfunc(char p1[11],long *p2,short *p3)
{
/****************************************************************/
/* Declare variables used for SQL operations. These variables */
/* are local to the subprogram and must be copied to and from */
/* the parameter list for the stored procedure call. */
/****************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char parm1[11];
long int parm2;
short int parm3;
EXEC SQL END DECLARE SECTION;
/*************************************************************/
/* Receive input parameter values into local variables. */
/*************************************************************/
strcpy(parm1,p1);
parm2 = *p2;
parm3 = *p3;
/*************************************************************/
/* Perform operations on local variables. */
/*************************************************************/
.
.
.
/*************************************************************/
/* Set values to be passed back to the caller. */
/*************************************************************/
strcpy(parm1,"SETBYSP");
parm2 = 100;
parm3 = 200;
/*************************************************************/
/* Copy values to output parameters. */
/*************************************************************/
strcpy(p1,parm1);
*p2 = parm2;
*p3 = parm3;
}
You cannot include ROLLBACK statements in a stored procedure if DB2 is not the
commit coordinator.
Table 77 shows information that you need to use special registers in a stored
procedure.
Table 77. Characteristics of special registers in a stored procedure

                              Initial value when INHERIT   Initial value when DEFAULT   Function can
                              SPECIAL REGISTERS option     SPECIAL REGISTERS option     use SET to
  Special register            is specified                 is specified                 modify?
| CURRENT CLIENT_ACCTNG       Inherited from invoking      Inherited from invoking      Not applicable
|                             application                  application                  (note 5)
| CURRENT CLIENT_APPLNAME     Inherited from invoking      Inherited from invoking      Not applicable
|                             application                  application                  (note 5)
Notes:
1. If the ENCODING bind option is not specified, the initial value is the value that was
specified in field APPLICATION ENCODING of installation panel DSNTIPF.
2. If the stored procedure is invoked within the scope of a trigger, DB2 uses the timestamp
for the triggering SQL statement as the timestamp for all SQL statements in the stored
procedure package.
3. DB2 allows parallelism at only one level of a nested SQL statement. If you set the value
of the CURRENT DEGREE special register to ANY, and parallelism is disabled, DB2
ignores the CURRENT DEGREE value.
4. If the stored procedure definer specifies a value for COLLID in the CREATE
PROCEDURE statement, DB2 sets CURRENT PACKAGESET to the value of COLLID.
5. Not applicable because no SET statement exists for the special register.
6. If a program within the scope of the invoking application issues a SET statement for the
special register before the stored procedure is invoked, the special register inherits the
value from the SET statement. Otherwise, the special register contains the value that is
set by the bind option for the stored procedure package.
7. If a program within the scope of the invoking application issues a SET CURRENT SQLID
statement before the stored procedure is invoked, the special register inherits the value
from the SET statement. Otherwise, CURRENT SQLID contains the authorization ID of
the application process.
8. If the stored procedure package uses a value other than RUN for the DYNAMICRULES
bind option, the SET CURRENT SQLID statement can be executed but does not affect
the authorization ID that is used for the dynamic SQL statements in the stored procedure
package. The DYNAMICRULES value determines the authorization ID that is used for
dynamic SQL statements. See “Using DYNAMICRULES to specify behavior of dynamic
SQL statements” on page 479 for more information about DYNAMICRULES values and
authorization IDs.
When a local DB2 application calls a stored procedure, the stored procedure cannot
have DB2 private protocol access to any DB2 sites already connected to the calling
program by DRDA access.
The local DB2 application cannot use DRDA access to connect to any location that
the stored procedure has already accessed using DB2 private protocol access.
Before making the DRDA connection, the local DB2 application must first execute
the RELEASE statement to terminate the DB2 private protocol connection, and
then commit the unit of work.
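The RELEASE-then-commit sequence can be sketched as follows, for a hypothetical location SANJOSE:

```sql
EXEC SQL RELEASE SANJOSE;   -- terminate the DB2 private protocol connection
EXEC SQL COMMIT;            -- then commit the unit of work
```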
ODBA support uses RRS for syncpoint control of DB2 and IMS resources.
Therefore, stored procedures that use ODBA can run only in WLM-established
stored procedures address spaces.
When you write a stored procedure that uses ODBA, follow the rules for writing an
IMS application program that issues DL/I calls. See IMS Application Programming:
Database Manager and IMS Application Programming: Transaction Manager for
information about writing DL/I applications.
IMS work that is performed in a stored procedure is in the same commit scope as
the stored procedure. As with any other stored procedure, the calling application
commits work.
A stored procedure that uses ODBA must issue a DPSB PREP call to deallocate a
PSB when all IMS work under that PSB is complete. The PREP keyword tells IMS
to move inflight work to an indoubt state. When work is in the indoubt state, IMS
does not require activation of syncpoint processing when the DPSB call is
executed. IMS commits or backs out the work as part of RRS two-phase commit
when the stored procedure caller executes COMMIT or ROLLBACK.
A sample COBOL stored procedure and client program demonstrate accessing IMS
data using the ODBA interface. The stored procedure source code is in member
DSN8EC1 and is prepared by job DSNTEJ61. The calling program source code is
in member DSN8EC2 and is prepared and executed by job DSNTEJ62. All code is
in data set DSN810.SDSNSAMP.
The startup procedure for a stored procedures address space in which stored
procedures that use ODBA run must include a DFSRESLB DD statement and an
extra data set in the STEPLIB concatenation. See “Setting up the stored procedures
environment” on page 574 for more information.
For each result set you want returned, your stored procedure must:
v Declare a cursor with the option WITH RETURN.
v Open the cursor.
v If the cursor is scrollable, ensure that the cursor is positioned before the first row
of the result table.
v Leave the cursor open.
When the stored procedure ends, DB2 returns the rows in the query result set to
the client.
Example: Declaring a cursor to return a result set: Suppose you want to return
a result set that contains entries for all employees in department D11. First, declare
a cursor that describes this subset of employees:
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT * FROM DSN8810.EMP
WHERE WORKDEPT=’D11’;
DB2 returns the result set and the name of the SQL cursor for the stored procedure
to the client.
Use meaningful cursor names for returning result sets: The name of the cursor
that is used to return result sets is made available to the client application through
extensions to the DESCRIBE statement. See “Writing a DB2 UDB for z/OS client
program or SQL procedure to receive result sets” on page 648 for more information.
Use cursor names that are meaningful to the DRDA client application, especially
when the stored procedure returns multiple result sets.
Objects from which you can return result sets: You can use any of these objects
in the SELECT statement that is associated with the cursor for a result set:
v Tables, synonyms, views, created temporary tables, declared temporary tables,
and aliases defined at the local DB2 subsystem
v Tables, synonyms, views, created temporary tables, and aliases defined at
remote DB2 UDB for z/OS systems that are accessible through DB2 private
protocol access
Returning a subset of rows to the client: If you execute FETCH statements with
a result set cursor, DB2 does not return the fetched rows to the client program. For
example, if you declare a cursor WITH RETURN and then execute the statements
OPEN, FETCH, and FETCH, the client receives data beginning with the third row in
the result set. If the result set cursor is scrollable and you fetch rows with it, you
need to position the cursor before the first row of the result table after you fetch the
rows and before the stored procedure ends.
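The repositioning described above can be done with a FETCH BEFORE statement; C1 here stands for the scrollable result set cursor:

```sql
EXEC SQL FETCH BEFORE FROM C1;   -- reposition before the first row
```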
Using a temporary table to return result sets: You can use a created temporary
table or declared temporary table to return result sets from a stored procedure. This
capability can be used to return nonrelational data to a DRDA client.
For example, you can access IMS data from a stored procedure in the following
way:
v Use APPC/MVS to issue an IMS transaction.
v Receive the IMS reply message, which contains data that should be returned to
the client.
v Insert the data from the reply message into a temporary table.
v Open a cursor against the temporary table. When the stored procedure ends, the
rows from the temporary table are returned to the client.
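The last two steps can be sketched as follows. The temporary table name, its column layout, and the host variables are assumptions; the created temporary table is assumed to exist already:

```sql
-- Assumed to have been created earlier, for example:
--   CREATE GLOBAL TEMPORARY TABLE IMSREPLY
--     (MSGSEQ INTEGER, MSGTEXT VARCHAR(255));
EXEC SQL DECLARE C2 CURSOR WITH RETURN FOR
  SELECT MSGSEQ, MSGTEXT FROM IMSREPLY
  ORDER BY MSGSEQ;
EXEC SQL INSERT INTO IMSREPLY VALUES (:msgseq, :msgtext);
EXEC SQL OPEN C2;   -- leave open; rows return to the client at procedure end
```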
The calling application can use a DB2 package or plan to execute the CALL
statement. The stored procedure must use a DB2 package as Figure 177 shows.
The server program might use more than one package. These packages come from
two sources:
v A DBRM that you bind several times into several versions of the same package,
all with the same package name, which can then reside in different collections.
Your stored procedure can switch from one version to another by using the
statement SET CURRENT PACKAGESET.
v A package associated with another program that contains SQL statements that
the stored procedure calls.
Unlike other stored procedures, REXX stored procedures require no preparation for
execution. REXX stored procedures run using one of four packages that are bound
| during the installation of DB2 REXX Language Support. The current isolation level
| at which the stored procedure runs depends on the package that DB2 uses when
| the stored procedure runs:
Package name Isolation level
DSNREXRR Repeatable read (RR)
DSNREXRS Read stability (RS)
DSNREXCS Cursor stability (CS)
DSNREXUR Uncommitted read (UR)
Figure 179 on page 595 shows an example of a REXX stored procedure that
executes DB2 commands. The stored procedure performs the following actions:
v Receives one input parameter, which contains a DB2 command.
v Calls the IFI COMMAND function to execute the command.
v Extracts the command result messages from the IFI return area and places the
messages in a created temporary table. Each row of the temporary table
contains a sequence number and the text of one message.
v Opens a cursor to return a result set that contains the command result
messages.
v Returns the unformatted contents of the IFI return area in an output parameter.
Figure 179 on page 595 shows the COMMAND stored procedure that executes
DB2 commands.
Creating an SQL procedure involves writing the source statements for the SQL
procedure, creating the executable form of the SQL procedure, and defining the
SQL procedure to DB2. There are two ways to create an SQL procedure:
v Use the IBM DB2 Development Center product to specify the source statements
for the SQL procedure, define the SQL procedure to DB2, and prepare the SQL
procedure for execution.
v Write a CREATE PROCEDURE statement that contains the SQL procedure
source, and use one of the methods that are described in “Preparing an SQL
procedure” on page 609 to define the SQL procedure to DB2 and create an
executable form of it.
This section discusses how to write and prepare an SQL procedure. The following
topics are included:
v “Comparison of an SQL procedure and an external procedure”
v “Statements that you can include in a procedure body” on page 600
v “Terminating statements in an SQL procedure” on page 602
v “Handling SQL conditions in an SQL procedure” on page 603
v “Examples of SQL procedures” on page 607
v “Preparing an SQL procedure” on page 609
For information about the syntax of the CREATE PROCEDURE statement and the
procedure body, see DB2 SQL Reference.
An external stored procedure definition and an SQL procedure definition specify the
following common information:
v The procedure name.
v Input and output parameter attributes.
v The language in which the procedure is written. For an SQL procedure, the
language is SQL.
v Information that will be used when the procedure is called, such as run-time
options, length of time that the procedure can run, and whether the procedure
returns result sets.
An external stored procedure and an SQL procedure share the same rules for the
use of COMMIT and ROLLBACK statements in a procedure. For information about
the restrictions for the use of these statements and their effect, see “Using COMMIT
and ROLLBACK statements in a stored procedure” on page 586.
An external stored procedure and an SQL stored procedure differ in the way that
they handle errors.
v For an external stored procedure, DB2 does not return SQL conditions in the
| SQLCA to the workstation application. If you use PARAMETER STYLE SQL
when you define an external procedure, you can set SQLSTATE to indicate an
error before the procedure ends. For valid SQLSTATE values, see “Passing
parameter values to and from a user-defined function” on page 303.
v For an SQL stored procedure, DB2 automatically returns SQL conditions in the
SQLCA when the procedure does not include a RETURN statement or a handler.
For information about the various ways to handle errors in an SQL stored
procedure, see “Handling SQL conditions in an SQL procedure” on page 603.
An external stored procedure and an SQL procedure also differ in the way that they
specify the code for the stored procedure. An external stored procedure definition
specifies the name of the stored procedure program. An SQL procedure definition
contains the source code for the stored procedure.
For an external stored procedure, you define the stored procedure to DB2 by
executing the CREATE PROCEDURE statement, and you change the definition by
executing the ALTER PROCEDURE statement.
Figure 180 shows a definition for an external stored procedure that is written in
COBOL. The stored procedure program, which updates employee salaries, is called
UPDSAL.
See the discussion of the procedure body in DB2 SQL Reference for detailed
descriptions and syntax of each of these statements.
The general form of a declaration for an SQL variable that you use as a result set
locator is:
DECLARE SQL-variable-name data-type RESULT_SET_LOCATOR VARYING;
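For instance, a procedure that receives one result set from a procedure it calls might declare the following locator variable (the name is an assumption):

```sql
DECLARE RS_LOC1 RESULT_SET_LOCATOR VARYING;
```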
You can perform any operations on SQL variables that you can perform on host
variables in SQL statements.
Qualifying SQL variable names and other object names is a good way to avoid
ambiguity. Use the following guidelines to determine when to qualify variable
names:
v When you use an SQL procedure parameter in the procedure body, qualify the
parameter name with the procedure name.
v Specify a label for each compound statement, and qualify SQL variable names in
the compound statement with that label.
v Qualify column names with the associated table or view names.
Recommendation: Because the way that DB2 determines the qualifier for
unqualified names might change in the future, qualify all SQL variable names to
avoid changing your code later.
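A sketch that applies all three guidelines follows; the procedure, label, and variable names are assumptions. The parameter is qualified by the procedure name, the SQL variable by the compound-statement label, and the column by a correlation name for the table:

```sql
CREATE PROCEDURE GETSAL (IN EMPNO CHAR(6), OUT SALARY_OUT DECIMAL(9,2))
  LANGUAGE SQL
  P1: BEGIN
    DECLARE SALARY_V DECIMAL(9,2);
    SELECT E.SALARY INTO P1.SALARY_V       -- variable qualified by label P1
      FROM DSN8810.EMP E
      WHERE E.EMPNO = GETSAL.EMPNO;        -- parameter qualified by procedure name
    SET SALARY_OUT = P1.SALARY_V;
  END P1
```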
In general, a handler works in the following way: when an error occurs that
matches condition, the SQL-procedure-statement executes. When the
SQL-procedure-statement completes, DB2 performs the action that is indicated by
handler-type.
Types of handlers: The handler type determines what happens after the
completion of the SQL-procedure-statement. You can declare the handler type to be
either CONTINUE or EXIT:
CONTINUE
Specifies that after SQL-procedure-statement completes, execution continues
with the statement after the statement that caused the error.
EXIT
Specifies that after SQL-procedure-statement completes, execution continues at
the end of the compound statement that contains the handler.
Example: CONTINUE handler: This handler sets flag at_end when no more rows
satisfy a query. The handler then causes execution to continue after the statement
that returned no rows.
DECLARE CONTINUE HANDLER FOR NOT FOUND SET at_end=1;
Before you can reference SQLCODE or SQLSTATE values in a handler, you must
declare the SQLCODE and SQLSTATE as SQL variables. The definitions are:
DECLARE SQLCODE INTEGER;
DECLARE SQLSTATE CHAR(5);
If you want to pass the SQLCODE or SQLSTATE values to the caller, your SQL
procedure definition needs to include output parameters for those values. After an
error occurs, and before control returns to the caller, you can assign the value of
SQLCODE or SQLSTATE to the corresponding output parameter.
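A sketch of this technique, with an assumed output parameter name, follows. The handler captures the SQLCODE value before any further statement can reset it:

```sql
-- Within the SQL procedure body:
DECLARE SQLCODE INTEGER DEFAULT 0;
DECLARE EXIT HANDLER FOR SQLEXCEPTION
  SET OUT_SQLCODE = SQLCODE;   -- OUT_SQLCODE is an assumed output parameter
```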
Because the IF statement is true, the SQLCODE value is reset to 0, and you lose
| the previous SQLCODE value.
| Example: Using GET DIAGNOSTICS to retrieve message text: Suppose that you
| create an SQL procedure, named divide1, that computes the result of the division of
| two integers. You include GET DIAGNOSTICS to return the text of the division error
| message as an output parameter:
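A sketch of such a procedure follows. The parameter names mirror the divide2 example later in this section; the message output length is an assumption:

```sql
CREATE PROCEDURE divide1
  (IN numerator INTEGER, IN denominator INTEGER,
   OUT divide_result INTEGER, OUT divide_error VARCHAR(70))
  LANGUAGE SQL
  BEGIN
    -- On any SQL error, capture the message text of the first condition
    DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
      GET DIAGNOSTICS CONDITION 1 divide_error = MESSAGE_TEXT;
    SET divide_error = '';
    SET divide_result = numerator / denominator;
  END
```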
| Example: Using GET DIAGNOSTICS to retrieve the return status: Suppose that
| you create an SQL procedure, named TESTIT, that calls another SQL procedure,
| named TRYIT. The TRYIT procedure returns a status value, and the TESTIT
| procedure retrieves that value with the RETURN_STATUS item of GET
| DIAGNOSTICS:
| CREATE PROCEDURE TESTIT ()
| LANGUAGE SQL
| A1:BEGIN
| DECLARE RETVAL INTEGER DEFAULT 0;
| ...
| CALL TRYIT;
| GET DIAGNOSTICS RETVAL = RETURN_STATUS;
| IF RETVAL <> 0 THEN
| ...
| LEAVE A1;
| ELSE
| ...
| END IF;
| END A1
| Example: Using SIGNAL to set message text: Suppose that you have an SQL
| procedure for an order system that signals an application error when a customer
| number is not known to the application. The ORDERS table has a foreign key to
| the CUSTOMERS table, which requires that the CUSTNO exist in the
| CUSTOMERS table before an order can be inserted:
| CREATE PROCEDURE submit_order
| (IN ONUM INTEGER, IN PNUM INTEGER,
| IN CNUM INTEGER, IN QNUM INTEGER)
| LANGUAGE SQL
| MODIFIES SQL DATA
| BEGIN
| DECLARE EXIT HANDLER FOR SQLSTATE VALUE ’23503’
| SIGNAL SQLSTATE ’75002’
| SET MESSAGE_TEXT = ’Customer number is not known’;
| INSERT INTO ORDERS (ORDERNO, PARTNO, CUSTNO, QUANTITY)
| VALUES (ONUM, PNUM, CNUM, QNUM);
| END
| In this example, the SIGNAL statement is in the handler. However, you can use the
| SIGNAL statement to invoke a handler when a condition occurs that will result in an
| error; see the example in “Using the RESIGNAL statement in a handler.”
| Example: Using RESIGNAL to set an SQLSTATE value: Suppose that you create
| an SQL procedure, named divide2, that computes the result of the division of two
| integers. You include SIGNAL to invoke the handler with an overflow condition that
| is caused by a zero divisor, and you include RESIGNAL to set a different
| SQLSTATE value for that overflow condition:
| CREATE PROCEDURE divide2
| (IN numerator INTEGER, IN denominator INTEGER,
| OUT divide_result INTEGER)
| LANGUAGE SQL
| BEGIN
| DECLARE overflow CONDITION FOR SQLSTATE ’22003’;
| DECLARE CONTINUE HANDLER FOR overflow
| RESIGNAL SQLSTATE ’22375’;
| IF denominator = 0 THEN
| SIGNAL overflow;
| ELSE
| SET divide_result = numerator / denominator;
| END IF;
| END
If any SQL statement in the procedure body receives a negative SQLCODE, the
SQLEXCEPTION handler receives control. This handler sets output parameter
DEPTSALARY to NULL and ends execution of the SQL procedure. When this
handler is invoked, the SQLCODE and SQLSTATE are set to 0.
CREATE PROCEDURE RETURNDEPTSALARY
(IN DEPTNUMBER CHAR(3),
OUT DEPTSALARY DECIMAL(15,2),
OUT DEPTBONUSCNT INT)
LANGUAGE SQL
READS SQL DATA
P1: BEGIN
DECLARE EMPLOYEE_SALARY DECIMAL(9,2);
DECLARE EMPLOYEE_BONUS DECIMAL(9,2);
DECLARE TOTAL_SALARY DECIMAL(15,2) DEFAULT 0;
DECLARE BONUS_CNT INT DEFAULT 0;
DECLARE END_TABLE INT DEFAULT 0;
DECLARE C1 CURSOR FOR
SELECT SALARY, BONUS FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = DEPTNUMBER;
DECLARE CONTINUE HANDLER FOR NOT FOUND
SET END_TABLE = 1;
DECLARE EXIT HANDLER FOR SQLEXCEPTION
SET DEPTSALARY = NULL;
OPEN C1;
FETCH C1 INTO EMPLOYEE_SALARY, EMPLOYEE_BONUS;
WHILE END_TABLE = 0 DO
SET TOTAL_SALARY = TOTAL_SALARY + EMPLOYEE_SALARY + EMPLOYEE_BONUS;
IF EMPLOYEE_BONUS > 0 THEN
SET BONUS_CNT = BONUS_CNT + 1;
END IF;
FETCH C1 INTO EMPLOYEE_SALARY, EMPLOYEE_BONUS;
END WHILE;
CLOSE C1;
SET DEPTSALARY = TOTAL_SALARY;
SET DEPTBONUSCNT = BONUS_CNT;
END P1
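A client might call this procedure as follows; the host variable names are assumptions:

```sql
EXEC SQL CALL RETURNDEPTSALARY ('D11', :DEPTSAL, :BONUSCNT);
```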
The three methods available for preparing an SQL procedure to run are:
v Using IBM DB2 Development Center, which runs on Windows NT, Windows 95,
Windows 98, Windows 2000, and AIX.
v Using the DB2 UDB for z/OS SQL procedure processor. See “Using the DB2
UDB for z/OS SQL procedure processor to prepare an SQL procedure” on page
610.
v Using JCL. See “Using JCL to prepare an SQL procedure” on page 619.
To run an SQL procedure, you must call it from a client program, using the SQL
CALL statement. See the description of the CALL statement in Chapter 2 of DB2
SQL Reference for more information.
Environment for calling and running DSNTPSMP: You can invoke DSNTPSMP
only through an SQL CALL statement in an application program or through IBM
DB2 Development Center.
Before you can run DSNTPSMP, you need to perform the following steps to set up
the DSNTPSMP environment:
1. Install DB2 UDB for z/OS REXX Language Support feature.
Contact your IBM service representative for more information.
2. If you plan to call DSNTPSMP directly, write and prepare an application program
that executes an SQL CALL statement for DSNTPSMP.
See “Invoking DSNTPSMP in an application program” on page 613 for more
information.
If you plan to invoke DSNTPSMP through the IBM DB2 Development Center,
see the following URL for information about installing and using the IBM DB2
Development Center.
http://www.ibm.com/software/data/db2/os390/spb
3. Set up a WLM environment in which to run DSNTPSMP. See Part 5 (Volume 2)
of DB2 Administration Guide for general information about setting up WLM
application environments for stored procedures and “Setting up a WLM
application environment for DSNTPSMP” for specific information for
DSNTPSMP.
| Figure 182 on page 611 shows sample JCL for a startup procedure for the address
| space in which DSNTPSMP runs.
|
Figure 182. Startup procedure for a WLM address space in which DSNTPSMP runs
| In addition, the authorizations must include any privileges that are required for the
| SQL statements that are contained within the SQL procedure body. These
| privileges must be associated with the authorization ID that is specified in the
| OWNER bind option. The default owner is the user who invokes DSNTPSMP.
| Figure 183 and Figure 184 show the syntax of invoking DSNTPSMP through the
| SQL CALL statement:
| The following input parameters are required for the BUILD function:
| SQL-procedure name
| SQL-procedure-source or source-data-set-name
| If you choose the BUILD function, and an SQL procedure with name
| SQL-procedure-name already exists, DSNTPSMP issues an error message
| and terminates.
| BUILD_DEBUG
| Creates the following objects for an SQL procedure and includes the
| preparation necessary to debug the SQL procedure with the SQL
| Debugger:
| v A DBRM, in the data set that DD name SQLDBRM points to
| v A load module, in the data set that DD name SQLLMOD points to
| v The C language source code for the SQL procedure, in the data set that
| DD name SQLCSRC points to
| v The stored procedure package
| v The stored procedure definition
| Result set that DSNTPSMP returns: DSNTPSMP returns one result set that
| contains messages and listings. You can write your client program to retrieve
| information from this result set. This technique is shown in “Writing a DB2 UDB for
| z/OS client program or SQL procedure to receive result sets” on page 648.
Rows in the message result set are ordered by processing step, ddname, and
| sequence number.
| If you want to recreate an existing SQL procedure for debugging with the SQL
| Debugger, use the following CALL statement, which includes the REBUILD_DEBUG
| function:
| EXEC SQL CALL SYSPROC.DSNTPSMP(’REBUILD_DEBUG’,’MYSCHEMA.SQLPROC’,’’,
| ’VALIDATE(BIND)’,
| ’SOURCE,LIST,LONGNAME,RENT’,
| ’SOURCE,XREF,STDSQL(NO)’,
| ’’,
| ’AMODE=31,RMODE=ANY,MAP,RENT’,
| ’’,’DSN810.SDSNSAMP(PROCSRC)’,’’,’’,
| :returnval);
where :EMP, :PRJ, :ACT, :EMT, :EMS, :EME, :TYPE, and :CODE are host variables
that you have declared earlier in your application program. Your CALL statement
might vary from the preceding statement in the following ways:
v Instead of passing each of the employee and project parameters separately, you
could pass them together as a host structure. For example, assume that you
define a host structure like this:
struct {
char EMP[7];
char PRJ[7];
short ACT;
short EMT;
char EMS[11];
char EME[11];
} empstruc;
The CALL statement then looks like this:
EXEC SQL CALL A (:empstruc, :TYPE, :CODE);
v You might pass indicator variables with the parameters, as in this example:
EXEC SQL CALL A (:EMP :IEMP, :PRJ :IPRJ, :ACT :IACT, :EMT :IEMT,
:EMS :IEMS, :EME :IEME, :TYPE :ITYPE, :CODE :ICODE);
where :IEMP, :IPRJ, :IACT, :IEMT, :IEMS, :IEME, :ITYPE, and :ICODE are
indicator variables for the parameters.
v You might pass integer or character string constants or the null value to the
stored procedure, as in this example:
EXEC SQL CALL A (’000130’, ’IF1000’, 90, 1.0, NULL, ’1982-10-01’,
:TYPE, :CODE);
v You might use a host variable for the name of the stored procedure:
EXEC SQL CALL :procnm (:EMP, :PRJ, :ACT, :EMT, :EMS, :EME,
:TYPE, :CODE);
Assume that the stored procedure name is A. The host variable procnm is a
character variable of length 255 or less that contains the value ’A’. You should
use this technique if you do not know in advance the name of the stored
procedure, but you do know the parameter list convention.
v If you prefer to pass your parameters in a single structure, rather than as
separate host variables, you might use this form:
EXEC SQL CALL A USING DESCRIPTOR :sqlda;
sqlda is the name of an SQLDA.
One advantage of using this form is that you can change the encoding scheme
of the stored procedure parameter values. For example, if the subsystem on
which the stored procedure runs has an EBCDIC encoding scheme, and you
want to retrieve data in ASCII CCSID 437, you can specify the desired CCSIDs
for the output parameters in the SQLVAR fields of the SQLDA.
This technique for overriding the CCSIDs of parameters is the same as the
technique for overriding the CCSIDs of variables, which is described in
“Changing the CCSID for retrieved data” on page 561.
Each of the preceding CALL statement examples uses an SQLDA. If you do not
explicitly provide an SQLDA, the precompiler generates the SQLDA based on the
variables in the parameter list.
The authorizations you need depend on whether the form of the CALL statement is
CALL literal or CALL :host-variable.
For more information, see the description of the CALL statement in Chapter 5 of
DB2 SQL Reference.
Linkage conventions
When an application executes the CALL statement, DB2 builds a parameter list for
the stored procedure, using the parameters and values provided in the statement.
DB2 obtains information about parameters from the stored procedure definition you
create when you execute CREATE PROCEDURE. Parameters are defined as one
of these types:
IN Input-only parameters, which provide values to the stored
procedure
OUT Output-only parameters, which return values from the stored
procedure to the calling program
INOUT Input/output parameters, which provide values to or return values
from the stored procedure.
Initializing output parameters: For a stored procedure that runs locally, you do not
need to initialize the values of output parameters before you call the stored
procedure. However, when you call a stored procedure at a remote location, the
local DB2 cannot determine whether the parameters are input (IN) or output (OUT
or INOUT) parameters. Therefore, you must initialize the values of all output
parameters before you call a stored procedure at a remote location.
It is recommended that you initialize the length of LOB output parameters to zero.
Doing so can improve your performance.
DB2 supports three parameter list conventions. DB2 chooses the parameter list
convention based on the value of the PARAMETER STYLE parameter in the stored
| procedure definition: GENERAL, GENERAL WITH NULLS, or SQL.
v Use GENERAL when you do not want the calling program to pass null values for
input parameters (IN or INOUT) to the stored procedure. The stored procedure
must contain a variable declaration for each parameter passed in the CALL
statement.
Figure 185 shows the structure of the parameter list for PARAMETER STYLE
GENERAL.
v Use GENERAL WITH NULLS to allow the calling program to supply a null value
for any parameter passed to the stored procedure. For the GENERAL WITH
NULLS linkage convention, the stored procedure must do the following tasks:
– Declare a variable for each parameter passed in the CALL statement.
– Declare a null indicator structure containing an indicator variable for each
parameter.
– On entry, examine all indicator variables associated with input parameters to
determine which parameters contain null values.
– On exit, assign values to all indicator variables associated with output
variables. An indicator variable for an output variable that returns a null value
to the caller must be assigned a negative number. Otherwise, the indicator
variable must be assigned the value 0.
In the CALL statement, follow each parameter with its indicator variable, using
one of the following forms:
host-variable :indicator-variable
or
host-variable INDICATOR :indicator-variable.
Figure 186. Parameter convention GENERAL WITH NULLS for a stored procedure
| v Like GENERAL WITH NULLS, option SQL lets you supply a null value for any
parameter that is passed to the stored procedure. In addition, DB2 passes input
and output parameters to the stored procedure that contain this information:
| – The SQLSTATE that is to be returned to DB2. This is a CHAR(5) parameter
| that represents the SQLSTATE that is passed in to the program from the
| database manager. The initial value is set to ‘00000’. Although the SQLSTATE
| is usually not set by the program, it can be set as the result SQLSTATE that is
| used to return an error or a warning. Returned values that start with anything
| other than ‘00’, ‘01’, or ‘02’ are error conditions. Refer to DB2 Messages and
| Codes for more information about the SQLSTATE values that an application
| can generate.
– The qualified name of the stored procedure. This is a VARCHAR(27) value.
– The specific name of the stored procedure. The specific name is a
VARCHAR(18) value that is the same as the unqualified name.
– The SQL diagnostic string that is to be returned to DB2. This is a
VARCHAR(70) value. Use this area to pass descriptive information about an
error or warning to the caller.
| SQL is not a valid linkage convention for a REXX language stored procedure.
Figure 187 on page 626 shows the structure of the parameter list for
| PARAMETER STYLE SQL.
For these examples, assume that a COBOL application has the following parameter
declarations and CALL statement:
************************************************************
* PARAMETERS FOR THE SQL STATEMENT CALL *
************************************************************
01 V1 PIC S9(9) USAGE COMP.
01 V2 PIC X(9).
.
.
EXEC SQL CALL A (:V1, :V2) END-EXEC.
In the CREATE PROCEDURE statement, the parameters are defined like this:
IN V1 INT, OUT V2 CHAR(9)
The following figures show how an assembler, C, COBOL, or PL/I stored
procedure uses the GENERAL linkage convention to receive parameters.
*******************************************************************
* CODE FOR AN ASSEMBLER LANGUAGE STORED PROCEDURE THAT USES *
* THE GENERAL LINKAGE CONVENTION. *
*******************************************************************
A CEEENTRY AUTO=PROGSIZE,MAIN=YES,PLIST=OS
USING PROGAREA,R13
*******************************************************************
* BRING UP THE LANGUAGE ENVIRONMENT. *
*******************************************************************
.
.
.
*******************************************************************
* GET THE PASSED PARAMETER VALUES. THE GENERAL LINKAGE CONVENTION*
* FOLLOWS THE STANDARD ASSEMBLER LINKAGE CONVENTION: *
* ON ENTRY, REGISTER 1 POINTS TO A LIST OF POINTERS TO THE *
* PARAMETERS. *
*******************************************************************
L R7,0(R1) GET POINTER TO V1
MVC LOCV1(4),0(R7) MOVE VALUE INTO LOCAL COPY OF V1
.
.
.
L R7,4(R1) GET POINTER TO V2
MVC 0(9,R7),LOCV2 MOVE A VALUE INTO OUTPUT VAR V2
.
.
.
CEETERM RC=0
*******************************************************************
* VARIABLE DECLARATIONS AND EQUATES *
*******************************************************************
R1 EQU 1 REGISTER 1
R7 EQU 7 REGISTER 7
PPA CEEPPA , CONSTANTS DESCRIBING THE CODE BLOCK
LTORG , PLACE LITERAL POOL HERE
PROGAREA DSECT
ORG *+CEEDSASZ LEAVE SPACE FOR DSA FIXED PART
LOCV1 DS F LOCAL COPY OF PARAMETER V1
LOCV2 DS CL9 LOCAL COPY OF PARAMETER V2
.
.
.
PROGSIZE EQU *-PROGAREA
CEEDSA , MAPPING OF THE DYNAMIC SAVE AREA
CEECAA , MAPPING OF THE COMMON ANCHOR AREA
END A
Figure 189 shows how a stored procedure in the C language receives these
parameters.
Figure 190 shows how a stored procedure in the COBOL language receives these
parameters.
Figure 191 shows how a stored procedure in the PL/I language receives these
parameters.
*PROCESS SYSTEM(MVS);
A: PROC(V1, V2) OPTIONS(MAIN NOEXECOPS REENTRANT);
/***************************************************************/
/* Code for a PL/I language stored procedure that uses the */
/* GENERAL linkage convention. */
/***************************************************************/
/***************************************************************/
/* Indicate on the PROCEDURE statement that two parameters */
/* were passed by the SQL statement CALL. Then declare the */
/* parameters in the following section. */
/***************************************************************/
DCL V1 BIN FIXED(31),
V2 CHAR(9);
.
.
.
V2 = ’123456789’; /* Assign a value to output variable V2 */
For these examples, assume that a C application has the following parameter
declarations and CALL statement:
/************************************************************/
/* Parameters for the SQL statement CALL */
/************************************************************/
long int v1;
char v2[10]; /* Allow an extra byte for */
/* the null terminator */
/************************************************************/
/* Indicator structure */
/************************************************************/
struct indicators {
short int ind1;
short int ind2;
} indstruc;
.
.
.
indstruc.ind1 = 0; /* Remember to initialize the */
/* input parameter’s indicator*/
/* variable before executing */
/* the CALL statement */
EXEC SQL CALL B (:v1 :indstruc.ind1, :v2 :indstruc.ind2);
.
.
.
In the CREATE PROCEDURE statement, the parameters are defined like this:
IN V1 INT, OUT V2 CHAR(9)
The following figures show how an assembler, C, COBOL, or PL/I stored procedure
uses the GENERAL WITH NULLS linkage convention to receive parameters.
Figure 192 shows how a stored procedure in assembler language receives these
parameters.
Figure 193 shows how a stored procedure in the C language receives these
parameters.
Figure 194 shows how a stored procedure in the COBOL language receives these
parameters.
Figure 195 shows how a stored procedure in the PL/I language receives these
parameters.
For these examples, assume that a C application has the following parameter
declarations and CALL statement:
/************************************************************/
/* Parameters for the SQL statement CALL */
/************************************************************/
long int v1;
char v2[10]; /* Allow an extra byte for */
/* the null terminator */
/************************************************************/
/* Indicator variables */
/************************************************************/
short int ind1;
short int ind2;
.
.
.
ind1 = 0; /* Remember to initialize the */
/* input parameter’s indicator*/
/* variable before executing */
/* the CALL statement */
EXEC SQL CALL B (:v1 :ind1, :v2 :ind2);
.
.
.
In the CREATE PROCEDURE statement, the parameters are defined like this:
IN V1 INT, OUT V2 CHAR(9)
The following figures show how an assembler, C, COBOL, or PL/I stored procedure
| uses the SQL linkage convention to receive parameters.
Figure 196 shows how a stored procedure in assembler language receives these
parameters.
*******************************************************************
* CODE FOR AN ASSEMBLER LANGUAGE STORED PROCEDURE THAT USES *
| * THE SQL LINKAGE CONVENTION. *
*******************************************************************
B CEEENTRY AUTO=PROGSIZE,MAIN=YES,PLIST=OS
USING PROGAREA,R13
*******************************************************************
* BRING UP THE LANGUAGE ENVIRONMENT. *
*******************************************************************
.
.
.
*******************************************************************
| * GET THE PASSED PARAMETER VALUES. THE SQL LINKAGE *
* CONVENTION IS AS FOLLOWS: *
* ON ENTRY, REGISTER 1 POINTS TO A LIST OF POINTERS. IF N *
* PARAMETERS ARE PASSED, THERE ARE 2N+4 POINTERS. THE FIRST *
* N POINTERS ARE THE ADDRESSES OF THE N PARAMETERS, JUST AS *
* WITH THE GENERAL LINKAGE CONVENTION. THE NEXT N POINTERS ARE *
* THE ADDRESSES OF THE INDICATOR VARIABLE VALUES. THE LAST *
* 4 POINTERS (5, IF DBINFO IS PASSED) ARE THE ADDRESSES OF *
* INFORMATION ABOUT THE STORED PROCEDURE ENVIRONMENT AND *
* EXECUTION RESULTS. *
*******************************************************************
L R7,0(R1) GET POINTER TO V1
MVC LOCV1(4),0(R7) MOVE VALUE INTO LOCAL COPY OF V1
L R7,8(R1) GET POINTER TO 1ST INDICATOR VARIABLE
MVC LOCI1(2),0(R7) MOVE VALUE INTO LOCAL STORAGE
L R7,20(R1) GET POINTER TO STORED PROCEDURE NAME
MVC LOCSPNM(20),0(R7) MOVE VALUE INTO LOCAL STORAGE
L R7,24(R1) GET POINTER TO DBINFO
MVC LOCDBINF(DBINFLN),0(R7)
* MOVE VALUE INTO LOCAL STORAGE
LH R7,LOCI1 GET INDICATOR VARIABLE FOR V1
LTR R7,R7 CHECK IF IT IS NEGATIVE
BM NULLIN IF SO, V1 IS NULL
.
.
.
L R7,4(R1) GET POINTER TO V2
MVC 0(9,R7),LOCV2 MOVE A VALUE INTO OUTPUT VAR V2
L R7,12(R1) GET POINTER TO INDICATOR VAR 2
MVC 0(2,R7),=H’0’ MOVE ZERO TO V2’S INDICATOR VAR
L R7,16(R1) GET POINTER TO SQLSTATE
MVC 0(5,R7),=CL5’xxxxx’ MOVE xxxxx TO SQLSTATE
.
.
.
CEETERM RC=0
Figure 197 shows how a stored procedure written as a main program in the C
language receives these parameters.
main(argc,argv)
int argc;
char *argv[];
{
int parm1;
short int ind1;
char p_proc[28];
char p_spec[19];
/***************************************************/
/* Assume that the SQL CALL statement included */
/* 3 input/output parameters in the parameter list.*/
/* The argv vector will contain these entries: */
/* argv[0] 1 contains load module */
/* argv[1-3] 3 input/output parms */
/* argv[4-6] 3 null indicators */
/* argv[7] 1 SQLSTATE variable */
/* argv[8] 1 qualified proc name */
/* argv[9] 1 specific proc name */
/* argv[10] 1 diagnostic string */
/* argv[11] + 1 dbinfo */
/* ------ */
/* 12 for the argc variable */
/***************************************************/
if (argc != 12) {
.
.
.
/* We end up here when invoked with wrong number of parms */
}
| Figure 197. An example of SQL linkage for a C stored procedure written as a main program
(Part 1 of 2)
| Figure 197. An example of SQL linkage for a C stored procedure written as a main program
(Part 2 of 2)
strcpy(l_p2,parm2);
l_ind1 = *p_ind1;
l_ind2 = *p_ind2;
strcpy(l_sqlstate,p_sqlstate);
strcpy(l_proc,p_proc);
strcpy(l_spec,p_spec);
strcpy(l_diag,p_diag);
memcpy(&ludf_dbinfo,udf_dbinfo,sizeof(ludf_dbinfo));
.
.
.
}
| Figure 198. An example of SQL linkage for a C stored procedure written as a subprogram
Figure 199 shows how a stored procedure in the COBOL language receives these
parameters.
Figure 200 shows how a stored procedure in the PL/I language receives these
parameters.
*PROCESS SYSTEM(MVS);
MYMAIN: PROC(PARM1, PARM2, ...,
P_IND1, P_IND2, ...,
P_SQLSTATE, P_PROC, P_SPEC, P_DIAG, SP_DBINFO)
OPTIONS(MAIN NOEXECOPS REENTRANT);
This option is not applicable to other platforms, however. If you plan to use a C
stored procedure on other platforms besides z/OS, use conditional compilation, as
shown in Figure 201, to include this option only when you compile on z/OS.
#ifndef WKSTN
#pragma runopts(PLIST(OS))
#endif
For information about specifying PL/I compile-time and run-time options, see IBM
Enterprise PL/I for z/OS and OS/390 Programming Guide.
For example, suppose that a stored procedure that is defined with the GENERAL
linkage convention takes one integer input parameter and one character output
parameter of length 6000. You do not want to pass the 6000 byte storage area to
the stored procedure. A PL/I program containing these statements passes only two
bytes to the stored procedure for the output variable and receives all 6000 bytes
from the stored procedure:
DCL INTVAR BIN FIXED(31); /* This is the input variable */
DCL BIGVAR(6000); /* This is the output variable */
DCL I1 BIN FIXED(15); /* This is an indicator variable */
.
.
.
I1 = -1; /* Setting I1 to -1 causes only */
/* a two byte area representing */
/* I1 to be passed to the */
/* stored procedure, instead of */
/* the 6000 byte area for BIGVAR*/
EXEC SQL CALL PROCX(:INTVAR, :BIGVAR INDICATOR :I1);
For REXX: See “Calling a stored procedure from a REXX Procedure” on page 654
for information about DB2 data types and corresponding parameter formats.
For languages other than REXX: For all data types except LOBs, ROWIDs, and
locators, see the tables listed in Table 83 for the host data types that are compatible
with the data types in the stored procedure definition.
Table 83. Listing of tables of compatible data types
Language Compatible data types table
Assembler Table 12 on page 138
C Table 14 on page 162
COBOL Table 17 on page 195
PL/I Table 21 on page 226
For LOBs, ROWIDs, and locators, Table 84 shows compatible declarations for the
assembler language.
Table 84. Compatible assembler language declarations for LOBs, ROWIDs, and locators
SQL data type in definition Assembler declaration
TABLE LOCATOR DS FL4
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) If n <= 65535:
var DS 0FL4
var_length DS FL4
var_data DS CLn
If n > 65535:
var DS 0FL4
var_length DS FL4
var_data DS CL65535
ORG var_data+(n-65535)
CLOB(n) If n <= 65535:
var DS 0FL4
var_length DS FL4
var_data DS CLn
If n > 65535:
var DS 0FL4
var_length DS FL4
var_data DS CL65535
ORG var_data+(n-65535)
DBCLOB(n) If m (=2*n) <= 65534:
var DS 0FL4
var_length DS FL4
var_data DS CLm
If m > 65534:
var DS 0FL4
var_length DS FL4
var_data DS CL65534
ORG var_data+(m-65534)
ROWID DS HL2,CL40
For LOBs, ROWIDs, and locators, Table 86 shows compatible declarations for
COBOL.
Table 86. Compatible COBOL declarations for LOBs, ROWIDs, and locators
SQL data type in definition COBOL declaration
TABLE LOCATOR 01 var PIC S9(9) USAGE IS BINARY.
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) If n <= 32767:
01 var.
49 var-LENGTH PIC 9(9)
USAGE COMP.
49 var-DATA PIC X(n).
If n > 32767:
01 var.
02 var-LENGTH PIC S9(9)
USAGE COMP.
02 var-DATA.
49 FILLER
PIC X(32767).
49 FILLER
PIC X(32767).
.
.
.
49 FILLER
PIC X(mod(n,32767)).
CLOB(n) If n <= 32767:
01 var.
49 var-LENGTH PIC 9(9)
USAGE COMP.
49 var-DATA PIC X(n).
If n > 32767:
01 var.
02 var-LENGTH PIC S9(9)
USAGE COMP.
02 var-DATA.
49 FILLER
PIC X(32767).
49 FILLER
PIC X(32767).
.
.
.
49 FILLER
PIC X(mod(n,32767)).
DBCLOB(n) If n <= 32767:
01 var.
49 var-LENGTH PIC 9(9)
USAGE COMP.
49 var-DATA PIC G(n)
USAGE DISPLAY-1.
If n > 32767:
01 var.
02 var-LENGTH PIC S9(9)
USAGE COMP.
02 var-DATA.
49 FILLER
PIC G(32767)
USAGE DISPLAY-1.
49 FILLER
PIC G(32767)
USAGE DISPLAY-1.
.
.
.
49 FILLER
PIC G(mod(n,32767))
USAGE DISPLAY-1.
ROWID 01 var.
49 var-LEN PIC 9(4)
USAGE COMP.
49 var-DATA PIC X(40).
For LOBs, ROWIDs, and locators, Table 87 shows compatible declarations for PL/I.
Table 87. Compatible PL/I declarations for LOBs, ROWIDs, and locators
SQL data type in definition PL/I
TABLE LOCATOR BIN FIXED(31)
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) If n <= 32767:
01 var,
03 var_LENGTH
BIN FIXED(31),
03 var_DATA
CHAR(n);
If n > 32767:
01 var,
02 var_LENGTH
BIN FIXED(31),
02 var_DATA,
03 var_DATA1(n)
CHAR(32767),
03 var_DATA2
CHAR(mod(n,32767));
CLOB(n) If n <= 32767:
01 var,
03 var_LENGTH
BIN FIXED(31),
03 var_DATA
CHAR(n);
If n > 32767:
01 var,
02 var_LENGTH
BIN FIXED(31),
02 var_DATA,
03 var_DATA1(n)
CHAR(32767),
03 var_DATA2
CHAR(mod(n,32767));
DBCLOB(n) If n <= 16383:
01 var,
03 var_LENGTH
BIN FIXED(31),
03 var_DATA
GRAPHIC(n);
If n > 16383:
01 var,
02 var_LENGTH
BIN FIXED(31),
02 var_DATA,
03 var_DATA1(n)
GRAPHIC(16383),
03 var_DATA2
GRAPHIC(mod(n,16383));
ROWID CHAR(40) VAR
You do not need to connect to the remote location when you execute these
statements:
v DESCRIBE PROCEDURE
v ASSOCIATE LOCATORS
v ALLOCATE CURSOR
v DESCRIBE CURSOR
v FETCH
v CLOSE
For the syntax of result set locators in each host language, see Chapter 9,
“Embedding SQL statements in host languages,” on page 129. For the syntax of
result set locators in SQL procedures, see Chapter 6 of DB2 SQL Reference. For
the syntax of the ASSOCIATE LOCATORS, DESCRIBE PROCEDURE, ALLOCATE
CURSOR, and DESCRIBE CURSOR statements, see Chapter 5 of DB2 SQL
Reference.
Figure 202 on page 651 and Figure 203 on page 652 show C language code that
accomplishes each of these steps. Coding for other languages is similar. For a
more complete example of a C language program that receives result sets, see
“Examples of using stored procedures” on page 975.
Figure 202 on page 651 demonstrates how you receive result sets when you know
how many result sets are returned and what is in each result set.
Figure 203 on page 652 demonstrates how you receive result sets when you do not
know how many result sets are returned or what is in each result set.
/*************************************************************/
/* Call stored procedure P2. */
/* Check for SQLCODE +466, which indicates that result sets */
/* were returned. */
/*************************************************************/
EXEC SQL CALL P2(:parm1, :parm2, ...);
if(SQLCODE==+466)
{
/*************************************************************/
/* Determine how many result sets P2 returned, using the */
/* statement DESCRIBE PROCEDURE. :proc_da is an SQLDA */
/* with enough storage to accommodate up to three SQLVAR */
/* entries. */
/*************************************************************/
EXEC SQL DESCRIBE PROCEDURE P2 INTO :proc_da;
.
.
.
/*************************************************************/
/* Now that you know how many result sets were returned, */
/* establish a link between each result set and its */
/* locator using the ASSOCIATE LOCATORS. For this example, */
/* we assume that three result sets are returned. */
/*************************************************************/
EXEC SQL ASSOCIATE LOCATORS (:loc1, :loc2, :loc3) WITH PROCEDURE P2;
.
.
.
/*************************************************************/
/* Associate a cursor with each result set. */
/*************************************************************/
EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1;
EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :loc2;
EXEC SQL ALLOCATE C3 CURSOR FOR RESULT SET :loc3;
Figure 204 on page 654 demonstrates how you can use an SQL procedure to
receive result sets.
Figure 205 on page 656 demonstrates how a REXX procedure calls the stored
procedure in Figure 179 on page 595. The REXX procedure performs the following
actions:
v Connects to the DB2 subsystem that was specified by the REXX procedure
invoker.
v Calls the stored procedure to execute a DB2 command that was specified by the
REXX procedure invoker.
v Retrieves rows from a result set that contains the command output messages.
Figure 205. Example of a REXX procedure that calls a stored procedure (Part 1 of 3)
Figure 205. Example of a REXX procedure that calls a stored procedure (Part 2 of 3)
Figure 205. Example of a REXX procedure that calls a stored procedure (Part 3 of 3)
Before you can call a stored procedure from your embedded SQL application, you
must bind a package for the client program on the remote system. You can use the
remote DRDA bind capability on your DRDA client system to bind the package to
the remote system.
If you have packages that contain SQL CALL statements that you bound before
DB2 Version 6, you can get better performance from those packages if you rebind
them in DB2 Version 6 or later. Rebinding lets DB2 obtain some information from
the catalog at bind time that it obtained at run time before Version 6. Therefore,
after you rebind your packages, they run more efficiently because DB2 can do
fewer catalog searches at run time.
For an ODBC or CLI application, the DB2 packages and plan associated with the
ODBC driver must be bound to DB2 before you can run your application. For
A z/OS client can bind the DBRM to a remote server by specifying a location name
on the command BIND PACKAGE. For example, suppose you want a client
program to call a stored procedure at location LOCA. You precompile the program
to produce DBRM A. Then you can use the following command to bind DBRM A
into package collection COLLA at location LOCA:
BIND PACKAGE (LOCA.COLLA) MEMBER(A)
The plan for the package resides only at the client system.
DB2 runs stored procedures under the DB2 thread of the calling application, making
the stored procedures part of the caller’s unit of work.
If both the client and server application environments support two-phase commit,
the coordinator controls updates between the application, the server, and the stored
procedures. If either side does not support two-phase commit, updates will fail.
DB2 uses schema names from the CURRENT PATH special register for CALL
statements of the following form:
CALL host-variable
2. When DB2 finds a stored procedure definition, DB2 executes that stored
procedure if the following conditions are true:
v The caller is authorized to execute the stored procedure.
v The stored procedure has the same number of parameters as in the CALL
statement.
If either condition is not true, DB2 continues to go through the list of schemas
until it finds a stored procedure that meets both conditions or reaches the end of
the list.
3. If DB2 cannot find a suitable stored procedure, it returns an SQL error code for
the CALL statement.
For example, suppose that you want to write one program, PROGY, that calls one
of two versions of a stored procedure named PROCX. The load module for both
stored procedures is named SUMMOD. Each version of SUMMOD is in a different
load library. The stored procedures run in different WLM environments, and the
startup JCL for each WLM environment includes a STEPLIB concatenation that
specifies the correct load library for the stored procedure module.
First, define the two stored procedures in different schemas and different WLM
environments:
CREATE PROCEDURE TEST.PROCX(IN V1 INTEGER, OUT V2 CHAR(9))
LANGUAGE C
EXTERNAL NAME SUMMOD
WLM ENVIRONMENT TESTENV;
CREATE PROCEDURE PROD.PROCX(IN V1 INTEGER, OUT V2 CHAR(9))
LANGUAGE C
EXTERNAL NAME SUMMOD
WLM ENVIRONMENT PRODENV;
When you write CALL statements for PROCX in program PROGY, use the
unqualified form of the stored procedure name:
CALL PROCX(V1,V2);
Bind two plans for PROGY. In one BIND statement, specify PATH(TEST). In the
other BIND statement, specify PATH(PROD).
To call TEST.PROCX, execute PROGY with the plan that you bound with
PATH(TEST). To call PROD.PROCX, execute PROGY with the plan that you bound
with PATH(PROD).
To maximize the number of stored procedures that can run concurrently, use the
following guidelines:
v Set REGION size to 0 in startup procedures for the stored procedures address
spaces to obtain the largest possible amount of storage below the 16MB line.
v Limit storage required by application programs below the 16MB line by:
– Link editing programs above the line with AMODE(31) and RMODE(ANY)
attributes
– Using the RENT and DATA(31) compiler options for COBOL programs.
v Limit storage required by IBM Language Environment by using these run-time
options:
– HEAP(,,ANY) to allocate program heap storage above the 16MB line
– STACK(,,ANY,) to allocate program stack storage above the 16MB line
– STORAGE(,,,4K) to reduce reserve storage area below the line to 4KB
– BELOWHEAP(4K,,) to reduce the heap storage below the line to 4KB
– LIBSTACK(4K,,) to reduce the library stack below the line to 4KB
– ALL31(ON) to indicate all programs contained in the stored procedure run with
AMODE(31) and RMODE(ANY).
You can list these options in the RUN OPTIONS parameter of the CREATE
PROCEDURE or ALTER PROCEDURE statement, if they are not Language
Environment installation defaults. For example, the RUN OPTIONS parameter
could specify:
H(,,ANY),STAC(,,ANY,),STO(,,,4K),BE(4K,,),LIBS(4K,,),ALL31(ON)
For more information about creating a stored procedure definition, see
“Defining your stored procedure to DB2” on page 575.
v If you use WLM-established address spaces for your stored procedures, assign
stored procedures to WLM application environments according to guidelines that
are described in Part 5 (Volume 2) of DB2 Administration Guide.
| Multiple instances of a stored procedure can be invoked only if both the client and
| the server are DB2 Version 8 new-function mode or later.
| Storage shortages can occur if too many instances of a stored procedure are
| active or if too many cursors that return result sets are open at the same time.
| To minimize these storage shortages, two subsystem parameters control the
| maximum number of stored procedure instances and the maximum number of
| open cursors.
| The maximum number of stored procedure instances and the maximum number of
| open cursors is set on installation panel DSNTIPX. See Part 2 of DB2 Installation
| Guide for more information about setting the maximum number of stored procedure
| instances and the maximum number of open cursors.
Consider the following when you develop stored procedures that access non-DB2
resources:
v When a stored procedure runs in a DB2-established stored procedures address
space, DB2 does not coordinate commit and rollback activity on recoverable
resources such as IMS or CICS transactions, and MQI messages. DB2 has no
knowledge of, and therefore cannot control, the dependency between a stored
procedure and a recoverable resource.
v When a stored procedure runs in a WLM-established stored procedures address
space, the stored procedure uses the Recoverable Resource Manager Services
for commitment control. When DB2 commits or rolls back work in this
environment, DB2 coordinates all updates made to recoverable resources by
other RRS compliant resource managers in the z/OS system.
v When a stored procedure runs in a DB2-established stored procedures address
space, z/OS is not aware that the stored procedures address space is processing
work for DB2. One consequence of this is that z/OS accesses RACF-protected
resources using the user ID associated with the z/OS task (ssnmSPAS) for
stored procedures, not the user ID of the client.
v When a stored procedure runs in a WLM-established stored procedures address
space, DB2 can establish a RACF environment for accessing non-DB2
resources. The authority used when the stored procedure accesses protected
z/OS resources depends on the value of SECURITY in the stored procedure
definition:
– If the value of SECURITY is DB2, the authorization ID associated with the
stored procedures address space is used.
– If the value of SECURITY is USER, the authorization ID under which the
CALL statement is executed is used.
– If the value of SECURITY is DEFINER, the authorization ID under which the
CREATE PROCEDURE statement was executed is used.
v Not all non-DB2 resources can tolerate concurrent access by multiple TCBs in
the same address space. You might need to serialize the access within your
application.
IMS
If your system is not running a release of IMS that uses z/OS RRS, you can
use one of the following methods to access DL/I data from your stored
procedure:
v Use the CICS EXCI interface to run a CICS transaction synchronously. That
CICS transaction can, in turn, access DL/I data.
v Invoke IMS transactions asynchronously using the MQI.
v Use APPC through the CPI Communications application programming
interface.
After you write your COBOL stored procedure and set up the WLM environment,
follow these steps to test the stored procedure with the Debug Tool:
1. When you compile the stored procedure, specify the TEST and SOURCE
options.
Ensure that the source listing is stored in a permanent data set. VisualAge
COBOL displays the source listing during the debug session.
2. When you define the stored procedure, include run-time option TEST with the
suboption VADTCPIP&ipaddr in your RUN OPTIONS argument.
VADTCPIP& tells the Debug Tool that it is interfacing with a workstation that
runs VisualAge COBOL and is configured for TCP/IP communication with your
z/OS system. ipaddr is the IP address of the workstation on which you display
your debug information. For example, the RUN OPTIONS value in the following
stored procedure definition indicates that debug information should go to the
workstation with IP address 9.63.51.17:
CREATE PROCEDURE WLMCOB
(IN INTEGER, INOUT VARCHAR(3000), INOUT INTEGER)
MODIFIES SQL DATA
LANGUAGE COBOL EXTERNAL
PROGRAM TYPE MAIN
WLM ENVIRONMENT WLMENV1
RUN OPTIONS ’POSIX(ON),TEST(,,,VADTCPIP&9.63.51.17:*)’
3. In the JCL startup procedure for WLM-established stored procedures address
space, add the data set name of the Debug Tool load library to the STEPLIB
concatenation. For example, suppose that ENV1PROC is the JCL procedure for
application environment WLMENV1. The modified JCL for ENV1PROC might
look like this:
//DSNWLM PROC RGN=0K,APPLENV=WLMENV1,DB2SSN=DSN,NUMTCB=8
//IEFPROC EXEC PGM=DSNX9WLM,REGION=&RGN,TIME=NOLIMIT,
// PARM=’&DB2SSN,&NUMTCB,&APPLENV’
//STEPLIB DD DISP=SHR,DSN=DSN810.RUNLIB.LOAD
// DD DISP=SHR,DSN=CEE.SCEERUN
// DD DISP=SHR,DSN=DSN810.SDSNLOAD
// DD DISP=SHR,DSN=EQAW.SEQAMOD <== DEBUG TOOL
4. On the workstation, start the VisualAge Remote Debugger daemon.
This daemon waits for incoming requests from TCP/IP.
5. Call the stored procedure.
When the stored procedure starts, a window that contains the debug session is
displayed on the workstation. You can then execute Debug Tool commands to
debug the stored procedure.
After you write your C++ stored procedure or SQL procedure and set up the WLM
environment, follow these steps to test the stored procedure with the Distributed
Debugger feature of the C/C++ Productivity Tools for z/OS and the Debug Tool:
1. When you define the stored procedure, include run-time option TEST with the
suboption VADTCPIP&ipaddr in your RUN OPTIONS argument.
VADTCPIP& tells the Debug Tool that it is interfacing with a workstation that
runs VisualAge C++ and is configured for TCP/IP communication with your z/OS
system. ipaddr is the IP address of the workstation on which you display your
debug information. For example, this RUN OPTIONS value in a stored
procedure definition indicates that debug information should go to the
workstation with IP address 9.63.51.17:
RUN OPTIONS ’POSIX(ON),TEST(,,,VADTCPIP&9.63.51.17:*)’
2. Precompile the stored procedure.
Ensure that the modified source program that is the output from the precompile
step is in a permanent, catalogued data set. For an SQL procedure, the
modified C source program that is the output from the second precompile step
must be in a permanent, catalogued data set.
3. Compile the output from the precompile step. Specify the TEST, SOURCE, and
OPT(0) compiler options.
4. In the JCL startup procedure for the stored procedures address space, add the
data set name of the Debug Tool load library to the STEPLIB concatenation. For
example, suppose that ENV1PROC is the JCL procedure for application
environment WLMENV1. The modified JCL for ENV1PROC might look like this:
//DSNWLM PROC RGN=0K,APPLENV=WLMENV1,DB2SSN=DSN,NUMTCB=8
//IEFPROC EXEC PGM=DSNX9WLM,REGION=&RGN,TIME=NOLIMIT,
// PARM=’&DB2SSN,&NUMTCB,&APPLENV’
//STEPLIB DD DISP=SHR,DSN=DSN810.RUNLIB.LOAD
// DD DISP=SHR,DSN=CEE.SCEERUN
// DD DISP=SHR,DSN=DSN810.SDSNLOAD
// DD DISP=SHR,DSN=EQAW.SEQAMOD <== DEBUG TOOL
5. On the workstation, start the Distributed Debugger daemon.
This daemon waits for incoming requests from TCP/IP.
6. Call the stored procedure.
When the stored procedure starts, a window that contains the debug session is
displayed on the workstation. You can then execute Debug Tool commands to
debug the stored procedure.
Debugging with Debug Tool for z/OS interactively and in batch mode
You can use the Debug Tool for z/OS to test z/OS stored procedures written in any
of the supported languages either interactively or in batch mode.
Using Debug Tool interactively: To test a stored procedure interactively using the
Debug Tool, you must have the Debug Tool installed on the z/OS system where the
stored procedure runs. To debug your stored procedure using the Debug Tool, do
the following:
v Compile the stored procedure with option TEST. This places information in the
program that the Debug Tool uses during a debugging session.
v Invoke the Debug Tool. One way to do that is to specify the Language
Environment run-time option TEST. The TEST option controls when and how the
Debug Tool is invoked.
v If you want to save the output from your debugging session, issue the following
command:
SET LOG ON FILE dbgtool.log;
This command saves a log of your debugging session to a file on the workstation
called dbgtool.log. This should be the first command that you enter from the
terminal or include in your commands file.
Using Debug Tool in batch mode: To test your stored procedure in batch mode,
you must have the Debug Tool installed on the z/OS system where the stored
procedure runs. To debug your stored procedure in batch mode using the Debug
Tool, do the following:
v Compile the stored procedure with option TEST, if you plan to use the Language
Environment run-time option TEST to invoke the Debug Tool. This places
information in the program that the Debug Tool uses during a debugging session.
v Allocate a log data set to receive the output from the Debug Tool. Put a DD
statement for the log data set in the start-up procedure for the stored procedures
address space.
v Put the commands that you want the Debug Tool to execute in a data set. Add a
DD statement for that data set in the start-up procedure for the stored
procedures address space. To define the commands data set to the Debug Tool,
specify the commands data set name or DD name in the TEST run-time option.
For example, to specify that the Debug Tool use the commands that are in the
data set that is associated with the DD name TESTDD, include the following
parameter in the TEST option:
TEST(ALL,TESTDD,PROMPT,*)
The first command in the commands data set should be:
SET LOG ON FILE ddname;
This command directs output from your debugging session to the log data set
that you defined in the previous step.
For more information about using the Debug Tool for z/OS, see Debug Tool User's
Guide and Reference.
DB2 discards the debugging information if the application executes the ROLLBACK
statement. To prevent the loss of the debugging data, code the calling application
so that it retrieves the diagnostic data before executing the ROLLBACK statement.
If you still have performance problems after you have tried the suggestions in these
sections, you can use other, more risky techniques. See “Special techniques to
influence access path selection” on page 713 for information.
| Declared lengths of host variables: For string comparisons other than equal
| comparisons, ensure that the declared length of a host variable is less than or
| equal to the length attribute of the table column that it is compared to. For
| languages in which character strings are nul-terminated, the string length can be
| less than or equal to the column length plus 1. If the declared length of the host
| variable is greater than the column length, the predicate is stage 1 but cannot be a
| matching predicate for an index scan.
| Because this is a C language example, the host variable length could be 1 byte
| greater than the column length:
| char string_hv[13];
Assuming that subquery 1 and subquery 2 are the same type of subquery (either
correlated or noncorrelated) and the subqueries are stage 2, DB2 evaluates the
subquery predicates in the order they appear in the WHERE clause. Subquery 1
rejects 10% of the total rows, and subquery 2 rejects 80% of the total rows.
If you are unsure, run EXPLAIN on the query with both a correlated and a
noncorrelated subquery. By examining the EXPLAIN output and understanding your
data distribution and SQL statements, you should be able to determine which form
is more efficient.
This general principle can apply to all types of predicates. However, because
subquery predicates can potentially be thousands of times more processor- and
I/O-intensive than all other predicates, the order of subquery predicates is
particularly important.
Refer to “DB2 predicate manipulation” on page 694 to see in what order DB2 will
evaluate predicates and when you can control the evaluation order.
See “Using host variables efficiently” on page 698 for more information.
DB2 might not determine the best access path when your queries include correlated
columns. If you think you have a problem with column correlation, see “Column
correlation” on page 691 for ideas on what to do about it.
If you rewrite the predicate in the following way, DB2 can evaluate it more
efficiently:
WHERE SALARY > 50000/(1 + :hv1)
In the second form, the column is by itself on one side of the operator, and all the
other values are on the other side of the operator. The expression on the right is
called a noncolumn expression. DB2 can evaluate many predicates with noncolumn
expressions at an earlier stage of processing called stage 1, so the queries take
less time to run.
| Materialized query tables are user-created tables. Depending on how the tables are
| defined, they are user-maintained or system-maintained. If you have set subsystem
| parameters or an application sets special registers to tell DB2 to use materialized
| query tables, when DB2 executes a dynamic query, DB2 uses the contents of
| applicable materialized query tables if DB2 finds a performance advantage to doing
| so.
| For information about materialized query tables, see Part 5 (Volume 2) of DB2
| Administration Guide.
Example: The following query has three predicates: an equal predicate on C1, a
BETWEEN predicate on C2, and a LIKE predicate on C3.
SELECT * FROM T1
WHERE C1 = 10 AND
C2 BETWEEN 10 AND 20 AND
C3 NOT LIKE ’A%’
Effect on access paths: This section explains the effect of predicates on access
paths. Because SQL allows you to express the same query in different ways,
knowing how predicates affect path selection helps you write queries that access
data efficiently.
Properties of predicates
Predicates in a HAVING clause are not used when selecting access paths; hence,
in this section the term ’predicate’ means a predicate after WHERE or ON.
There are special considerations for “Predicates in the ON clause” on page 678.
Predicate types
The type of a predicate depends on its operator or syntax. The type determines
what type of processing and filtering occurs when the predicate is evaluated.
Table 90 shows the different predicate types.
Table 90. Definitions and examples of predicate types
Type Definition Example
Subquery Any predicate that includes another C1 IN (SELECT C10 FROM
SELECT statement. TABLE1)
Equal Any predicate that is not a subquery C1=100
predicate and has an equal operator and
no NOT operator. Also included are
predicates of the form C1 IS NULL and C
IS NOT DISTINCT FROM.
Range Any predicate that is not a subquery C1>100
predicate and has an operator in the
following list: >, >=, <, <=, LIKE, or
BETWEEN.
IN-list A predicate of the form column IN (list of C1 IN (5,10,15)
values).
NOT Any predicate that is not a subquery COL1 <> 5 or COL1 NOT
predicate and contains a NOT operator. BETWEEN 10 AND 20
Also included are predicates of the form
C1 IS DISTINCT FROM.
Example: Influence of type on access paths: The following two examples show
how the predicate type can influence DB2’s choice of an access path. In each one,
assume that a unique index I1 (C1) exists on table T1 (C1, C2), and that all values
of C1 are positive integers.
However, the predicate does not eliminate any rows of T1. Therefore, it could be
determined during bind that a table space scan is more efficient than the index
scan.
DB2 chooses the index access in this case because the index is highly selective on
column C1.
Examples: If the employee table has an index on the column LASTNAME, the
following predicate can be a matching predicate:
SELECT * FROM DSN8810.EMP WHERE LASTNAME = ’SMITH’;
| The predicate is not indexable because the length of the column is shorter than
| the length of the constant.
| Example: The following predicate is not stage 1:
| DECCOL>34.5, where DECCOL is defined as DECIMAL(18,2)
| The predicate is not stage 1 because the precision of the decimal column is
| greater than 15.
v Whether DB2 evaluates the predicate before or after a join operation. A predicate
that is evaluated after a join operation is always a stage 2 predicate.
| v Join sequence
| The same predicate might be stage 1 or stage 2, depending on the join
| sequence. Join sequence is the order in which DB2 joins tables when it
| evaluates a query. The join sequence is not necessarily the same as the order in
| which the tables appear in the predicate.
| Example: This predicate might be stage 1 or stage 2:
| T1.C1=T2.C1+1
All indexable predicates are stage 1. The predicate C1 LIKE ’%BC’ is stage 1, but is
not indexable.
In join operations, Boolean term predicates can reject rows at an earlier stage than
can non-Boolean term predicates.
For left and right outer joins, and for inner joins, join predicates in the ON clause
are treated the same as other stage 1 and stage 2 predicates. A stage 2 predicate
in the ON clause is treated as a stage 2 predicate of the inner table.
For full outer join, the ON clause is evaluated during the join operation like a stage
2 predicate.
In an outer join, predicates that are evaluated after the join are stage 2 predicates.
Predicates in a table expression can be evaluated before the join and can therefore
be stage 1 predicates.
Example: In the following statement, the predicate “EDLEVEL > 100” is evaluated
before the full join and is a stage 1 predicate:
SELECT * FROM (SELECT * FROM DSN8810.EMP
WHERE EDLEVEL > 100) AS X FULL JOIN DSN8810.DEPT
ON X.WORKDEPT = DSN8810.DEPT.DEPTNO;
The second set of rules describes the order of predicate evaluation within each of
the stages:
| 1. All equal predicates (including column IN list, where list has only one element,
| or column BETWEEN value1 AND value1) are evaluated.
2. All range predicates and predicates of the form column IS NOT NULL are
evaluated.
3. All other predicate types are evaluated.
After both sets of rules are applied, predicates are evaluated in the order in which
they appear in the query. Because you specify that order, you have some control
over the order of evaluation.
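These two rule sets lend themselves to a small illustration. The following Python sketch is purely illustrative (it is not how DB2 is implemented, and the function name is invented): it applies the within-stage ordering with a stable sort, so predicates of equal priority keep the order in which they appear in the query.

```python
# Illustrative only: order predicates by the within-stage rules.
# Priority 1: equal predicates; priority 2: range and IS NOT NULL;
# priority 3: everything else. sorted() is stable, so ties keep the
# order in which the predicates appear in the query.

def evaluation_order(predicates):
    """predicates: list of (text, kind), kind in
    {'equal', 'range', 'is_not_null', 'other'}."""
    priority = {'equal': 1, 'range': 2, 'is_not_null': 2, 'other': 3}
    return [text for text, kind in
            sorted(predicates, key=lambda p: priority[p[1]])]

query_preds = [
    ("C3 NOT LIKE 'A%'", 'other'),
    ("C2 BETWEEN 10 AND 20", 'range'),
    ("C1 = 10", 'equal'),
]
print(evaluation_order(query_preds))
# ['C1 = 10', 'C2 BETWEEN 10 AND 20', "C3 NOT LIKE 'A%'"]
```
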
By using correlation names, the query treats one table as if it were two
separate tables. Therefore, indexes on columns C1 and C2 are considered for
access.
4. If the subquery has already been evaluated for a given correlation value, then
the subquery might not have to be reevaluated.
5. Not indexable or stage 1 if a field procedure exists on that column.
6. The column on the left side of the join sequence must be in a different table
from any columns on the right side of the join sequence.
7. The tables that contain the columns in expression1 or expression2 must
already have been accessed.
8. The processing for WHERE NOT COL = value is like that for WHERE COL <>
value, and so on.
9. If noncol expr, noncol expr1, or noncol expr2 is a noncolumn expression of
one of these forms, then the predicate is not indexable:
v noncol expr + 0
v noncol expr - 0
v noncol expr * 1
v noncol expr / 1
v noncol expr CONCAT empty string
| 10. COL, COL1, and COL2 can be the same column or different columns. The
| columns are in the same table.
| 11. Any of the following sets of conditions make the predicate stage 2:
| v The left side of the join sequence is DECIMAL(p,s), where p>15, and the
| right side of the join sequence is REAL or FLOAT.
| v The left side of the join sequence is CHAR, VARCHAR, GRAPHIC, or
| VARGRAPHIC, and the right side of the join sequence is DATE, TIME, or
| TIMESTAMP.
| 12. The predicate is stage 1 but not indexable if the left side of the join sequence
| is CHAR or VARCHAR, the right side of the join sequence is GRAPHIC or
| VARGRAPHIC, and the left side of the join sequence is not Unicode mixed.
| 13. If both sides of the comparison are strings, any of the following sets of
| conditions makes the predicate stage 1 but not indexable:
| v The left side of the join sequence is CHAR or VARCHAR, and the right side
| of the join sequence is GRAPHIC or VARGRAPHIC.
| v Both of the following conditions are true:
| – Both sides of the comparison are CHAR or VARCHAR.
| – The length of the left side of the join sequence is less than the length of the
| right side of the join sequence.
| v Both of the following conditions are true:
| – Both sides of the comparison are GRAPHIC or VARGRAPHIC.
The following examples of predicates illustrate the general rules shown in Table 91
on page 680. In each case, assume that there is an index on columns
(C1,C2,C3,C4) of the table and that 0 is the lowest value in each column.
v WHERE C1=5 AND C2=7
Both predicates are stage 1 and the compound predicate is indexable. A
matching index scan could be used with C1 and C2 as matching columns.
v WHERE C1=5 AND C2>7
Example: Suppose that DB2 can determine that column C1 of table T contains only
five distinct values: A, D, Q, W, and X. In the absence of other information, DB2
estimates that one-fifth of the rows have the value D in column C1. Then the
predicate C1=’D’ has the filter factor 0.2 for table T.
How DB2 uses filter factors: Filter factors affect the choice of access paths by
estimating the number of rows qualified by a set of predicates.
Recommendation: Control the first two of those variables when you write a
predicate. Your understanding of how DB2 uses filter factors should help you write
more efficient predicates.
Values of the third variable, statistics on the column, are kept in the DB2 catalog.
You can update many of those values, either by running the utility RUNSTATS or by
executing UPDATE for a catalog table. For information about using RUNSTATS, see
the discussion of maintaining statistics in the catalog in Part 4 (Volume 1) of
DB2 Administration Guide. For information on updating the catalog manually, see
“Updating catalog statistics” on page 723.
If you intend to update the catalog with statistics of your own choice, you should
understand how DB2 uses:
v “Default filter factors for simple predicates”
v “Filter factors for uniform distributions”
v “Interpolation formulas” on page 687
v “Filter factors for all distributions” on page 688
Example: The default filter factor for the predicate C1 = ’D’ is 1/25 (0.04). If the
actual frequency of the value D is not close to 0.04, the default probably does not
lead to an optimal access path.
Table 92. DB2 default filter factors by predicate type
Predicate Type Filter Factor
Col = literal 1/25
Col <> literal 1 – (1/25)
Col IS NULL 1/25
| Col IS NOT DISTINCT FROM 1/25
| Col IS DISTINCT FROM 1 – (1/25)
Col IN (literal list) (number of literals)/25
Col Op literal 1/3
Col LIKE literal 1/10
Col BETWEEN literal1 and literal2 1/10
Note:
Op is one of these operators: <, <=, >, >=.
Literal is any constant value that is known at bind time.
Example: If D is one of only five values in column C1, using RUNSTATS puts the
value 5 in column COLCARDF of SYSCOLUMNS. If there are no additional
statistics available, the filter factor for the predicate C1 = ’D’ is 1/5 (0.2).
Table 93. DB2 uniform filter factors by predicate type
Predicate type Filter factor
Col = literal 1/COLCARDF
Col <> literal 1 – (1/COLCARDF)
Col IS NULL 1/COLCARDF
| Col IS NOT DISTINCT FROM 1/COLCARDF
| Col IS DISTINCT FROM 1 – (1/COLCARDF)
Col IN (literal list) number of literals /COLCARDF
Col Op1 literal interpolation formula
Col Op2 literal interpolation formula
Col LIKE literal interpolation formula
Col BETWEEN literal1 and literal2 interpolation formula
Note:
Op1 is < or <=, and the literal is not a host variable.
Op2 is > or >=, and the literal is not a host variable.
Literal is any constant value that is known at bind time.
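The default and uniform factors for equal and IN-list predicates can be sketched as follows. This is illustrative Python, not DB2 code, and the function names are invented for this example:

```python
# Illustrative sketch of Tables 92 and 93 for equal and IN-list
# predicates. With no statistics, DB2 assumes 25 distinct values
# (filter factor 1/25 = 0.04); with COLCARDF available, it assumes
# a uniform distribution over COLCARDF distinct values.

def equal_ff(colcardf=None):
    return 1.0 / colcardf if colcardf else 1.0 / 25

def in_list_ff(n_literals, colcardf=None):
    return n_literals * equal_ff(colcardf)

print(equal_ff())                 # default: 0.04
print(equal_ff(colcardf=5))       # uniform, COLCARDF=5: 0.2
print(in_list_ff(3, colcardf=5))  # 3 literals: about 0.6
```
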
Filter factors for other predicate types: The examples selected in Table 92 on
page 686 and Table 93 represent only the most common types of predicates. If P1
is a predicate and F is its filter factor, then the filter factor of the predicate NOT P1
is (1 - F). But, filter factor calculation is dependent on many things, so a specific
filter factor cannot be given for all predicate types.
Interpolation formulas
Definition: For a predicate that uses a range of values, DB2 calculates the filter
factor by an interpolation formula. The formula is based on an estimate of the ratio
of the number of values in the range to the number of values in the entire column of
the table.
The formulas: The formulas that follow are rough estimates, subject to further
modification by DB2. They apply to a predicate of the form col op literal. The
value of (Total Entries) in each formula is estimated from the values in columns
HIGH2KEY and LOW2KEY in catalog table SYSIBM.SYSCOLUMNS for column col:
Total Entries = (HIGH2KEY value - LOW2KEY value).
v For the operators < and <=, where the literal is not a host variable:
(Literal value - LOW2KEY value) / (Total Entries)
v For the operators > and >=, where the literal is not a host variable:
(HIGH2KEY value - Literal value) / (Total Entries)
v For LIKE or BETWEEN:
(High literal value - Low literal value) / (Total Entries)
Example: Assume that for column C1, (HIGH2KEY value - LOW2KEY value) is 1200.
For the predicate C1 BETWEEN 800 AND 1100, DB2 calculates the filter factor F as:
F = (1100 - 800)/1200 = 1/4 = 0.25
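The interpolation formulas can be sketched in Python (illustrative only; the function names are invented). Assuming HIGH2KEY = 1200 and LOW2KEY = 0 for column C1, the sketch reproduces the BETWEEN result above:

```python
# Illustrative interpolation estimates; these are rough values that
# DB2 can modify further. Assumes HIGH2KEY=1200, LOW2KEY=0 for C1.

def range_ff(op, literal, high2key, low2key):
    total_entries = high2key - low2key
    if op in ('<', '<='):
        return (literal - low2key) / total_entries
    if op in ('>', '>='):
        return (high2key - literal) / total_entries
    raise ValueError('unsupported operator: ' + op)

def between_ff(low_lit, high_lit, high2key, low2key):
    return (high_lit - low_lit) / (high2key - low2key)

print(between_ff(800, 1100, 1200, 0))  # 0.25, matching the example
print(range_ff('<', 300, 1200, 0))     # 0.25
```
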
Defaults for interpolation: DB2 might not interpolate in some cases; instead, it
can use a default filter factor. Defaults for interpolation are:
v Relevant only for ranges, including LIKE and BETWEEN predicates
v Used only when interpolation is not adequate
v Based on the value of COLCARDF
v Used whether uniform or additional distribution statistics exist on the column if
either of the following conditions is met:
– The predicate does not contain constants
– COLCARDF < 4.
Table 94 shows interpolation defaults for the operators <, <=, >, >= and for LIKE
and BETWEEN.
Table 94. Default filter factors for interpolation
COLCARDF      Factor for Op   Factor for LIKE or BETWEEN
>=100000000 1/10,000 3/100000
>=10000000 1/3,000 1/10000
>=1000000 1/1,000 3/10000
>=100000 1/300 1/1000
>=10000 1/100 3/1000
>=1000 1/30 1/100
>=100 1/10 3/100
| >=2 1/3 1/10
| =1 1/1 1/1
| <=0 1/3 1/10
Note: Op is one of these operators: <, <=, >, >=.
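Table 94 amounts to a threshold lookup on COLCARDF. A hedged Python rendering (the function name is invented; the values are copied from the table):

```python
# Illustrative lookup of the Table 94 interpolation defaults.
# Each row: (minimum COLCARDF, factor for Op, factor for LIKE/BETWEEN).
DEFAULTS = [
    (100_000_000, 1 / 10_000, 3 / 100_000),
    (10_000_000,  1 / 3_000,  1 / 10_000),
    (1_000_000,   1 / 1_000,  3 / 10_000),
    (100_000,     1 / 300,    1 / 1_000),
    (10_000,      1 / 100,    3 / 1_000),
    (1_000,       1 / 30,     1 / 100),
    (100,         1 / 10,     3 / 100),
    (2,           1 / 3,      1 / 10),
    (1,           1 / 1,      1 / 1),
]

def interpolation_default(colcardf, like_or_between=False):
    for threshold, op_factor, like_factor in DEFAULTS:
        if colcardf >= threshold:
            return like_factor if like_or_between else op_factor
    return 1 / 10 if like_or_between else 1 / 3  # COLCARDF <= 0

print(interpolation_default(50_000))                     # 1/100
print(interpolation_default(500, like_or_between=True))  # 3/100
```
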
When they are used: Table 95 lists the types of predicates on which these
statistics are used.
Table 95. Predicates for which distribution statistics are used
Type of statistic   Single column or concatenated columns   Predicates
Frequency Single COL=literal
COL IS NULL
COL IN (literal-list)
COL op literal
COL BETWEEN literal AND literal
COL=host-variable
COL1=COL2
T1.COL=T2.COL
| COL IS NOT DISTINCT FROM
Frequency Concatenated COL=literal
| COL IS NOT DISTINCT FROM
Cardinality Single COL=literal
COL IS NULL
COL IN (literal-list)
COL op literal
COL BETWEEN literal AND literal
COL=host-variable
COL1=COL2
T1.COL=T2.COL
| COL IS NOT DISTINCT FROM
Cardinality Concatenated COL=literal
COL=:host-variable
COL1=COL2
| COL IS NOT DISTINCT FROM
Note: op is one of these operators: <, <=, >, >=.
| You can run RUNSTATS without the FREQVAL option, with the FREQVAL option in
| the correl-spec, with the FREQVAL option in the colgroup-spec, or in both, with the
| following effects:
| v If you run RUNSTATS without the FREQVAL option, RUNSTATS inserts rows for
| the 10 most frequent values for the first column of the specified index.
| v If you run RUNSTATS with the FREQVAL option in the correl-spec, RUNSTATS
| inserts rows for concatenated columns of an index. The NUMCOLS option
| specifies the number of concatenated index columns. The COUNT option
| specifies the number of frequent values. You can collect most-frequent values,
| least-frequent values, or both.
| v If you run RUNSTATS with the FREQVAL option in the colgroup-spec,
| RUNSTATS inserts rows for the columns in the column group that you specify.
| The COUNT option specifies the number of frequent values. You can collect
| most-frequent values, least-frequent values, or both.
See Part 2 of DB2 Utility Guide and Reference for more information about
RUNSTATS. DB2 uses the frequencies in column FREQUENCYF for predicates that
use the values in column COLVALUE and assumes that the remaining data are
uniformly distributed.
Suppose that the predicate is C1 IN (’3’,’5’) and that SYSCOLDIST contains these
values for column C1:
COLVALUE FREQUENCYF
’3’ .0153
’5’ .0859
’8’ .0627
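With these statistics, the filter factor of the IN-list predicate is the sum of the stored frequencies for the listed values. An illustrative sketch (not DB2 code):

```python
# Illustrative: the filter factor of C1 IN ('3','5') is the sum of the
# FREQUENCYF values for '3' and '5'. (Assumes every listed value has a
# row in SYSCOLDIST; values without statistics would need a default.)

freqs = {'3': 0.0153, '5': 0.0859, '8': 0.0627}

def in_list_ff(values, freqs):
    return sum(freqs[v] for v in values if v in freqs)

print(round(in_list_ff(['3', '5'], freqs), 4))  # 0.1012
```
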
| Suppose that columns C1 and C2 are correlated. Suppose also that the predicate is
C1=’3’ AND C2=’5’ and that SYSCOLDIST contains these values for columns C1
and C2:
COLVALUE FREQUENCYF
’1’ ’1’ .1176
’2’ ’2’ .0588
’3’ ’3’ .0588
’3’ ’5’ .1176
’4’ ’4’ .0588
’5’ ’3’ .1764
’5’ ’5’ .3529
’6’ ’6’ .0588
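Because the statistics are kept on the concatenated columns, DB2 can read the combined frequency directly instead of multiplying single-column estimates. The following comparison is illustrative only (it derives the single-column frequencies from the same pair data, which assumes the listed pairs cover essentially the whole distribution):

```python
# Illustrative: with frequency statistics on concatenated columns
# (C1,C2), the filter factor of C1='3' AND C2='5' is the stored
# frequency for the pair, which captures the correlation.
pair_freqs = {
    ('1', '1'): 0.1176, ('2', '2'): 0.0588, ('3', '3'): 0.0588,
    ('3', '5'): 0.1176, ('4', '4'): 0.0588, ('5', '3'): 0.1764,
    ('5', '5'): 0.3529, ('6', '6'): 0.0588,
}

correlated = pair_freqs[('3', '5')]   # 0.1176, read directly
# Under an independence assumption, the estimate would instead be the
# product of the single-column frequencies derived from the same data:
c1_is_3 = sum(f for (c1, _), f in pair_freqs.items() if c1 == '3')
c2_is_5 = sum(f for (_, c2), f in pair_freqs.items() if c2 == '5')
print(correlated, round(c1_is_3 * c2_is_5, 4))  # 0.1176 vs about 0.083
```
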
| Table T1 consists of columns C1, C2, C3, and C4. Index I1 is defined on table T1
| and contains columns C1, C2, and C3.
| Suppose that the simple predicates in the compound predicate have the following
| characteristics:
| C1='A' Matching predicate
| C3='B' Screening predicate
| C4='C' Stage 1, nonindexable predicate
| To determine the cost of accessing table T1 through index I1, DB2 performs these
| steps:
| Important: If you supply appropriate statistics at each level of filtering, DB2 is more
| likely to choose the most efficient access path.
Column correlation
Two columns of data, A and B of a single table, are correlated if the values in
column A do not vary independently of the values in column B.
Example: Table 96 is an excerpt from a large single table. Columns CITY and
STATE are highly correlated, and columns DEPTNO and SEX are entirely
independent.
Table 96. Data from the CREWINFO table
CITY STATE DEPTNO SEX EMPNO ZIPCODE
Fresno CA A345 F 27375 93650
Fresno CA J123 M 12345 93710
Fresno CA J123 F 93875 93650
Fresno CA J123 F 52325 93792
New York NY J123 M 19823 09001
New York NY A345 M 15522 09530
Miami FL B499 M 83825 33116
Miami FL A345 F 35785 34099
Los Angeles CA X987 M 12131 90077
Los Angeles CA A345 M 38251 90091
In this simple example, for every value of column CITY that equals 'FRESNO', there
is the same value in column STATE ('CA').
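The rows of Table 96 make the effect easy to quantify. In this illustrative Python sketch (not DB2 code), multiplying the single-column filter factors, which is the independence assumption, understates the true combined filtering:

```python
# Illustrative, using the CITY/STATE pairs from Table 96.
rows = [
    ('Fresno', 'CA'), ('Fresno', 'CA'), ('Fresno', 'CA'), ('Fresno', 'CA'),
    ('New York', 'NY'), ('New York', 'NY'),
    ('Miami', 'FL'), ('Miami', 'FL'),
    ('Los Angeles', 'CA'), ('Los Angeles', 'CA'),
]
n = len(rows)
ff_city = sum(city == 'Fresno' for city, _ in rows) / n    # 0.4
ff_state = sum(state == 'CA' for _, state in rows) / n     # 0.6
assumed = ff_city * ff_state  # independence assumption: about 0.24
actual = sum(city == 'Fresno' and state == 'CA'
             for city, state in rows) / n                  # 0.4
print(round(assumed, 2), actual)  # 0.24 0.4
```
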
When the columns in a predicate correlate but the correlation is not reflected in
catalog statistics, the actual filtering effect can be significantly different from the
filter factor. Table 97 on page 693 shows how the actual filtering effect and the DB2
filter factor can differ, and how that difference can affect index choice and
performance.
DB2 chooses an index that returns the fewest rows, partly determined by the
smallest filter factor of the matching columns. Assume that filter factor is the only
influence on the access path. The combined filtering of columns CITY and STATE
seems very good, whereas the matching columns for the second index do not seem
to filter as much. Based on those calculations, DB2 chooses Index 1 as an access
path for Query 1.
The problem is that the filtering of columns CITY and STATE should not look as good.
Column STATE does almost no filtering. Since columns DEPTNO and SEX do a
better job of filtering out rows, DB2 should favor Index 2 over Index 1.
In the case of Index 3, because the columns CITY and STATE of Predicate 1 are
correlated, the index access is not improved as much as estimated by the
screening predicates and therefore Index 4 might be a better choice. (Note that
index screening also occurs for indexes with matching columns greater than zero.)
Multiple table joins: In Query 2, Table 98 on page 694 is added to the original
query (see “Query 1” on page 692) to show the impact of column correlation on
join queries.
Query 2
SELECT ... FROM CREWINFO T1,DEPTINFO T2
WHERE T1.CITY = ’FRESNO’ AND T1.STATE=’CA’ (PREDICATE 1)
AND T1.DEPTNO = T2.DEPT AND T2.DEPTNAME = ’LEGAL’;
The order that tables are accessed in a join statement affects performance. The
estimated combined filtering of Predicate1 is lower than its actual filtering. So table
CREWINFO might look better as the first table accessed than it should.
Also, due to the smaller estimated size for table CREWINFO, a nested loop join
might be chosen for the join method. But, if many rows are selected from table
CREWINFO because Predicate1 does not filter as many rows as estimated, then
another join method or join sequence might be better.
The last two techniques are discussed in “Special techniques to influence access
path selection” on page 713.
The RUNSTATS utility collects the statistics DB2 needs to make proper choices
about queries. With RUNSTATS, you can collect statistics on the concatenated key
columns of an index and the number of distinct values for those concatenated
columns. This gives DB2 accurate information to calculate the filter factor for the
query.
where:
v The first three index keys are used (MATCHCOLS = 3).
v An index exists on C1, C2, C3, C4, C5.
v Some or all of the columns in the index are correlated in some way.
Therefore, to understand your PLAN_TABLE results, you must understand how DB2
manipulates predicates. The information in Table 91 on page 680 is also helpful.
A set of simple, Boolean term, equal predicates on the same column that are
connected by OR predicates can be converted into an IN-list predicate. For
example: C1=5 or C1=10 or C1=15 converts to C1 IN (5,10,15).
The outer join operation gives you these result table rows:
v The rows with matching values of C1 in tables T1 and T2 (the inner join result)
v The rows from T1 where C1 has no corresponding value in T2
v The rows from T2 where C1 has no corresponding value in T1
However, when you apply the predicate, you remove all rows in the result table that
came from T2 where C1 has no corresponding value in T1. DB2 transforms the full
join into a left join, which is more efficient:
SELECT * FROM T1 X LEFT JOIN T2 Y
ON X.C1=Y.C1
WHERE X.C2 > 12;
Example: The predicate X.C2>12 filters out all null values that result from the right
join:
SELECT * FROM T1 X RIGHT JOIN T2 Y
ON X.C1=Y.C1
WHERE X.C2>12;
Therefore, DB2 can transform the right join into a more efficient inner join without
changing the result:
SELECT * FROM T1 X INNER JOIN T2 Y
ON X.C1=Y.C1
WHERE X.C2>12;
The predicate that follows a join operation must have the following characteristics
before DB2 transforms an outer join into a simpler outer join or into an inner join:
v The predicate is a Boolean term predicate.
These predicates are examples of predicates that can cause DB2 to simplify join
operations:
T1.C1 > 10
T1.C1 IS NOT NULL
T1.C1 > 10 OR T1.C2 > 15
T1.C1 > T2.C1
T1.C1 IN (1,2,4)
T1.C1 LIKE 'ABC%'
T1.C1 BETWEEN 10 AND 100
12 BETWEEN T1.C1 AND 100
Example: This example shows how DB2 can simplify a join operation because the
query contains an ON clause that eliminates rows with unmatched values:
SELECT * FROM T1 X LEFT JOIN T2 Y
FULL JOIN T3 Z ON Y.C1=Z.C1
ON X.C1=Y.C1;
Because the last ON clause eliminates any rows from the result table for which
column values that come from T1 or T2 are null, DB2 can replace the full join with a
more efficient left join to achieve the same result:
SELECT * FROM T1 X LEFT JOIN T2 Y
LEFT JOIN T3 Z ON Y.C1=Z.C1
ON X.C1=Y.C1;
In one case, DB2 transforms a full outer join into a left join when you cannot write
code to do it. This is the case where a view specifies a full outer join, but a
subsequent query on that view requires only a left outer join.
This view contains rows for which values of C2 that come from T1 are null.
However, if you execute the following query, you eliminate the rows with null values
for C2 that come from T1:
SELECT * FROM V1
WHERE T1C2 > 10;
Therefore, for this query, a left join between T1 and T2 would have been adequate.
DB2 can execute this query as if the view V1 were generated with a left outer join so
that the query runs more efficiently.
Rules for generating predicates: For single-table or inner join queries, DB2
generates predicates for transitive closure if:
v The query has an equal type predicate: COL1=COL2. This could be:
– A local predicate
For outer join queries, DB2 generates predicates for transitive closure if the query
has an ON clause of the form COL1=COL2 and a before join predicate that has
one of the following formats:
v COL1 op value
op is =, <>, >, >=, <, or <=
v COL1 (NOT) BETWEEN value1 AND value2
DB2 generates a transitive closure predicate for an outer join query only if the
generated predicate does not reference the table with unmatched rows. That is, the
generated predicate cannot reference the left table for a left outer join or the right
table for a right outer join.
| For a multiple-CCSID query, DB2 does not generate a transitive closure predicate if
| the predicate that would be generated has these characteristics:
| v The generated predicate is a range predicate (op is >, >=, <, or <=).
| v Evaluation of the query with the generated predicate results in different CCSID
| conversion from evaluation of the query without the predicate. See Chapter 4 of
| DB2 SQL Reference for information on CCSID conversion.
When a predicate meets the transitive closure conditions, DB2 generates a new
predicate, whether or not it already exists in the WHERE clause.
Example of transitive closure for an inner join: Suppose that you have written
this query, which meets the conditions for transitive closure:
SELECT * FROM T1, T2
WHERE T1.C1=T2.C1 AND
T1.C1>10;
DB2 generates an additional predicate, T2.C1>10, so that the query is evaluated
as this more efficient query:
SELECT * FROM T1, T2
WHERE T1.C1=T2.C1 AND
T1.C1>10 AND
T2.C1>10;
| Example of transitive closure for an outer join: Suppose that you have written
| this query:
| SELECT * FROM
| (SELECT T1.C1 FROM T1 WHERE T1.C1>10) X
| LEFT JOIN
| (SELECT T2.C1 FROM T2) Y
| ON X.C1 = Y.C1;
| The before join predicate, T1.C1>10, meets the conditions for transitive closure, so
| DB2 generates a query that has the same result as this more-efficient query:
| SELECT * FROM
| (SELECT T1.C1 FROM T1 WHERE T1.C1>10) X
| LEFT JOIN
| (SELECT T2.C1 FROM T2 WHERE T2.C1>10) Y
| ON X.C1 = Y.C1;
| Adding extra predicates: DB2 performs predicate transitive closure only on equal
| and range predicates. However, you can help DB2 to choose a better access path
| by adding transitive closure predicates for other types of operators, such as IN or
| LIKE. For example, consider the following SELECT statement:
| SELECT * FROM T1,T2
| WHERE T1.C1=T2.C1
| AND T1.C1 LIKE ’A%’;
| If T1.C1=T2.C1 is true, and T1.C1 LIKE ’A%’ is true, then T2.C1 LIKE ’A%’ must also
| be true. Therefore, you can give DB2 extra information for evaluating the query by
| adding T2.C1 LIKE ’A%’:
| SELECT * FROM T1,T2
| WHERE T1.C1=T2.C1
| AND T1.C1 LIKE ’A%’
| AND T2.C1 LIKE ’A%’;
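The safety of the added predicate can be checked with a small sketch (Python here, with invented sample rows; only the column and pattern follow the example above): joining with and without the redundant T2.C1 LIKE ’A%’ predicate returns the same rows.

```python
# Sketch: verify that adding the transitive-closure predicate
# T2.C1 LIKE 'A%' does not change the join result (sample data is invented).
t1 = [{"C1": "ABLE"}, {"C1": "BAKER"}, {"C1": "ALPHA"}]
t2 = [{"C1": "ABLE"}, {"C1": "ALPHA"}, {"C1": "CHARLIE"}]

def like_a(value):
    # Equivalent of SQL: value LIKE 'A%'
    return value.startswith("A")

# Original query: T1.C1 = T2.C1 AND T1.C1 LIKE 'A%'
original = [(x["C1"], y["C1"]) for x in t1 for y in t2
            if x["C1"] == y["C1"] and like_a(x["C1"])]

# With the extra predicate: ... AND T2.C1 LIKE 'A%'
augmented = [(x["C1"], y["C1"]) for x in t1 for y in t2
             if x["C1"] == y["C1"] and like_a(x["C1"]) and like_a(y["C1"])]

assert original == augmented  # the added predicate is redundant but harmless
```

Because the equal join predicate forces T1.C1 and T2.C1 to match, the extra predicate can only help DB2 filter T2 earlier; it never removes a qualifying row.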
The two ways to change the access path for a query that contains host variables
are:
v Bind the package or plan that contains the query with the option
REOPT(ALWAYS) or the option REOPT(ONCE).
v Rewrite the query.
| Example: To determine which queries in plans and packages that are bound with
| the REOPT(ALWAYS) bind option will be reoptimized at run time, execute the
| following SELECT statements:
| SELECT PLNAME,
| CASE WHEN STMTNOI <> 0
| THEN STMTNOI
| ELSE STMTNO
| END AS STMTNUM,
| SEQNO, TEXT
| FROM SYSIBM.SYSSTMT
| WHERE STATUS IN (’B’,’F’,’G’,’J’)
| ORDER BY PLNAME, STMTNUM, SEQNO;
| SELECT COLLID, NAME, VERSION,
| CASE WHEN STMTNOI <> 0
| THEN STMTNOI
| ELSE STMTNO
| END AS STMTNUM,
| SEQNO, STMT
| FROM SYSIBM.SYSPACKSTMT
| WHERE STATUS IN (’B’,’F’,’G’,’J’)
| ORDER BY COLLID, NAME, VERSION, STMTNUM, SEQNO;
| If you specify the bind option VALIDATE(RUN), and a statement in the plan or
| package is not bound successfully, that statement is incrementally bound at run
| time. If you also specify the bind option REOPT(ALWAYS), DB2 reoptimizes the
| access path during the incremental bind.
| Example: To determine which plans and packages have statements that will be
| incrementally bound, execute the following SELECT statements:
| SELECT DISTINCT NAME
| FROM SYSIBM.SYSSTMT
| WHERE STATUS = ’F’ OR STATUS = ’H’;
| SELECT DISTINCT COLLID, NAME, VERSION
| FROM SYSIBM.SYSPACKSTMT
| WHERE STATUS = ’F’ OR STATUS = ’H’;
| To use the REOPT(ONCE) bind option most efficiently, first determine which
| dynamic SQL statements in your applications perform poorly with the
| REOPT(NONE) bind option and the REOPT(ALWAYS) bind option. Separate the
| code containing those statements into units that you bind into packages with the
| REOPT(ONCE) option. Bind the rest of the code into packages using the
| REOPT(NONE) bind option or the REOPT(ALWAYS) bind option, as appropriate.
| Then bind the plan with the REOPT(NONE) bind option. A dynamic statement in a
| package that is bound with REOPT(ONCE) is a candidate for reoptimization the first
| time that the statement is run.
| Example: To determine which queries in plans and packages that are bound with
| the REOPT(ONCE) bind option will be reoptimized at run time, execute the
| following SELECT statements:
| SELECT PLNAME,
| CASE WHEN STMTNOI <> 0
| THEN STMTNOI
| ELSE STMTNO
| END AS STMTNUM,
| SEQNO, TEXT
| FROM SYSIBM.SYSSTMT
| WHERE STATUS IN (’J’)
| ORDER BY PLNAME, STMTNUM, SEQNO;
| SELECT COLLID, NAME, VERSION,
| CASE WHEN STMTNOI <> 0
| THEN STMTNOI
| ELSE STMTNO
| END AS STMTNUM,
| SEQNO, STMT
| FROM SYSIBM.SYSPACKSTMT
| WHERE STATUS IN (’J’)
| ORDER BY COLLID, NAME, VERSION, STMTNUM, SEQNO;
| If you specify the bind option VALIDATE(RUN), and a statement in the plan or
| package is not bound successfully, that statement is incrementally bound at run
| time.
| Example: To determine which plans and packages have statements that will be
| incrementally bound, execute the following SELECT statements:
| SELECT DISTINCT NAME
| FROM SYSIBM.SYSSTMT
| WHERE STATUS = ’F’ OR STATUS = ’H’;
| SELECT DISTINCT COLLID, NAME, VERSION
| FROM SYSIBM.SYSPACKSTMT
| WHERE STATUS = ’F’ OR STATUS = ’H’;
An equal predicate has a default filter factor of 1/COLCARDF. The actual filter factor
might be quite different.
Query:
SELECT * FROM DSN8810.EMP
WHERE SEX = :HV1;
Assumptions: Because the column SEX has only two different values, 'M' and 'F',
the value COLCARDF for SEX is 2. If the numbers of male and female employees
are not equal, the actual filter factor of 1/2 is larger or smaller than the default,
depending on whether :HV1 is set to 'M' or 'F'.
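The gap between the default and the actual filter factor can be illustrated with a short sketch (Python; the 70/30 split and the table size are invented for illustration, not taken from the sample tables):

```python
# Sketch: default vs. actual filter factor for the predicate SEX = :HV1.
colcardf = 2                 # SEX has two distinct values, 'M' and 'F'
default_ff = 1 / colcardf    # DB2's default estimate: 1/COLCARDF = 0.5

total_rows = 10000
rows_m, rows_f = 7000, 3000  # assumed uneven distribution of employees

actual_ff = {"M": rows_m / total_rows,   # 0.7, larger than the default
             "F": rows_f / total_rows}   # 0.3, smaller than the default

assert default_ff == 0.5
assert actual_ff["M"] > default_ff > actual_ff["F"]
```

With a host variable, DB2 must use the default 0.5 at bind time; REOPT(ALWAYS) lets it use the actual value of :HV1 instead.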
Recommendation: One of these two actions can improve the access path:
v Bind the package or plan that contains the query with the REOPT(ALWAYS) bind
option. This action causes DB2 to reoptimize the query at run time, using the
input values you provide. You might also consider binding the package or plan
with the REOPT(ONCE) bind option.
v Write predicates to influence the DB2 selection of an access path, based on your
knowledge of actual filter factors. For example, you can break the query into
three different queries, two of which use constants. DB2 can then determine the
exact filter factor for most cases when it binds the plan.
SELECT (HV1);
WHEN ('M')
DO;
EXEC SQL SELECT * FROM DSN8810.EMP
WHERE SEX = 'M';
END;
WHEN ('F')
DO;
EXEC SQL SELECT * FROM DSN8810.EMP
WHERE SEX = 'F';
END;
OTHERWISE
DO;
EXEC SQL SELECT * FROM DSN8810.EMP
WHERE SEX = :HV1;
END;
END;
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Query:
SELECT * FROM T1
WHERE C1 BETWEEN :HV1 AND :HV2
AND C2 BETWEEN :HV3 AND :HV4;
Recommendation: If DB2 does not choose T1X1, rewrite the query as follows, so
that DB2 does not choose index T1X2 on C2:
SELECT * FROM T1
WHERE C1 BETWEEN :HV1 AND :HV2
AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Query:
SELECT * FROM T1
WHERE C1 BETWEEN :HV1 AND :HV2
AND C2 BETWEEN :HV3 AND :HV4;
Assumptions: You know that the application provides both narrow and wide ranges
on C1 and C2. Hence, default filter factors do not allow DB2 to choose the best
access path in all cases. For example, a small range on C1 favors index T1X1 on
C1, a small range on C2 favors index T1X2 on C2, and wide ranges on both C1
and C2 favor a table space scan.
Recommendation: If DB2 does not choose the best access path, try either of the
following changes to your application:
v Use a dynamic SQL statement and embed the ranges of C1 and C2 in the
statement. With access to the actual range values, DB2 can estimate the actual
filter factors for the query. Preparing the statement each time it is executed
requires an extra step, but it can be worthwhile if the query accesses a large
amount of data.
v Include some simple logic to check the ranges of C1 and C2, and then execute
one of these static SQL statements, based on the ranges of C1 and C2:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2
AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
Example 4: ORDER BY
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Query:
SELECT * FROM T1
WHERE C1 BETWEEN :HV1 AND :HV2
ORDER BY C2;
If the actual number of rows that satisfy the range predicate is significantly different
from the estimate, DB2 might not choose the best access path.
Tables A, B, and C each have indexes on columns C1, C2, C3, and C4.
Query:
SELECT * FROM A, B, C
WHERE A.C1 = B.C1
AND A.C2 = C.C2
AND A.C2 BETWEEN :HV1 AND :HV2
AND A.C3 BETWEEN :HV3 AND :HV4
AND A.C4 < :HV5
AND B.C2 BETWEEN :HV6 AND :HV7
AND B.C3 < :HV8
AND C.C2 < :HV9;
Assumptions: The actual filter factors on table A are much larger than the default
factors. Hence, DB2 underestimates the number of rows selected from table A and
wrongly chooses that as the first table in the join.
The result of making the join predicate between A and B a nonindexable predicate
(which cannot be used in single index access) disfavors the use of the index on
column C1. This, in turn, might lead DB2 to access table A or B first. Or, it might
lead DB2 to change the access type of table A or B, thereby influencing the join
sequence of the other tables.
Decision needed: You can often write two or more SQL statements that achieve
identical results, particularly if you use subqueries. The statements have different
access paths, however, and probably perform differently.
Topic overview: The topics that follow describe different methods to achieve the
results intended by a subquery and tell what DB2 does for each method. The
information should help you estimate what method performs best for your query.
Finally, for a comparison of the three methods as applied to a single task, see:
v “Subquery tuning” on page 709
Correlated subqueries
Definition: A correlated subquery refers to at least one column of the outer query.
Example: In the following query, the correlation name, X, illustrates the subquery’s
reference to the outer query block.
SELECT * FROM DSN8810.EMP X
WHERE JOB = ’DESIGNER’
AND EXISTS (SELECT 1
FROM DSN8810.PROJ
WHERE DEPTNO = X.WORKDEPT
AND MAJPROJ = ’MA2100’);
What DB2 does: A correlated subquery is evaluated for each qualified row of the
outer query that refers to it. In executing the example, DB2:
1. Reads a row from table EMP where JOB=’DESIGNER’.
2. Searches for the value of WORKDEPT from that row, in a table stored in
memory.
The in-memory table saves executions of the subquery. If the subquery has
already been executed with the value of WORKDEPT, the result of the subquery
is in the table and DB2 does not execute it again for the current row. Instead,
DB2 can skip to step 5.
3. Executes the subquery, if the value of WORKDEPT is not in memory. That
requires searching the PROJ table to check whether there is any project, where
MAJPROJ is ’MA2100’, for which the current WORKDEPT is responsible.
4. Stores the value of WORKDEPT and the result of the subquery in memory.
5. Returns the values of the current row of EMP to the application.
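The caching in steps 2 through 4 can be sketched as a memo table keyed on the correlation value (Python; the sample rows and the EXISTS check are stand-ins for the EMP/PROJ example, not DB2 internals):

```python
# Sketch: DB2's in-memory table behaves like a cache keyed on the
# correlation value (WORKDEPT). Sample rows are invented.
emp = [{"EMPNO": 10, "JOB": "DESIGNER", "WORKDEPT": "D11"},
       {"EMPNO": 20, "JOB": "DESIGNER", "WORKDEPT": "D11"},
       {"EMPNO": 30, "JOB": "DESIGNER", "WORKDEPT": "E21"}]
proj = [{"DEPTNO": "D11", "MAJPROJ": "MA2100"}]

memo = {}           # the "in-memory table"
executions = 0      # how many times the subquery actually runs

def subquery(workdept):
    # EXISTS (SELECT 1 FROM PROJ WHERE DEPTNO = :workdept AND MAJPROJ = 'MA2100')
    global executions
    executions += 1
    return any(p["DEPTNO"] == workdept and p["MAJPROJ"] == "MA2100"
               for p in proj)

result = []
for row in emp:                       # step 1: read a qualifying EMP row
    key = row["WORKDEPT"]
    if key not in memo:               # steps 2-4: check cache, execute, remember
        memo[key] = subquery(key)
    if memo[key]:                     # step 5: return the row if it qualifies
        result.append(row["EMPNO"])

assert result == [10, 20]   # both D11 designers qualify
assert executions == 2      # the subquery ran once per distinct WORKDEPT
```

The second D11 row skips the subquery entirely, which is exactly the saving the in-memory table provides.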
Notes on the in-memory table: The in-memory table is applicable if the operator
of the predicate that contains the subquery is one of the following operators:
Noncorrelated subqueries
Definition: A noncorrelated subquery makes no reference to outer queries.
Example:
SELECT * FROM DSN8810.EMP
WHERE JOB = ’DESIGNER’
AND WORKDEPT IN (SELECT DEPTNO
FROM DSN8810.PROJ
WHERE MAJPROJ = ’MA2100’);
What DB2 does: A noncorrelated subquery is executed once when the cursor is
opened for the query. What DB2 does to process it depends on whether it returns a
single value or more than one value. The query in the preceding example can
return more than one value.
Single-value subqueries
When the subquery is contained in a predicate with a simple operator, the subquery
is required to return 1 or 0 rows. The simple operator can be one of the following
operators:
<, <=, >, >=, =, <>, NOT <, NOT <=, NOT >, NOT >=
What DB2 does: When the cursor is opened, the subquery executes. If it returns
more than one row, DB2 issues an error. The predicate that contains the subquery
is treated like a simple predicate with a constant specified, for example,
WORKDEPT <= ’value’.
| Stage 1 and stage 2 processing: The rules for determining whether a predicate
| with a noncorrelated subquery that returns a single value is stage 1 or stage 2 are
| generally the same as for the same predicate with a single variable.
What DB2 does: If possible, DB2 reduces a subquery that returns more than one
row to one that returns only a single row. That occurs when there is a range
comparison along with ANY, ALL, or SOME. The following query is an example:
SELECT * FROM DSN8810.EMP
WHERE JOB = ’DESIGNER’
AND WORKDEPT <= ANY (SELECT DEPTNO
FROM DSN8810.PROJ
WHERE MAJPROJ = ’MA2100’);
DB2 calculates the maximum value for DEPTNO from table DSN8810.PROJ and
removes the ANY keyword from the query. After this transformation, the subquery is
treated like a single-value subquery.
That transformation can be made with a maximum value if the range operator is:
v > or >= with the quantifier ALL
v < or <= with the quantifier ANY or SOME
The transformation can be made with a minimum value if the range operator is:
v < or <= with the quantifier ALL
v > or >= with the quantifier ANY or SOME
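The maximum/minimum rules above can be checked mechanically. This sketch (Python, over arbitrary sample values) verifies the two <= cases: x <= ANY(S) holds exactly when x <= MAX(S), and x <= ALL(S) holds exactly when x <= MIN(S).

```python
# Sketch: reducing a multiple-value subquery to a single value.
# x <= ANY(S) iff x <= max(S); x <= ALL(S) iff x <= min(S).
s = [5, 9, 12]   # arbitrary stand-in for the subquery result

for x in range(0, 15):
    assert (any(x <= v for v in s)) == (x <= max(s))
    assert (all(x <= v for v in s)) == (x <= min(s))
```

This is why DB2 can compute MAX(DEPTNO) once and then treat the predicate like a single-value comparison.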
When the subquery result is a character data type and the left side of the predicate
is a datetime data type, then the result is placed in a work file without sorting. For
some noncorrelated subqueries that use IN, NOT IN, = ANY, <> ANY, = ALL, or <>
ALL comparison operators, DB2 can more accurately pinpoint an entry point into the
work file, thus further reducing the amount of scanning that is done.
Results from EXPLAIN: For information about the result in a plan table for a
subquery that is sorted, see “When are aggregate functions evaluated?
(COLUMN_FN_EVAL)” on page 745.
For a SELECT statement, DB2 does the transformation if the following conditions
are true:
v The transformation does not introduce redundancy.
v The subquery appears in a WHERE clause.
For an UPDATE or DELETE statement, or a SELECT statement that does not meet
the previous conditions for transformation, DB2 does the transformation of a
correlated subquery into a join if the following conditions are true:
v The transformation does not introduce redundancy.
v The subquery is correlated to its immediate outer query.
v The FROM clause of the subquery contains only one table, and the outer query
(for SELECT), UPDATE, or DELETE references only one table.
v If the outer predicate is a quantified predicate with an operator of =ANY or an IN
predicate, the following conditions are true:
– The left side of the outer predicate is a single column.
– The right side of the outer predicate is a subquery that references a single
column.
– The two columns have the same data type and length.
v The subquery does not contain the GROUP BY or DISTINCT clauses.
v The subquery does not contain aggregate functions.
v The SELECT clause of the subquery does not contain a user-defined function
with an external action or a user-defined function that modifies data.
v The subquery predicate is a Boolean term predicate.
v The predicates in the subquery that provide correlation are stage 1 predicates.
v The subquery does not contain nested subqueries.
v The subquery does not contain a self-referencing UPDATE or DELETE.
v For a SELECT statement, the query does not contain the FOR UPDATE OF
clause.
v For an UPDATE or DELETE statement, the statement is a searched UPDATE or
DELETE.
v For a SELECT statement, parallelism is not enabled.
For a statement with multiple subqueries, DB2 does the transformation only on the
last subquery in the statement that qualifies for transformation.
Example: The following subquery can be transformed into a join because it meets
the first set of conditions for transformation:
SELECT * FROM EMP
WHERE DEPTNO IN
(SELECT DEPTNO FROM DEPT
WHERE LOCATION IN (’SAN JOSE’, ’SAN FRANCISCO’)
AND DIVISION = ’MARKETING’);
Example: The following subquery can be transformed into a join because it meets
the second set of conditions for transformation:
UPDATE T1 SET T1.C1 = 1
WHERE T1.C1 =ANY
(SELECT T2.C1 FROM T2
WHERE T2.C2 = T1.C2);
Results from EXPLAIN: For information about the result in a plan table for a
subquery that is transformed into a join operation, see “Is a subquery transformed
into a join?” on page 745.
Subquery tuning
The following three queries all retrieve the same rows. All three retrieve data about
all designers in departments that are responsible for projects that are part of major
project MA2100. These three queries show that there are several ways to retrieve a
desired result.
If you need columns from both tables EMP and PROJ in the output, you must use a
join.
In general, query A might be the one that performs best. However, if there is no
index on DEPTNO in table PROJ, then query C might perform best. The
IN-subquery predicate in query C is indexable. Therefore, if an index on
WORKDEPT exists, DB2 might do IN-list access on table EMP. If you decide that a
join cannot be used and there is an available index on DEPTNO in table PROJ,
then query B might perform best.
When looking at a problem subquery, see if the query can be rewritten into another
format or see if there is an index that you can create to help improve the
performance of the subquery.
| The following example demonstrates how you can use a partitioning index to
| enable a limited partition scan, which reduces the set of partitions that DB2 needs
| to examine to satisfy a query predicate.
| Suppose that you create table Q1, with partitioning index DATE_IX and DPSI
| STATE_IX:
| CREATE TABLESPACE TS1 NUMPARTS 3;
|
| CREATE TABLE Q1 (DATE DATE,
| CUSTNO CHAR(5),
| STATE CHAR(2),
| PURCH_AMT DECIMAL(9,2))
| IN TS1
| PARTITION BY (DATE)
| (PARTITION 1 ENDING AT (’2002-1-31’),
| PARTITION 2 ENDING AT (’2002-2-28’),
| PARTITION 3 ENDING AT (’2002-3-31’));
|
| CREATE INDEX DATE_IX ON Q1 (DATE) PARTITIONED;
|
| CREATE INDEX STATE_IX ON Q1 (STATE) PARTITIONED;
|
| Now suppose that you want to execute the following query against table Q1:
| SELECT CUSTNO, PURCH_AMT
| FROM Q1
| WHERE STATE = ’CA’;
| Because the predicate is based only on values of a DPSI key (STATE), DB2 must
| examine all partitions to find the matching rows.
| Now suppose that you modify the query in the following way:
| SELECT CUSTNO, PURCH_AMT
| FROM Q1
| WHERE DATE BETWEEN ’2002-01-01’ AND ’2002-01-31’ AND
| STATE = ’CA’;
| Because the predicate is now based on values of a partitioning index key (DATE)
| and on values of a DPSI key (STATE), DB2 can eliminate the scanning of data
| partitions 2 and 3, which do not satisfy the query for the partitioning key. This can
| be determined at bind time because the columns of the predicate are compared to
| constants.
| Now suppose that you use host variables instead of constants in the same query:
| SELECT CUSTNO, PURCH_AMT
| FROM Q1
| WHERE DATE BETWEEN :hv1 AND :hv2 AND
| STATE = :hv3;
| DB2 can use the predicate on the partitioning column to eliminate the scanning of
| unneeded partitions at run time.
| Writing queries to take advantage of limited partition scan is especially useful when
| a correlation exists between columns that are in a partitioning index and columns
| that are in a DPSI.
| For example, suppose that you create table Q2, with partitioning index DATE_IX
| and DPSI ORDERNO_IX:
| CREATE TABLESPACE TS2 NUMPARTS 3;
|
| CREATE TABLE Q2 (DATE DATE,
| ORDERNO CHAR(8),
| STATE CHAR(2),
| PURCH_AMT DECIMAL(9,2))
| IN TS2
| PARTITION BY (DATE)
| (PARTITION 1 ENDING AT (’2000-12-31’),
| PARTITION 2 ENDING AT (’2001-12-31’),
| PARTITION 3 ENDING AT (’2002-12-31’));
|
| CREATE INDEX DATE_IX ON Q2 (DATE) PARTITIONED CLUSTER;
|
| CREATE INDEX ORDERNO_IX ON Q2 (ORDERNO) PARTITIONED;
| Also suppose that the first 4 bytes of each ORDERNO column value represent the
| four-digit year in which the order is placed. This means that the DATE column and
| the ORDERNO column are correlated.
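Because of that correlation, an application that knows an order number can add a redundant predicate on the partitioning column DATE to enable a limited partition scan. This sketch (Python; the helper name and the assumption that orders always fall within the encoded calendar year are illustrative, not part of DB2) derives the bounds:

```python
# Sketch: derive DATE bounds from the year encoded in the first 4 bytes
# of ORDERNO, so a query on Q2 can add "DATE BETWEEN :lo AND :hi".
def date_bounds_for_order(orderno):
    year = orderno[:4]                 # assumption: first 4 bytes are the year
    return f"{year}-01-01", f"{year}-12-31"

lo, hi = date_bounds_for_order("2002A417")
assert (lo, hi) == ("2002-01-01", "2002-12-31")

# The application can then run, for example:
#   SELECT ... FROM Q2
#     WHERE ORDERNO = :hv1
#       AND DATE BETWEEN :lo AND :hi;   -- partitioning-key predicate
```

The added BETWEEN predicate lets DB2 eliminate the partitions for other years, even though the original predicate referenced only the DPSI key.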
Important
This section describes tactics for rewriting queries and modifying catalog
statistics to influence how DB2 selects access paths. The access path
selection "tricks" that are described in this section might cause significant
performance degradation if they are not carefully implemented and monitored.
For example, the selection method might change in a later release of DB2,
causing your changes to degrade performance. Save the old catalog statistics
or SQL before you consider making any changes to control the choice of
access path. Before and after you make any changes, take performance
measurements. When you migrate to a new release, evaluate the performance
again. Be prepared to back out any changes that have degraded performance.
This section contains the following information about determining and changing
access paths:
v Obtaining information about access paths
v “Minimizing overhead for retrieving few rows: OPTIMIZE FOR n ROWS” on page
714
v “Fetching a limited number of rows: FETCH FIRST n ROWS ONLY” on page 714
v “Using the CARDINALITY clause to improve the performance of queries with
user-defined table function references” on page 717
v “Reducing the number of matching columns” on page 718
v “Rearranging the order of tables in a FROM clause” on page 723
v “Updating catalog statistics” on page 723
v “Using a subsystem parameter” on page 724
Example: Suppose that you write an application that requires information on only
the 20 employees with the highest salaries. To return only the rows of the employee
table for those 20 employees, you can write a query like this:
SELECT LASTNAME, FIRSTNAME, EMPNO, SALARY
FROM EMP
ORDER BY SALARY DESC
FETCH FIRST 20 ROWS ONLY;
| When both the FETCH FIRST n ROWS ONLY clause and the OPTIMIZE FOR n
| ROWS clause are specified, the value for the OPTIMIZE FOR n ROWS clause is
| used for access path selection.
| For example, in a query that specifies both clauses and includes OPTIMIZE FOR
| 20 ROWS, the OPTIMIZE FOR value of 20 rows is used for access path selection.
This section discusses the use of OPTIMIZE FOR n ROWS to affect the
performance of interactive SQL applications. Unless otherwise noted, this
information pertains to local applications. For more information on using OPTIMIZE
FOR n ROWS in distributed applications, see “Limiting the number of DRDA
network transmissions” on page 442.
What OPTIMIZE FOR n ROWS does: The OPTIMIZE FOR n ROWS clause lets an
application declare its intent to do either of these things:
v Retrieve only a subset of the result set
v Give priority to the retrieval of the first few rows
DB2 uses the OPTIMIZE FOR n ROWS clause to choose access paths that
minimize the response time for retrieving the first few rows. For distributed queries,
the value of n determines the number of rows that DB2 sends to the client on each
DRDA network transmission. See “Limiting the number of DRDA network
transmissions” on page 442 for more information on using OPTIMIZE FOR n
ROWS in the distributed environment.
Use OPTIMIZE FOR 1 ROW to avoid sorts: You can influence the access path
most by using OPTIMIZE FOR 1 ROW. OPTIMIZE FOR 1 ROW tells DB2 to select
an access path that returns the first qualifying row quickly. This means that
whenever possible, DB2 avoids any access path that involves a sort. If you specify
a value for n that is anything but 1, DB2 chooses an access path based on cost,
and you won’t necessarily avoid sorts.
How to specify OPTIMIZE FOR n ROWS for a CLI application: For a Call Level
Interface (CLI) application, you can specify that DB2 uses OPTIMIZE FOR n ROWS
for all queries. To do that, specify the keyword OPTIMIZEFORNROWS in the
initialization file. For more information, see Chapter 3 of DB2 ODBC Guide and
Reference.
How many rows you can retrieve with OPTIMIZE FOR n ROWS: The OPTIMIZE
FOR n ROWS clause does not prevent you from retrieving all the qualifying rows.
However, if you use OPTIMIZE FOR n ROWS, the total elapsed time to retrieve all
the qualifying rows might be significantly greater than if DB2 had optimized for the
entire result set.
If you add the OPTIMIZE FOR n ROWS clause to the statement, DB2 will probably
use the SALARY index directly because you have indicated that you expect to
retrieve the salaries of only the 20 most highly paid employees.
Example: The following statement uses that strategy to avoid a costly sort
operation:
SELECT LASTNAME,FIRSTNAME,EMPNO,SALARY
FROM EMP
ORDER BY SALARY DESC
OPTIMIZE FOR 20 ROWS;
When you specify OPTIMIZE FOR n ROWS for a remote query, a small value of n
can help limit the number of rows that flow across the network on any given
transmission.
You can improve the performance for receiving a large result set through a remote
query by specifying a large value of n in OPTIMIZE FOR n ROWS. When you
specify a large value, DB2 attempts to send the n rows in multiple transmissions.
For better performance when retrieving a large result set, in addition to specifying
OPTIMIZE FOR n ROWS with a large value of n in your query, do not execute
other SQL statements until the entire result set for the query is processed.
For local or remote queries, to influence the access path most, specify OPTIMIZE
FOR 1 ROW. This value does not have a detrimental effect on distributed queries.
| To minimize contention among applications that access tables with this design,
| specify the VOLATILE keyword when you create or alter the tables. A table that is
| defined with the VOLATILE keyword is known as a volatile table. When DB2
| executes queries that include volatile tables, DB2 uses index access whenever
| possible. As well as minimizing contention, using index access preserves the
| access sequence that the primary key provides.
| You can specify a cardinality value for a user-defined table function by using the
| CARDINALITY clause of the SQL CREATE FUNCTION or ALTER FUNCTION
| statement. However, this value applies to all invocations of the function, whereas a
| user-defined table function might return different numbers of rows, depending on
| the query in which it is referenced.
| To give DB2 a better estimate of the cardinality of a user-defined table function for a
| particular query, you can use the CARDINALITY or CARDINALITY MULTIPLIER
| clause in that query. DB2 uses those clauses at bind time when it calculates the
| access cost of the user-defined table function. Using this clause is recommended
| only for programs that run on DB2 UDB for z/OS because the clause is not
| supported on earlier versions of DB2.
| Add the CARDINALITY 30 clause to tell DB2 that, for this query, TUDF1 should
| return 30 rows:
| SELECT *
| FROM TABLE(TUDF1(3) CARDINALITY 30) AS X;
| Add the CARDINALITY MULTIPLIER 30 clause to tell DB2 that, for this query,
| TUDF2 should return 5*30, or 150, rows:
| SELECT *
| FROM TABLE(TUDF2(10) CARDINALITY MULTIPLIER 30) AS X;
Q1:
SELECT * FROM PART_HISTORY -- SELECT ALL PARTS
WHERE PART_TYPE = ’BB’ P1 -- THAT ARE ’BB’ TYPES
AND W_FROM = 3 P2 -- THAT WERE MADE IN CENTER 3
AND W_NOW = 3 P3 -- AND ARE STILL IN CENTER 3
+------------------------------------------------------------------------------+
| Filter factor of these predicates. |
| P1 = 1/1000= .001 |
| P2 = 1/50 = .02 |
| P3 = 1/50 = .02 |
|------------------------------------------------------------------------------|
| ESTIMATED VALUES | WHAT REALLY HAPPENS |
| filter data | filter data |
| index matchcols factor rows | index matchcols factor rows |
| ix2 2 .02*.02 40 | ix2 2 .02*.50 1000 |
| ix1 1 .001 100 | ix1 1 .001 100 |
+------------------------------------------------------------------------------+
DB2 picks IX2 to access the data, but IX1 would be roughly 10 times quicker. The
problem is that 50% of all parts from center number 3 are still in Center 3; they
have not moved. Assume that there are no statistics on the correlated columns in
catalog table SYSCOLDIST. Therefore, DB2 assumes that the parts from center
number 3 are evenly distributed among the 50 centers.
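The estimate error in the figure follows directly from that independence assumption. A short sketch (Python, using the filter factors shown; the table size of 100 000 rows is implied by the figure, since ix1 yields 100 rows at factor .001) reproduces the numbers:

```python
# Sketch: why DB2's estimate for ix2 is off by a factor of 25.
# Filter factors are taken from the figure; table size is implied by it.
table_rows = 100000

ff_p1 = 1 / 1000        # PART_TYPE = 'BB', default 1/COLCARDF = .001
ff_p2 = 1 / 50          # W_FROM = 3, default .02
ff_p3_default = 1 / 50  # W_NOW = 3, assumed independent: .02
ff_p3_actual = 0.50     # 50% of parts made in center 3 are still there

estimated = table_rows * ff_p2 * ff_p3_default   # what DB2 expects for ix2
actual = table_rows * ff_p2 * ff_p3_actual       # what really happens
via_ix1 = table_rows * ff_p1                     # rows qualified through ix1

assert round(estimated) == 40
assert round(actual) == 1000
assert round(via_ix1) == 100
```

With 1000 actual rows through IX2 versus 100 through IX1, the "cheaper" estimated path is roughly 10 times more expensive in practice.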
You can get the desired access path by changing the query. To discourage the use
of IX2 for this particular query, you can change the third predicate to be
nonindexable.
Now index IX2 is not picked, because it has only one match column. The preferred
index, IX1, is picked. The third predicate is a nonindexable predicate, so an index
is not used for the compound predicate.
You can make a predicate nonindexable in many ways. The recommended way is
to add 0 to a predicate that evaluates to a numeric value or to concatenate an
empty string to a predicate that evaluates to a character value.
Indexable                     Nonindexable
W_NOW = 3                     W_NOW + 0 = 3
PART_TYPE = ’BB’              PART_TYPE || ’’ = ’BB’
These techniques do not affect the result of the query and cause only a small
amount of overhead.
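That these rewrites preserve results is easy to check. This sketch (Python over invented sample values; + and string concatenation stand in for SQL's arithmetic and || operators) mirrors the add-0 and concatenate-empty-string techniques:

```python
# Sketch: the nonindexable rewrites evaluate to the same truth value
# for every row, so the query result is unchanged.
numeric_vals = [1, 3, 3, 7]          # invented column values
char_vals = ["AA", "BB", "BB"]

for v in numeric_vals:
    assert (v == 3) == (v + 0 == 3)          # C = 3  vs.  C + 0 = 3
for v in char_vals:
    assert (v == "BB") == (v + "" == "BB")   # C = 'BB'  vs.  C || '' = 'BB'
```

Only the optimizer's view of the predicate changes: with the expression on the column, the predicate no longer qualifies for index matching.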
The preferred technique for improving the access path when a table has correlated
columns is to generate catalog statistics on the correlated columns. You can do that
either by running RUNSTATS or by updating catalog table SYSCOLDIST manually.
To access the data in a star schema design, you often write SELECT statements
that include join operations between the fact table and the dimension tables, but no
join operations between dimension tables. These types of queries are known as
star join queries.
For a star join query, DB2 uses a special join type called a star join if the following
conditions are true:
v The tables meet the conditions that are specified in “Star join (JOIN_TYPE=’S’)”
on page 760.
v The STARJOIN system parameter is set to ENABLE, and the number of tables in
the query block is greater than or equal to the minimum number that is specified
in the SJTABLES system parameter.
See “Star join (JOIN_TYPE=’S’)” on page 760 for detailed discussions of these
system parameters.
This section gives suggestions for choosing indexes that might improve star join
query performance.
Follow these steps to derive a fact table index for a star join query that joins n
columns of fact table F to n dimension tables D1 through Dn:
1. Define the set of columns whose index key order is to be determined as the n
columns of fact table F that correspond to dimension tables. That is,
S={C1,...Cn} and L=n.
2. Calculate the density of all sets of L-1 columns in S.
3. Find the lowest density. Determine which column is not in the set of columns
with the lowest density. That is, find column Cm in S, such that for every Ci in
S, density(S-{Cm})<density(S-{Ci}).
4. Make Cm the Lth column of the index.
5. Remove Cm from S.
6. Decrement L by 1.
7. Repeat steps 2 through 6 n-2 times. The remaining column after iteration n-2 is
the first column of the index.
Example of determining column order for a fact table index: Suppose that a
star schema has three dimension tables with the following cardinalities:
cardD1=2000
cardD2=500
cardD3=100
Now suppose that the cardinalities of single columns and pairs of columns in the
fact table are:
cardC1=2000
cardC2=433
cardC3=100
cardC12=625000
cardC13=196000
cardC23=994
Step 1: Calculate the density of all pairs of columns in the fact table:
density(C1,C2)=625000/(2000*500)=0.625
density(C1,C3)=196000/(2000*100)=0.98
density(C2,C3)=994/(500*100)=0.01988
Step 2: Find the pair of columns with the lowest density. That pair is (C2,C3).
Determine which column of the fact table is not in that pair. That column is C1.
Step 3: Make C1 the third (last) column of the index, and remove it from the set
of columns under consideration.
Step 4: Repeat steps 1 through 3 to determine the second and first columns of the
index key:
density(C2)=433/500=0.866
density(C3)=100/100=1.0
The column with the lowest density is C2. Therefore, C3 is the second column of
the index. The remaining column, C2, is the first column of the index. That is, the
best order for the multi-column index is C2, C3, C1.
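The worked example can be scripted. This sketch (Python) implements steps 1 through 7 of the procedure above, using the example cardinalities, and reproduces the order C2, C3, C1:

```python
from itertools import combinations

# Sketch: derive the fact-table index column order for a star join
# (steps 1-7 above), using the cardinalities from the example.
fact_card = {("C1",): 2000, ("C2",): 433, ("C3",): 100,
             ("C1", "C2"): 625000, ("C1", "C3"): 196000,
             ("C2", "C3"): 994}
dim_card = {"C1": 2000, "C2": 500, "C3": 100}   # cardD1, cardD2, cardD3

def density(cols):
    # density(S) = fact-table cardinality of S divided by the product
    # of the corresponding dimension-table cardinalities
    prod = 1
    for c in cols:
        prod *= dim_card[c]
    return fact_card[tuple(sorted(cols))] / prod

s = ["C1", "C2", "C3"]
index_order = []                       # built from the last column forward
while len(s) > 1:
    # steps 2-3: find the (L-1)-column subset with the lowest density;
    # the column left out of that subset is Cm
    best = min(combinations(s, len(s) - 1), key=density)
    cm = next(c for c in s if c not in best)
    index_order.insert(0, cm)          # step 4: Cm becomes the Lth column
    s.remove(cm)                       # steps 5-6: remove Cm, decrement L
index_order.insert(0, s[0])            # the remaining column leads the index

assert index_order == ["C2", "C3", "C1"]
```

Running the loop by hand matches the text: C1 is placed last, then C3, leaving C2 as the first index column.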
| If you update catalog statistics for a table space or index manually, and you are
| using dynamic statement caching, you need to invalidate statements in the cache
| that involve those table spaces or indexes. To invalidate statements in the dynamic
| statement cache without updating catalog statistics or generating reports, you can
| run the RUNSTATS utility with the REPORT NO and UPDATE NONE options on the
| table space or the index that the query is dependent on.
The example shown in Figure 206 on page 719 involves this query:
SELECT * FROM PART_HISTORY -- SELECT ALL PARTS
WHERE PART_TYPE = ’BB’ P1 -- THAT ARE ’BB’ TYPES
AND W_FROM = 3 P2 -- THAT WERE MADE IN CENTER 3
AND W_NOW = 3 P3 -- AND ARE STILL IN CENTER 3
This query has a problem with data correlation. DB2 does not know that 50% of the
parts that were made in Center 3 are still in Center 3. The problem was
circumvented by making a predicate nonindexable. But suppose that hundreds of
users are writing queries similar to that query. Having all users change their queries
would be impossible. In this type of situation, the best solution is to change the
catalog statistics.
For the query in Figure 206 on page 719, you can update the catalog statistics in
one of two ways:
v Run the RUNSTATS utility, and request statistics on the correlated columns
W_FROM and W_NOW. This is the preferred method. See the discussion of
maintaining statistics in the catalog in Part 5 (Volume 2) of DB2 Administration
Guide and Part 2 of DB2 Utility Guide and Reference for more information.
v Update the catalog statistics manually.
Updating the catalog to adjust for correlated columns: One catalog table that
| you can update is SYSIBM.SYSCOLDIST, which gives information about a column
| or set of columns in a table. Assume that because columns W_NOW and W_FROM
are correlated, only 100 distinct values exist for the combination of the two columns,
rather than 2500 (50 for W_FROM * 50 for W_NOW). Insert a row into
SYSCOLDIST for the column group (W_FROM, W_NOW) with CARDF set to 100 to
record the new cardinality.
You can also use the RUNSTATS utility to put this information in SYSCOLDIST. See
DB2 Utility Guide and Reference for more information.
You tell DB2 about the frequency of a certain combination of column values by
updating SYSIBM.SYSCOLDIST. For example, you can indicate that 1% of the rows
in PART_HISTORY contain the values 3 for W_FROM and 3 for W_NOW by
inserting this row into SYSCOLDIST:
INSERT INTO SYSIBM.SYSCOLDIST
(FREQUENCY, FREQUENCYF, STATSTIME, IBMREQD,
TBOWNER, TBNAME, NAME, COLVALUE,
TYPE, CARDF, COLGROUPCOLNO, NUMCOLUMNS)
VALUES(0, .0100, ’1996-12-01-12.00.00.000000’,’N’,
’USRT001’,’PART_HISTORY’,’W_FROM’,X’00800000030080000003’,
’F’,-1,X’00040003’,2);
Updating the catalog for joins with table functions: Updating catalog statistics
might cause extreme performance problems if the statistics are not updated
correctly. Monitor performance, and be prepared to reset the statistics to their
original values if performance problems arise.
The best solution to the problem is to run RUNSTATS again after the table is
populated. However, if you cannot do that, you can use subsystem parameter
NPGTHRSH to cause DB2 to favor matching index access over a table space scan
and over nonmatching index access.
The value of NPGTHRSH is an integer that indicates the tables for which DB2
favors matching index access. Values of NPGTHRSH and their meanings are:
−1 DB2 favors matching index access for all tables.
0 DB2 selects the access path based on cost, and no tables qualify
for special handling. This is the default.
n>=1 If data access statistics have been collected for all tables, DB2
favors matching index access for tables for which the total number
of pages on which rows of the table appear is less than n.
| When you enable the INLISTP parameter, you enable two primary means of
| optimizing some queries that contain IN-list predicates:
| v The IN-list predicate is pushed down from the parent query block into the
| materialized table expression.
| v A correlated IN-list predicate in a subquery that is generated by transitive closure
| is moved up to the parent query block.
Other tools: The following tools can help you tune SQL queries:
v DB2 Visual Explain
Visual Explain is a graphical workstation feature of DB2 that provides:
– An easy-to-understand display of a selected access path
– Suggestions for changing an SQL statement
– An ability to invoke EXPLAIN for dynamic SQL statements
– An ability to provide DB2 catalog statistics for referenced objects of an access
path
– A subsystem parameter browser with keyword 'Find' capabilities
For information about using DB2 Visual Explain, which is a separately packaged
CD-ROM provided with your DB2 UDB for z/OS Version 8 license, see DB2
Visual Explain online help.
v DB2 Performance Expert
DB2 Performance Expert is a performance monitoring tool that formats
performance data. DB2 Performance Expert combines information from EXPLAIN
and from the DB2 catalog. It displays access paths, indexes, tables, table
spaces, plans, packages, DBRMs, host variable definitions, ordering, table
access and join sequences, and lock types. Output is presented in a dialog
rather than as a table, making the information easy to read and understand. DB2
Performance Monitor (DB2 PM) performs some of the functions of DB2
Performance Expert.
v DB2 Estimator
DB2 Estimator for Windows is an easy-to-use, stand-alone tool for estimating the
performance of DB2 UDB for z/OS applications. You can use it to predict the
performance and cost of running the applications, transactions, SQL statements,
triggers, and utilities. For instance, you can use DB2 Estimator for estimating the
impact of adding or dropping an index from a table, estimating the change in
For each access to a single table, EXPLAIN tells you if an index access or table
space scan is used. If indexes are used, EXPLAIN tells you how many indexes and
index columns are used and what I/O methods are used to read the pages. For
joins of tables, EXPLAIN tells you which join method and type are used, the order
in which DB2 joins the tables, and when and why it sorts any rows.
The primary use of EXPLAIN is to observe the access paths for the SELECT parts
of your statements. For UPDATE and DELETE WHERE CURRENT OF, and for
INSERT, you receive somewhat less information in your plan table. And some
accesses EXPLAIN does not describe: for example, the access to LOB values,
which are stored separately from the base table, and access to parent or dependent
tables needed to enforce referential constraints.
The access paths shown for the example queries in this chapter are intended only
to illustrate those examples. If you execute the queries in this chapter on your
system, the access paths chosen can be different.
Creating PLAN_TABLE
| Before you can use EXPLAIN, a PLAN_TABLE must be created to hold the results
| of EXPLAIN. A copy of the statements that are needed to create the table is in the
| DSNTESC member of the prefix.SDSNSAMP sample library.
| Figure 207 shows the most current format of a plan table, which consists of 58
| columns. Table 99 on page 730 shows the content of each column.
| Your plan table can use many other formats with fewer columns, as shown in
| Figure 208 on page 730. However, use the 58-column format because it gives you
| the most information. If you alter an existing plan table with fewer than 58 columns
| to the 58-column format:
| v If they exist, change the data type of columns: PROGNAME, CREATOR,
| TNAME, ACCESSTYPE, ACCESSNAME, REMARKS, COLLID,
| CORRELATION_NAME, IBM_SERVICE_DATA, OPTHINT, and HINT_USED. Use
| the values shown in Figure 207.
| v Add the missing columns to the table. Use the column definitions shown in
| Figure 207. For most columns added, specify NOT NULL WITH DEFAULT so that
| default values are included for the rows in the table. However, as the figure
| shows, certain columns do allow nulls. Do not specify those columns as NOT
| NULL WITH DEFAULT.
When the values of QUERYNO are based on the statement number in the source
program, values greater than 32767 are reported as 0. However, in a very long
program, the value is not guaranteed to be unique. If QUERYNO is not unique, the
value of TIMESTAMP is unique.
| QBLOCKNO A number that identifies each query block within a query. The values of the
| numbers are not in any particular order, nor are they necessarily consecutive.
APPLNAME The name of the application plan for the row. Applies only to embedded EXPLAIN
statements executed from a plan or to statements explained when binding a plan.
Blank if not applicable.
PROGNAME The name of the program or package containing the statement being explained.
Applies only to embedded EXPLAIN statements and to statements explained as the
result of binding a plan or package. Blank if not applicable.
PLANNO The number of the step in which the query indicated in QBLOCKNO was processed.
This column indicates the order in which the steps were executed.
The data in this column is right justified. For example, IX appears as a blank followed
by I followed by X. If the column contains a blank, then no lock is acquired.
TIMESTAMP Usually, the time at which the row is processed, to the last .01 second. If necessary,
DB2 adds .01 second to the value to ensure that rows for two successive queries
have different values.
REMARKS A field into which you can insert any character string of 762 or fewer characters.
| PREFETCH Whether data pages are to be read in advance by prefetch. S = pure sequential
| prefetch; L = prefetch through a page list; D = optimizer expects dynamic prefetch;
| blank = unknown or no prefetch.
COLUMN_FN_EVAL When an SQL aggregate function is evaluated. R = while the data is being read from
the table or index; S = while performing a sort to satisfy a GROUP BY clause; blank =
after data retrieval and after any sorts.
MIXOPSEQ The sequence number of a step in a multiple index operation.
1, 2, ... n For the steps of the multiple index procedure (ACCESSTYPE is MX,
MI, or MU.)
0 For any other rows (ACCESSTYPE is I, I1, M, N, R, or blank.)
VERSION The version identifier for the package. Applies only to an embedded EXPLAIN
statement executed from a package or to a statement that is explained when binding a
package. Blank if not applicable.
| COLLID The collection ID for the package. Applies only to an embedded EXPLAIN statement
| that is executed from a package or to a statement that is explained when binding a
| package. Blank if not applicable. The value DSNDYNAMICSQLCACHE indicates that
| the row is for a cached statement.
Note: The following nine columns, from ACCESS_DEGREE through CORRELATION_NAME, contain the null value if
the plan or package was bound using a plan table with fewer than 43 columns. Otherwise, each of them can contain
null if the method it refers to does not apply.
ACCESS_DEGREE The number of parallel tasks or operations activated by a query. This value is
determined at bind time; the actual number of parallel operations used at execution
time could be different. This column contains 0 if there is a host variable.
ACCESS_PGROUP_ID The identifier of the parallel group for accessing the new table. A parallel group is a
set of consecutive operations, executed in parallel, that have the same number of
parallel tasks. This value is determined at bind time; it could change at execution time.
| The value of the column is M when the table contains multiple CCSID sets.
For tips on maintaining a growing plan table, see “Maintaining a plan table” on page
736.
| If the plan owner or the package owner has an alias on a PLAN_TABLE that was
| created by another owner, other_owner.PLAN_TABLE is populated instead of
| package_owner.PLAN_TABLE or plan_owner.PLAN_TABLE.
EXPLAIN for remote binds: A remote requester that accesses DB2 can specify
EXPLAIN(YES) when binding a package at the DB2 server. The information
appears in a plan table at the server, not at the requester. If the requester does not
support the propagation of the option EXPLAIN(YES), rebind the package at the
requester with that option to obtain access path information. You cannot get
information about access paths for SQL statements that use private protocol.
All rows with the same non-zero value for QBLOCKNO and the same value for
QUERYNO relate to a step within the query. QBLOCKNOs are not necessarily
executed in the order shown in PLAN_TABLE. But within a QBLOCKNO, the
PLANNO column gives the substeps in the order they execute.
For each substep, the TNAME column identifies the table accessed. Sorts can be
shown as part of a table access or as a separate step.
What if QUERYNO=0? For entries that contain QUERYNO=0, use the timestamp,
which is guaranteed to be unique, to distinguish individual statements.
COLLID gives the COLLECTION name, and PROGNAME gives the PACKAGE_ID.
The following query to a plan table returns the rows for all the explainable
statements in a package in their logical order:
SELECT * FROM JOE.PLAN_TABLE
WHERE PROGNAME = ’PACK1’ AND COLLID = ’COLL1’ AND VERSION = ’PROD1’
ORDER BY QUERYNO, QBLOCKNO, PLANNO, MIXOPSEQ;
Both of the following examples have these indexes: IX1 on T(C1) and IX2 on T(C2).
In this case, the same index can be used more than once in a multiple index
access because more than one predicate could be matching. DB2 processes the
query by performing the following steps:
1. DB2 retrieves all RIDs where C1 is between 100 and 199, using index IX1.
2. DB2 retrieves all RIDs where C1 is between 500 and 599, again using IX1. The
union of those lists is the final set of RIDs.
3. DB2 retrieves the qualified rows by using the final RID list.
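The three steps can be sketched with plain sets standing in for DB2's RID lists. This is only an illustration of the union-of-RID-lists idea; the table contents and RID numbering are made up, and the original query (two BETWEEN ranges on C1, ORed together) is assumed from the steps above.

```python
# Sketch of multiple index access: build a RID list per index predicate,
# union the lists, and fetch rows by the final RID list (list prefetch).
# Python sets stand in for DB2's RID pool; values are hypothetical.
table = {rid: {"C1": c1} for rid, c1 in enumerate([50, 150, 199, 550, 700])}

rids1 = {rid for rid, row in table.items() if 100 <= row["C1"] <= 199}  # via IX1
rids2 = {rid for rid, row in table.items() if 500 <= row["C1"] <= 599}  # IX1 again
final = rids1 | rids2            # the union is the final RID list

rows = [table[rid] for rid in sorted(final)]   # retrieve qualified rows by RID
print(rows)
```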
In general, the matching predicates on the leading index columns are equal or IN
predicates. The predicate that matches the final index column can be an equal, IN,
NOT NULL, or range predicate (<, <=, >, >=, LIKE, or BETWEEN).
Index-only access to data is not possible for any step that uses list prefetch, which
is described under “What kind of prefetching is expected? (PREFETCH = L, S, D,
| or blank)” on page 744. Index-only access is not possible for padded indexes when
| varying-length data is returned or a VARCHAR column has a LIKE predicate, unless
| the VARCHAR FROM INDEX field of installation panel DSNTIP4 is set to YES and
| plan or packages have been rebound to pick up the change. See Part 2 of DB2
| Installation Guide for more information. Index-only access is always possible for
| nonpadded indexes.
If access is by more than one index, INDEXONLY is Y for a step with access type
MX, because the data pages are not actually accessed until all the steps for
intersection (MI) or union (MU) take place.
When an SQL application uses index-only access for a ROWID column, the
application claims the table space or table space partition. As a result, contention
may occur between the SQL application and a utility that drains the table space or
partition. Index-only access to a table for a ROWID column is not possible if the
associated table space or partition is in an incompatible restrictive state. For
example, an SQL application can make a read claim on the table space only if the
restrictive state allows readers.
Direct row access is very fast, because DB2 does not need to use the index or a
table space scan to find the row. Direct row access can be used on any table that
has a ROWID column.
To use direct row access, you first select the values of a row into host variables.
The value that is selected from the ROWID column contains the location of that
row. Later, when you perform queries that access that row, you include the row ID
value in the search condition. If DB2 determines that it can use direct row access, it
uses the row ID value to navigate directly to the row. See “Example: Coding with
row IDs for direct row access” on page 741 for a coding example.
Searching for propagated rows: If rows are propagated from one table to another,
do not expect to use the same row ID value from the source table to search for the
same row in the target table, or vice versa. This does not work when direct row
access is the access path chosen.
Example: Assume that the host variable in the following statement contains a row
ID from SOURCE:
SELECT * FROM TARGET
WHERE ID = :hv_rowid
Because the row ID location is not the same as in the source table, DB2 will
probably not find that row. Search on another column to retrieve the row you want.
Reverting to ACCESSTYPE
Although DB2 might plan to use direct row access, circumstances can cause DB2
to not use direct row access at run time. DB2 remembers the location of the row as
of the time it is accessed. However, that row can change locations (such as after a
REORG) between the first and second time it is accessed, which means that DB2
cannot use direct row access to find the row on the second access attempt. Instead
of using direct row access, DB2 uses the access path that is shown in the
ACCESSTYPE column of PLAN_TABLE.
If the predicate you are using to do direct row access is not indexable and if DB2 is
unable to use direct row access, then DB2 uses a table space scan to find the row.
This can have a profound impact on the performance of applications that rely on
direct row access. Write your applications to handle the possibility that direct row
access might not be used. Some options are to:
v Ensure that your application does not try to remember ROWID columns across
reorganizations of the table space.
When your application commits, it releases its claim on the table space; it is
possible that a REORG can run and move the row, which disables direct row
access. Plan your commit processing accordingly; use the returned row ID value
before committing, or re-select the row ID value after a commit is issued.
If you are storing ROWID columns from another table, update those values after
the table with the ROWID column is reorganized.
If an index exists on EMPNO, DB2 can use index access if direct access fails.
The additional predicate ensures DB2 does not revert to a table space scan.
RID list processing: Direct row access and RID list processing are mutually
exclusive. If a query qualifies for both direct row access and RID list processing,
direct row access is used. If direct row access fails, DB2 does not revert to RID list
processing; instead it reverts to the backup access type.
/**********************************************************/
/* Retrieve the picture and resume from the PIC_RES table */
/**********************************************************/
strcpy(hv_name, "Jones");
EXEC SQL SELECT PR.PICTURE, PR.RESUME INTO :hv_picture, :hv_resume
FROM PIC_RES PR
WHERE PR.Name = :hv_name;
/**********************************************************/
/* Insert a row into the EMPDATA table that contains the */
/* picture and resume you obtained from the PIC_RES table */
/**********************************************************/
EXEC SQL INSERT INTO EMPDATA
VALUES (DEFAULT,9999,’Jones’, 35000.00, 99,
:hv_picture, :hv_resume);
/**********************************************************/
/* Now retrieve some information about that row, */
/* including the ROWID value. */
/**********************************************************/
hv_dept = 99;
EXEC SQL SELECT E.SALARY, E.EMP_ROWID
INTO :hv_salary, :hv_emp_rowid
FROM EMPDATA E
WHERE E.DEPTNUM = :hv_dept AND E.NAME = :hv_name;
Figure 209. Example of using a row ID value for direct row access (Part 1 of 2)
/**********************************************************/
/* Use the ROWID value to obtain the employee ID from the */
/* same record. */
/**********************************************************/
EXEC SQL SELECT E.ID INTO :hv_id
FROM EMPDATA E
WHERE E.EMP_ROWID = :hv_emp_rowid;
/**********************************************************/
/* Use the ROWID value to delete the employee record */
/* from the table. */
/**********************************************************/
EXEC SQL DELETE FROM EMPDATA
WHERE EMP_ROWID = :hv_emp_rowid;
Figure 209. Example of using a row ID value for direct row access (Part 2 of 2)
A limited partition scan can be combined with other access methods. For example,
consider the following query:
SELECT .. FROM T
WHERE (C1 BETWEEN ’2002’ AND ’3280’
OR C1 BETWEEN ’6000’ AND ’8000’)
AND C2 = ’6’;
Assume that table T has a partitioned index on column C1 and that values of C1
between 2002 and 3280 all appear in partitions 3 and 4 and the values between
Generally, values of R and S are considered better for performance than a blank.
Use variance and standard deviation with care: The VARIANCE and STDDEV
functions are always evaluated late (that is, COLUMN_FN_EVAL is blank). This
causes other functions in the same query block to be evaluated late as well. For
example, in the following query, the sum function is evaluated later than it would be
if the variance function was not present:
SELECT SUM(C1), VARIANCE(C1) FROM T1;
| Table 102 shows the corresponding plan table for the WHEN clause.
| Table 102. Plan table for the WHEN clause
| QBLOCKNO  PLANNO  TABLE  ACCESSTYPE  QBLOCK_TYPE  PARENT_QBLOCK
| 1         1                          TRIGGR       0
| 2         1       NT     R           NCOSUB       1
Interpreting access to a single table
The following sections describe different access paths that values in a plan table
can indicate, along with suggestions for supplying better access paths for DB2 to
choose from:
v Table space scans (ACCESSTYPE=R PREFETCH=S)
v “Index access paths” on page 747
v “UPDATE using an index” on page 752
Assume that table T has no index on C1. The following is an example that uses a
table space scan:
SELECT * FROM T WHERE C1 = VALUE;
In this case, at least every row in T must be examined to determine whether the
value of C1 matches the given value.
If you do not want to use sequential prefetch for a particular query, consider adding
to it the clause OPTIMIZE FOR 1 ROW.
In the general case, the rules for determining the number of matching columns are
simple, although there are a few exceptions.
v Look at the index columns from leading to trailing. For each index column,
search for an indexable boolean term predicate on that column. (See “Properties
of predicates” on page 675 for a definition of boolean term.) If such a predicate is
found, then it can be used as a matching predicate.
Column MATCHCOLS in a plan table shows how many of the index columns are
matched by predicates.
v If no matching predicate is found for a column, the search for matching
predicates stops.
v If a matching predicate is a range predicate, then there can be no more matching
columns. For example, in the matching index scan example that follows, the
range predicate C2>1 prevents the search for additional matching columns.
v For star joins, a missing key predicate does not cause termination of matching
columns that are to be used on the fact table index.
Two matching columns occur in this example. The first one comes from the
predicate C1=1, and the second one comes from C2>1. The range predicate on C2
prevents C3 from becoming a matching column.
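The matching-columns rules above can be sketched as a small function. This is an illustration of the counting rules only, not DB2's optimizer logic; the predicate-kind encoding ('equal', 'in', 'range') is a hypothetical simplification.

```python
# Sketch of the MATCHCOLS rules: walk the index key left to right, count
# columns that have an indexable boolean-term predicate, stop when a column
# has no such predicate, and stop counting after the first range predicate.
def matching_columns(index_cols, preds):
    """preds maps column name -> 'equal', 'in', or 'range' (hypothetical encoding)."""
    match = 0
    for col in index_cols:
        kind = preds.get(col)
        if kind is None:          # no matching predicate on this column: stop
            break
        match += 1
        if kind == "range":      # a range predicate ends the matching columns
            break
    return match

# The example above: index on (C1, C2, C3), predicates C1 = 1 AND C2 > 1
print(matching_columns(["C1", "C2", "C3"], {"C1": "equal", "C2": "range"}))
```

With the example's predicates the function reports two matching columns, as the text describes.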
Index screening
In index screening, predicates are specified on index key columns but are not part
of the matching columns. Those predicates improve the index access by reducing
the number of rows that qualify while searching the index. For example, with an
index on T(C1,C2,C3,C4) in the following SQL statement, C3>0 and C4=2 are index
screening predicates.
SELECT * FROM T
WHERE C1 = 1
AND C3 > 0 AND C4 = 2
AND C5 = 8;
You can regard the IN-list index scan as a series of matching index scans with the
values in the IN predicate being used for each matching index scan. The following
example has an index on (C1,C2,C3,C4) and might use an IN-list index scan:
SELECT * FROM T
WHERE C1=1 AND C2 IN (1,2,3)
AND C3>0 AND C4<100;
The plan table shows MATCHCOLS = 3 and ACCESSTYPE = N. The IN-list scan is
performed as the following three matching index scans:
(C1=1,C2=1,C3>0), (C1=1,C2=2,C3>0), (C1=1,C2=3,C3>0)
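The fan-out of an IN-list into separate matching scans can be sketched as follows. This is only an illustration of the expansion; the representation of predicates as (column, values) pairs is a hypothetical simplification, not a DB2 structure.

```python
# Sketch of viewing an IN-list index scan as a series of matching index
# scans, one per combination of IN-list values (illustrative, not DB2 code).
from itertools import product

def expand_in_lists(eq_preds):
    """eq_preds: list of (column, values) in index-key order; each IN list
    fans out into one matching scan per value."""
    scans = []
    for combo in product(*[vals for _, vals in eq_preds]):
        scans.append({col: v for (col, _), v in zip(eq_preds, combo)})
    return scans

# C1=1 AND C2 IN (1,2,3) from the example query above
print(expand_in_lists([("C1", [1]), ("C2", [1, 2, 3])]))
```

The three dictionaries printed correspond to the three matching index scans shown in the text (the C3>0 range predicate applies within each scan).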
Parallelism is supported for queries that involve IN-list index access. These queries
used to run sequentially in previous releases of DB2, although parallelism could
have been used when the IN-list access was for the inner table of a parallel group.
Now, in environments in which parallelism is enabled, you can see a reduction in
elapsed time for queries that involve IN-list index access for the outer table of a
parallel group.
RID lists are constructed for each of the indexes involved. The unions or
intersections of the RID lists produce a final list of qualified RIDs that is used to
retrieve the result rows, using list prefetch. You can consider multiple index access
The plan table contains a sequence of rows describing the access. For this query,
ACCESSTYPE uses the following values:
Value Meaning
M Start of multiple index access processing
MX Indexes are to be scanned for later union or intersection
MI An intersection (AND) is performed
MU A union (OR) is performed
The following steps relate to the previous query and the values shown for the plan
table in Table 104:
1. Index EMPX1, with matching predicate AGE = 34, provides a set of candidates
for the result of the query. The value of MIXOPSEQ is 1.
2. Index EMPX1, with matching predicate AGE = 40, also provides a set of
candidates for the result of the query. The value of MIXOPSEQ is 2.
3. Index EMPX2, with matching predicate JOB=’MANAGER’, also provides a set of
candidates for the result of the query. The value of MIXOPSEQ is 3.
4. The first intersection (AND) is done, and the value of MIXOPSEQ is 4. This MI
removes the two previous candidate lists (produced by MIXOPSEQs 2 and 3)
by intersecting them to form an intermediate candidate list, IR1, which is not
shown in PLAN_TABLE.
5. The last step, where the value MIXOPSEQ is 5, is a union (OR) of the two
remaining candidate lists, which are IR1 and the candidate list produced by
MIXOPSEQ 1. This final union gives the result for the query.
Table 104. Plan table output for a query that uses multiple indexes. Depending on the filter
factors of the predicates, the access steps can appear in a different order.
PLANNO  TNAME  ACCESSTYPE  MATCHCOLS  ACCESSNAME  PREFETCH  MIXOPSEQ
1       EMP    M           0                      L         0
1       EMP    MX          1          EMPX1                 1
1       EMP    MX          1          EMPX1                 2
1       EMP    MI          0                                3
1       EMP    MX          1          EMPX2                 4
1       EMP    MU          0                                5
In this example, the steps in the multiple index access follow the physical sequence
of the predicates in the query. This is not always the case. The multiple index steps
are arranged in an order that uses RID pool storage most efficiently and for the
least amount of time.
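The five MIXOPSEQ steps can be sketched with sets as candidate RID lists. This is only an illustration of the intersect-then-union sequence; the RID values are made up.

```python
# Sketch of the five MIXOPSEQ steps described above (illustrative only).
rid_age34 = {1, 2, 3}            # step 1: EMPX1, AGE = 34
rid_age40 = {4, 5, 6}            # step 2: EMPX1, AGE = 40
rid_mgr   = {2, 5, 9}            # step 3: EMPX2, JOB = 'MANAGER'

ir1 = rid_age40 & rid_mgr        # step 4: MI intersects the lists from steps 2 and 3
result = rid_age34 | ir1         # step 5: MU unions step 1 with IR1

print(sorted(result))            # final qualifying RIDs
```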
With an index on T(C1,C2), the following queries can use index-only access:
SELECT C1, C2 FROM T WHERE C1 > 0;
SELECT C1, C2 FROM T;
SELECT COUNT(*) FROM T WHERE C1 = 1;
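The condition these queries satisfy can be sketched as a simple set test: index-only access is possible when every column the query references is part of the index key. This is an illustration of the basic rule only; it ignores the padded-index and VARCHAR exceptions discussed earlier.

```python
# Sketch of the basic index-only test: all referenced columns are in the key.
def index_only(referenced_cols, index_cols):
    return set(referenced_cols) <= set(index_cols)

# With an index on T(C1, C2):
print(index_only({"C1", "C2"}, {"C1", "C2"}))        # SELECT C1, C2 ... qualifies
print(index_only({"C1", "C2", "C3"}, {"C1", "C2"}))  # needs C3: data pages required
```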
Sometimes DB2 can determine that an index that is not fully matching is actually an
equal unique index case. Assume the following case:
SELECT C3 FROM T
WHERE C1 = 1 AND C2 = 5;
Index1 is a fully matching equal unique index. However, Index2 is also an equal
unique index even though it is not fully matching. Index2 is the better choice
because, in addition to being equal and unique, it also provides index-only access.
To use a matching index scan to update an index in which its key columns are
being updated, the following conditions must be met:
v Each updated key column must have a corresponding predicate of the form
″index_key_column = constant″ or ″index_key_column IS NULL″.
v If a view is involved, WITH CHECK OPTION must not be specified.
| For updates that do not involve dynamic scrollable cursors, DB2 can use list
| prefetch, multiple index access, or IN-list access. With list prefetch or multiple index
access, any index or indexes can be used in an UPDATE operation. Of course, to
be chosen, those access paths must provide efficient access to the data.
| A positioned update that uses a dynamic scrollable cursor cannot use an access
| path with list prefetch or multiple index access. This means that indexes that do not
| meet the preceding criteria cannot be used to locate the rows to be updated.
This section begins with “Definitions and examples of join operations” and continues
with descriptions of the methods of joining that can be indicated in a plan table:
v “Nested loop join (METHOD=1)” on page 755
v “Merge scan join (METHOD=2)” on page 757
v “Hybrid join (METHOD=4)” on page 758
v “Star join (JOIN_TYPE=’S’)” on page 760
The new table (or inner table) in a join operation is the table that is newly accessed
in the step.
A join operation can involve more than two tables. In these cases, the operation is
carried out in a series of steps. For non-star joins, each step joins only two tables.
[Figure: the composite table TJ is joined to the new table TK with a nested loop
join (method 1); the composite result is sorted into a work file and joined to the
new table TL with a merge scan join (method 2), producing the final result.]
Table 105 and Table 106 show a subset of columns in a plan table for this join
operation.
Table 105. Subset of columns for a two-step join operation
METHOD  TNAME  ACCESSTYPE  MATCHCOLS  ACCESSNAME  INDEXONLY  TSLOCKMODE
0       TJ     I           1          TJX1        N          IS
1       TK     I           1          TKX1        N          IS
2       TL     I           0          TLX1        Y          S
3                          0                      N
Definitions: A join operation typically matches a row of one table with a row of
another on the basis of a join condition. For example, the condition might specify
that the value in column A of one table equals the value of column X in the other
table (WHERE T1.A = T2.X).
Example: Suppose that you issue the following statement to explain an outer join:
EXPLAIN PLAN SET QUERYNO = 10 FOR
SELECT PROJECT, COALESCE(PROJECTS.PROD#, PRODNUM) AS PRODNUM,
PRODUCT, PART, UNITS
FROM PROJECTS LEFT JOIN
(SELECT PART,
COALESCE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM,
PRODUCTS.PRODUCT
FROM PARTS FULL OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#) AS TEMP
ON PROJECTS.PROD# = PRODNUM
Table 108 shows a subset of the plan table for the outer join.
Table 108. Plan table output for an example with outer joins
QUERYNO QBLOCKNO PLANNO TNAME JOIN_TYPE
10 1 1 PROJECTS
10 1 2 TEMP L
10 2 1 PRODUCTS
10 2 2 PARTS F
Column JOIN_TYPE identifies the type of outer join with one of these values:
v F for FULL OUTER JOIN
v L for LEFT OUTER JOIN
v Blank for INNER JOIN or no join
At execution, DB2 converts every right outer join to a left outer join; thus
JOIN_TYPE never identifies a right outer join specifically.
Materialization with outer join: Sometimes DB2 has to materialize a result table
when an outer join is used in conjunction with other joins, views, or nested table
expressions. You can tell when this happens by looking at the TABLE_TYPE and
TNAME columns of the plan table. When materialization occurs, TABLE_TYPE
contains a W, and TNAME shows the name of the materialized table as
DSNWFQB(xx), where xx is the number of the query block (QBLOCKNO) that
produced the work file.
SELECT A, B, X, Y
FROM (SELECT A, B FROM OUTERT WHERE A=10)
LEFT JOIN INNERT ON B=X;
Method of joining
DB2 scans the composite (outer) table. For each row in that table that qualifies (by
satisfying the predicates on that table), DB2 searches for matching rows of the new
(inner) table. It concatenates any it finds with the current row of the composite
table. If no rows match the current row, then:
v For an inner join, DB2 discards the current row.
v For an outer join, DB2 concatenates a row of null values.
Stage 1 and stage 2 predicates eliminate unqualified rows during the join. (For an
explanation of those types of predicate, see “Stage 1 and stage 2 predicates” on
page 677.) DB2 can scan either table using any of the available access methods,
including table space scan.
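The method can be sketched as a pair of nested loops. This is only an illustration of the join logic, not DB2 code; the rows are dicts, the join condition is a caller-supplied function, and the null padding for the outer join hardcodes the hypothetical inner columns X and Y for brevity.

```python
# Minimal sketch of the nested loop join described above, including the
# outer-join case that concatenates a row of nulls (None) when no inner
# row matches the current composite row.
def nested_loop_join(outer, inner, on, outer_join=False):
    """on(o, i) -> bool is the join condition; rows are dicts."""
    result = []
    for o in outer:                       # scan the composite (outer) table once
        matched = False
        for i in inner:                   # scan the new (inner) table per outer row
            if on(o, i):
                result.append({**o, **i})
                matched = True
        if outer_join and not matched:    # left outer join: pad with nulls
            result.append({**o, "X": None, "Y": None})
    return result

# WHERE A=10 AND B=X, preserving the unmatched row with A=10, B=6
outer = [{"A": 10, "B": 6}, {"A": 10, "B": 3}]
inner = [{"X": 3, "Y": "a"}]
print(nested_loop_join(outer, inner, lambda o, i: o["B"] == i["X"], outer_join=True))
```

For an inner join the same loop simply discards the unmatched composite row instead of padding it, as the text notes.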
Performance considerations
The nested loop join repetitively scans the inner table. That is, DB2 scans the outer
table once, and scans the inner table as many times as the number of qualifying
rows in the outer table. Therefore, the nested loop join is usually the most efficient
join method when the values of the join column passed to the inner table are in
sequence and the index on the join column of the inner table is clustered, or the
number of rows retrieved in the inner table through the index is small.
Example: left outer join: Figure 211 on page 755 illustrates a nested loop for a
left outer join. The outer join preserves the unmatched row in OUTERT with values
A=10 and B=6. The same join method for an inner join differs only in discarding that
row.
Example: one-row table priority: For a case like the following example, with a
unique index on T1.C2, DB2 detects that T1 has only one row that satisfies the
search condition. DB2 makes T1 the first table in a nested loop join.
SELECT * FROM T1, T2
WHERE T1.C1 = T2.C1 AND
T1.C2 = 5;
Example: Cartesian join with small tables first: A Cartesian join is a form of
nested loop join in which there are no join predicates between the two tables. DB2
usually avoids a Cartesian join, but sometimes it is the most efficient method, as in
the following example. The query uses three tables: T1 has 2 rows, T2 has 3 rows,
and T3 has 10 million rows.
SELECT * FROM T1, T2, T3
WHERE T1.C1 = T3.C1 AND
T2.C2 = T3.C2 AND
T3.C3 = 5;
Join predicates are between T1 and T3 and between T2 and T3. There is no join
predicate between T1 and T2.
Assume that 5 million rows of T3 have the value C3=5. Processing time is large if
T3 is the outer table of the join and tables T1 and T2 are accessed for each of 5
million rows.
However, if all rows from T1 and T2 are joined, without a join predicate, the 5 million
rows are accessed only six times, once for each row in the Cartesian join of T1 and
T2. It is difficult to say which access path is the most efficient. DB2 evaluates the
different options and could decide to access the tables in the sequence T1, T2, T3.
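The arithmetic behind this choice can be checked directly. The row counts come from the example; the access counts are a simplification for illustration, not DB2's actual cost model:

```python
t1_rows, t2_rows = 2, 3
t3_qualifying = 5_000_000            # rows of T3 that satisfy C3 = 5

# T1 x T2 first: the Cartesian composite has only 6 rows, so the
# 5 million qualifying T3 rows are accessed once per composite row.
composite_rows = t1_rows * t2_rows

# T3 as outer instead: T1 and T2 would each be probed once for
# every one of the 5 million qualifying T3 rows.
probes_if_t3_outer = t3_qualifying * 2
```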
Sorting the composite table: Your plan table could show a nested loop join that
includes a sort on the composite table. DB2 might sort the composite table (the
outer table in Figure 211) if the following conditions exist:
v The join columns in the composite table and the new table are not in the same
sequence.
v The join column of the composite table has no index.
v The index is poorly clustered.
Nested loop join with a sorted composite table has the following performance
advantages:
v Uses sequential detection efficiently to prefetch data pages of the new table,
reducing the number of synchronous I/O operations and the elapsed time.
v Avoids repetitive full probes of the inner table index by using the index
look-aside.
Method of joining
Figure 212 illustrates a merge scan join.
SELECT A, B, X, Y
FROM OUTER, INNER
WHERE A=10 AND B=X;
DB2 scans both tables in the order of the join columns. If no efficient indexes on
the join columns provide the order, DB2 might sort the outer table, the inner table,
or both. The inner table is put into a work file; the outer table is put into a work file
only if it must be sorted. When a row of the outer table matches a row of the inner
table, DB2 returns the combined rows.
DB2 then reads another row of the inner table that might match the same row of
the outer table and continues reading rows of the inner table as long as there is a
match. When there is no longer a match, DB2 reads another row of the outer table.
v If that row has the same value in the join column, DB2 reads again the matching
group of records from the inner table. Thus, a group of duplicate records in the
inner table is scanned as many times as there are matching records in the outer
table.
v If the outer row has a new value in the join column, DB2 searches ahead in the
inner table. It can find any of the following rows:
– Unmatched rows in the inner table, with lower values in the join column.
– A new matching inner row. DB2 then starts the process again.
– An inner row with a higher value of the join column. Now the row of the outer
table is unmatched. DB2 searches ahead in the outer table, and can find any
of the following rows:
- Unmatched rows in the outer table.
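The matching logic described in these steps can be sketched as follows. This Python model (with made-up sorted inputs) shows in particular how a duplicate group in the inner table is rescanned once per matching outer row:

```python
def merge_scan_join(outer, inner):
    """Model of a merge scan join over inputs sorted on the join
    column. Each element is a (join_value, payload) pair."""
    result = []
    start = 0                          # start of the current inner group
    for okey, oval in outer:
        # search ahead past unmatched inner rows with lower values
        while start < len(inner) and inner[start][0] < okey:
            start += 1
        # read the matching inner group; a duplicate value in the
        # outer table causes this group to be scanned again
        j = start
        while j < len(inner) and inner[j][0] == okey:
            result.append((okey, oval, inner[j][1]))
            j += 1
    return result

outer = [(1, "a"), (2, "b"), (2, "c")]               # sorted on join column
inner = [(1, "x"), (2, "y"), (2, "z"), (3, "w")]     # sorted on join column
rows = merge_scan_join(outer, inner)
```

The inner group (2, "y"), (2, "z") is read twice, once for each of the two outer rows with join value 2, exactly as the first bullet above describes.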
Performance considerations
A full outer join by this method uses all predicates in the ON clause to match the
two tables and reads every row at the time of the join. Inner and left outer joins use
only stage 1 predicates in the ON clause to match the tables. If your tables match
on more than one column, it is generally more efficient to put all the predicates for
the matches in the ON clause, rather than to leave some of them in the WHERE
clause.
For an inner join, DB2 can derive extra predicates for the inner table at bind time
and apply them to the sorted outer table to be used at run time. The predicates can
reduce the size of the work file needed for the inner table.
If DB2 has used an efficient index on the join columns to retrieve the rows of the
inner table, those rows are already in sequence. DB2 puts the data directly into the
work file without sorting the inner table, which reduces the elapsed time.
Figure 213. Hybrid join. The figure shows the steps of the method: (1) the outer
table (OUTER, columns A and B) is scanned; (2) its rows are joined to the index on
the inner table (INNER, columns X and Y) on X=B, producing a phase 1
intermediate table of outer data and inner RIDs; (3) the RID list is sorted into
ascending page-number order, producing the phase 2 intermediate table; (4) list
prefetch retrieves the inner rows in page order; (5) the matched rows form the
composite table (columns A, B, X, Y).
Method of joining
The method requires obtaining RIDs in the order needed to use list prefetch. The
steps are shown in Figure 213. In that example, both the outer table (OUTER) and
the inner table (INNER) have indexes on the join columns.
Performance considerations
Hybrid join uses list prefetch more efficiently than nested loop join, especially if
there are indexes on the join predicate with low cluster ratios. It also processes
duplicates more efficiently because the inner table is scanned only once for each
set of duplicate values in the join column of the outer table.
If the index on the inner table is highly clustered, there is no need to sort the
intermediate table (SORTN_JOIN=N). The intermediate table is placed in a table in
memory rather than in a work file.
You can think of the fact table, which is much larger than the dimension tables, as
being in the center surrounded by dimension tables; the result resembles a star
formation. Figure 214 on page 761 illustrates the star formation.
Dimension Dimension
table table
Fact table
Dimension Dimension
table table
Figure 214. Star schema with a fact table and dimension tables
| Unlike the steps in the other join methods (nested loop join, merge scan join, and
| hybrid join) in which only two tables are joined in each step, a step in the star join
| method can involve three or more tables. Dimension tables are joined to the fact
| table via a multi-column index that is defined on the fact table. Therefore, having a
| well-defined, multi-column index on the fact table is critical for efficient star join
| processing.
In this scenario, the sales table contains three columns with IDs from the dimension
tables for time, product, and location instead of three columns for time, three
columns for products, and two columns for location. Thus, the size of the fact table
is greatly reduced. In addition, if you needed to change an item, you would do it
once in a dimension table instead of several times for each instance of the item in
the fact table.
| You can set the subsystem parameter STARJOIN by using the STAR JOIN
| QUERIES field on the DSNTIP8 installation panel.
v The number of tables in the star schema query block, including the fact table,
dimension tables, and snowflake tables, meets the requirements that are
specified by the value of subsystem parameter SJTABLES. The value of
SJTABLES is considered only if the subsystem parameter STARJOIN qualifies
the query for star join. The values of SJTABLES are:
Examples: query with three dimension tables: Suppose that you have a store
in San Jose and want information about sales of audio equipment from that store in
2000. For this example, you want to join the following tables:
v A fact table for SALES (S)
v A dimension table for TIME (T) with columns for an ID, month, quarter, and year
v A dimension table for geographic LOCATION (L) with columns for an ID, city,
region, and country
v A dimension table for PRODUCT (P) with columns for an ID, product item, class,
and inventory
All snowflakes are processed before the central part of the star join, as individual
query blocks, and are materialized into work files. There is a work file for each
snowflake. The EXPLAIN output identifies these work files by naming them
DSN_DIM_TBLX(nn), where nn indicates the corresponding QBLOCKNO for the
snowflake.
This next example shows the plan for a star join that contains two snowflakes.
Suppose that two new tables MANUFACTURER (M) and COUNTRY (C) are added
to the tables in the previous example to break dimension tables PRODUCT (P) and
LOCATION (L) into snowflakes:
v The PRODUCT table has a new column MID that represents the manufacturer.
v Table MANUFACTURER (M) has columns for MID and name to contain
manufacturer information.
v The LOCATION table has a new column CID that represents the country.
v Table COUNTRY (C) has columns for CID and name to contain country
information.
You could write the following query to join all the tables:
SELECT *
FROM SALES S, TIME T, PRODUCT P, MANUFACTURER M,
LOCATION L, COUNTRY C
WHERE S.TIME = T.ID AND
S.PRODUCT = P.ID AND
P.MID = M.MID AND
S.LOCATION = L.ID AND
L.CID = C.CID AND
T.YEAR = 2000 AND
M.NAME = ’some_company’;
The joins in the snowflakes are processed first, and each snowflake is materialized
into a work file. Therefore, when the main star join block (QBLOCKNO=1) is
processed, it contains four tables: SALES (the fact table), TIME (a base dimension
table), and the two snowflake work files.
In this example, in the main star join block, the star join method is used for the first
three tables (as indicated by S in the JOIN TYPE column of the plan table) and the
remaining work file is joined by the nested loop join with sparse index access on
the work file (as indicated by T in the ACCESSTYPE column for
DSN_DIM_TBLX(3)).
| To determine the size of the virtual memory pool, perform the following steps:
| 1. Determine the value of A. Estimate the number of star join queries that run
| concurrently.
| 2. Determine the value of B. Estimate the average number of work files that a star
| join query uses. In typical cases, with highly normalized star schemas, the
| average number is about three to six work files.
| 3. Determine the value of C. Estimate the number of work-file rows, the maximum
| length of the key, and the total of the maximum length of the relevant columns.
| Multiply these three values together to find the size of the data caching space
| for the work file, or the value of C.
| 4. Multiply (A) * (B) * (C) to determine the size of the pool in MB.
| The default virtual memory pool size is 20 MB. To set the pool size, use the
| SJMXPOOL parameter on the DSNTIP4 installation panel.
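The four steps can be condensed into a small sizing helper. This is only a sketch of the calculation described above; the text leaves the units of C implicit, so the conversion from bytes to MB is an assumption here, and the sample figures in the test (2800 rows, key length 4, a 200-byte data length) are made up for illustration:

```python
def star_join_pool_mb(concurrent_queries, avg_work_files,
                      rows, max_key_len, max_data_len):
    """Estimate the dedicated virtual memory pool size in MB:
    A (concurrent star join queries) x B (average work files per
    query) x C (data caching space for one work file)."""
    c_bytes = rows * max_key_len * max_data_len   # step 3: value of C
    c_mb = c_bytes / (1024 * 1024)                # assumed byte-to-MB conversion
    return concurrent_queries * avg_work_files * c_mb
```

An estimate well above the 20 MB default would suggest raising SJMXPOOL.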
| Example: The following example shows how to determine the size of the virtual
| memory pool. Suppose that you issue the following star join query, where SALES is
| the fact table:
| To determine the size of the dedicated virtual memory pool, perform the following
| steps:
| 1. Determine the value of A. Estimate the number of star join queries that run
| concurrently.
| In this example, based on the type of operation, up to 12 star join queries are
| expected to run concurrently. Therefore, A = 12.
| 2. Determine the value of B. Estimate the average number of work files that a star
| join query uses.
| In this example, the star join query uses two work files, PROD and
| DSN_DIM_TBLX(02). Therefore B = 2.
| 3. Determine the value of C. Estimate the number of work-file rows, the maximum
| length of the key, and the total of the maximum length of the relevant columns.
| Multiply these three values together to find the size of the data caching space
| for the work file, or the value of C.
| Both PROD and DSN_DIM_TBLX(02) are used to determine the value of C.
| Recommendation: Average the values for a representative sample of work
| files, and round the value up to determine an estimate for a value of C.
| v The number of work-file rows depends on the number of rows that match the
| predicate. For PROD, 87 rows are stored in the work file because 87 rows
| match the IN-list predicate. No selective predicate is used for
| DSN_DIM_TBLX(02), so the entire result of the join is stored in the work file.
| The work file for DSN_DIM_TBLX(02) holds 2800 rows.
| v The maximum length of the key depends on the data type definition of the
| table’s key column. For PID, the key column for PROD, the maximum length
| is 4. DSN_DIM_TBLX(02) is a work file that results from the join of LOC and
| SCOUN. The key column that is used in the join is LID from the LOC table.
| The maximum length of LID is 4.
| v The maximum data length depends on the maximum length of the key
| column and the maximum length of the column that is selected as part of the
If DB2 does not choose prefetch at bind time, it can sometimes use it at execution
time nevertheless. The method is described in “Sequential detection at execution
time” on page 769.
For certain utilities (LOAD, REORG, RECOVER), the prefetch quantity can be twice
as much.
For an index scan that accesses eight or more consecutive data pages, DB2
requests sequential prefetch at bind time. The index must have a cluster ratio of
80% or higher. Both data pages and index pages are prefetched.
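These two conditions can be written as a simple predicate. This is a sketch of the stated rule only, not the full optimizer decision:

```python
def sequential_prefetch_at_bind(consecutive_data_pages, cluster_ratio):
    """DB2 requests sequential prefetch at bind time for an index
    scan of eight or more consecutive data pages when the index
    cluster ratio is 80% or higher."""
    return consecutive_data_pages >= 8 and cluster_ratio >= 0.80
```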
List prefetch can be used in conjunction with either single or multiple index access.
In a hybrid join, if the index is highly clustered, the page numbers might not be
sorted before accessing the data.
List prefetch can be used with most matching predicates for an index scan. IN-list
predicates are the exception; they cannot be the matching predicates when list
prefetch is used.
During execution, DB2 ends list prefetching if more than 25% of the rows in the
table (with a minimum of 4075) must be accessed. Record IFCID 0125 in the
performance trace, mapped by macro DSNDQW01, indicates whether list prefetch
ended.
When list prefetch ends, the query continues processing by a method that depends
on the current access path.
v For access through a single index or through the union of RID lists from two
indexes, processing continues by a table space scan.
v For index access before forming an intersection of RID lists, processing
continues with the next step of multiple index access. If no step remains and no
RID list has been accumulated, processing continues by a table space scan.
When DB2 forms an intersection of RID lists, if any list has 32 or fewer RIDs,
intersection stops and the list of 32 or fewer RIDs is used to access the data.
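The two RID-list rules above — the 25% (minimum 4075 rows) cutoff that ends list prefetch, and the 32-RID short-circuit for intersection — can be sketched as follows (a model of the stated rules only):

```python
def list_prefetch_ends(rows_accessed, table_rows):
    """List prefetch ends when more than 25% of the table's rows,
    with a minimum of 4075, must be accessed."""
    return rows_accessed > max(table_rows * 0.25, 4075)

def intersect_rid_lists(rid_lists):
    """If any list has 32 or fewer RIDs, intersection stops and that
    short list is used to access the data (remaining predicates are
    applied to the fetched rows); otherwise intersect all lists."""
    for rids in rid_lists:
        if len(rids) <= 32:
            return sorted(rids)        # short list used directly
    result = set(rid_lists[0])
    for rids in rid_lists[1:]:
        result &= set(rids)
    return sorted(result)
```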
If a table is accessed repeatedly using the same statement (for example, DELETE
in a do-while loop), the data or index leaf pages of the table can be accessed
sequentially. This is common in a batch processing environment. Sequential
detection can then be used if access is through:
v SELECT or FETCH statements
v UPDATE and DELETE statements
v INSERT statements when existing data pages are accessed sequentially
DB2 can use sequential detection if it did not choose sequential prefetch at bind
time because of an inaccurate estimate of the number of pages to be accessed.
Sequential detection is not used for an SQL statement that is subject to referential
constraints.
The most recent eight pages are tracked. A page is considered page-sequential if it
is within P/2 advancing pages of the current page, where P is the prefetch quantity.
P is usually 32.
When data access is first declared sequential, which is called initial data access
sequential, three page ranges are calculated as follows:
v Let A be the page being requested. RUN1 is defined as the page range of length
P/2 pages starting at A.
v Let B be page A + P/2. RUN2 is defined as the page range of length P/2 pages
starting at B.
v Let C be page B + P/2. RUN3 is defined as the page range of length P pages
starting at C.
For example, assume that page A is 10. Figure 215 on page 771 illustrates the
page ranges that DB2 calculates.
With P = 32 and A = 10, RUN1 spans pages 10-25 (16 pages), RUN2 spans pages
26-41 (16 pages), and RUN3 spans pages 42-73 (32 pages).
For initial data access sequential, prefetch is requested starting at page A for P
pages (RUN1 and RUN2). The prefetch quantity is always P pages.
For subsequent page requests where the requested page is page-sequential and
data access sequential is still in effect, prefetch is requested as follows:
v If the desired page is in RUN1, no prefetch is triggered because it was already
triggered when data access sequential was first declared.
v If the desired page is in RUN2, prefetch for RUN3 is triggered. RUN2 then
becomes the new RUN1, RUN3 becomes the new RUN2, and a new RUN3 is
defined as the page range starting at C+P for a length of P pages.
If a data access pattern develops such that data access sequential is no longer in
effect and, thereafter, a new pattern develops that is sequential, then initial data
access sequential is declared again and handled accordingly.
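The range bookkeeping described above can be modeled directly. This sketch implements only the stated rules, using the example's starting page A = 10:

```python
P = 32                                   # typical prefetch quantity

def initial_runs(a, p=P):
    """Page ranges at initial data access sequential: RUN1 and RUN2
    are P/2 pages each, RUN3 is P pages, laid out back to back."""
    run1 = range(a, a + p // 2)          # starts at A
    run2 = range(a + p // 2, a + p)      # starts at B = A + P/2
    run3 = range(a + p, a + 2 * p)       # starts at C = B + P/2, length P
    return run1, run2, run3

def is_page_sequential(current, requested, p=P):
    """A page is page-sequential if it is within P/2 advancing pages
    of the current page."""
    return 0 <= requested - current <= p // 2

run1, run2, run3 = initial_runs(10)      # the example with page A = 10
```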
Because, at bind time, the number of pages to be accessed can only be estimated,
sequential detection acts as a safety net and is employed when the data is being
accessed sequentially.
In extreme situations, when certain buffer pool thresholds are reached, sequential
prefetch can be disabled. For a description of buffer pools and thresholds, see Part
5 (Volume 2) of DB2 Administration Guide.
Sorts of data
After you run EXPLAIN, DB2 sorts are indicated in PLAN_TABLE. The sorts can be
either sorts of the composite table or the new table. If a single row of PLAN_TABLE
has a ’Y’ in more than one of the sort composite columns, then one sort
accomplishes two things. (DB2 will not perform two sorts when two ’Y’s are in the
same row.) For instance, if both SORTC_ORDERBY and SORTC_UNIQ are ’Y’ in
one row of PLAN_TABLE, then a single sort puts the rows in order and removes
any duplicate rows as well.
The only reason DB2 sorts the new table is for join processing, which is indicated
by SORTN_JOIN.
Sorts of RIDs
To perform list prefetch, DB2 sorts RIDs into ascending page number order. This
sort is very fast and is done totally in memory. A RID sort is usually not indicated in
the PLAN_TABLE, but a RID sort normally is performed whenever list prefetch is
used. The only exception to this rule is when a hybrid join is performed and a
single, highly clustered index is used on the inner table. In this case SORTN_JOIN
is ’N’, indicating that the RID list for the inner table was not sorted.
Without parallelism:
v If no sorts are required, then OPEN CURSOR does not access any data. It is at
the first fetch that data is returned.
v If a sort is required, then the OPEN CURSOR causes the materialized result
table to be produced. Control returns to the application after the result table is
materialized. If a cursor that requires a sort is closed and reopened, the sort is
performed again.
v If there is a RID sort, but no data sort, then it is not until the first row is fetched
that the RID list is built from the index and the first data record is returned.
Subsequent fetches access the RID pool to access the next data record.
Merge
The merge process is more efficient than materialization, as described in
“Performance of merge versus materialization” on page 778. In the merge process,
the statement that references the view or table expression is combined with the
fullselect that defined the view or table expression. This combination creates a
logically equivalent statement. This equivalent statement is executed against the
database.
Example: Consider the following statements, one of which defines a view, the other
of which references the view:
View-defining statement: View referencing statement:
Example: The following statements show another example of when a view and
table expression can be merged:
SELECT * FROM V1 X
LEFT JOIN
(SELECT * FROM T2) Y ON X.C1=Y.C1
LEFT JOIN T3 Z ON X.C1=Z.C1;
Merged statement:
SELECT * FROM V1 X
LEFT JOIN
T2 ON X.C1 = T2.C1
LEFT JOIN T3 Z ON X.C1 = Z.C1;
Table 120 indicates some cases in which materialization occurs. DB2 can also use
materialization in statements that contain multiple outer joins, outer joins that
combine with inner joins, or merges that cause a join of greater than 15 tables.
Table 120. Cases when DB2 performs view or table expression materialization. The "X" indicates a case of
materialization. Notes follow the table.

                                    View definition or table expression uses...(2)
SELECT FROM view or                                      Aggregate   Aggregate
table expression              GROUP BY   DISTINCT        function    function       UNION   UNION ALL(4)
uses...(1)                                                           DISTINCT

Joins (3)                        X          X               X           X             X
GROUP BY                         X          X               X           X             X
DISTINCT                                    X                           X             X
Aggregate function               X          X               X           X             X        X
Aggregate function               X          X               X           X             X
  DISTINCT
SELECT subset of view                       X                           X
  or table expression
  columns
When DB2 chooses materialization, TNAME contains the name of the view or table
expression, and TABLE_TYPE contains a W. A value of Q in TABLE_TYPE for the
name of a view or nested table expression indicates that the materialization was
virtual and not actual. (Materialization can be virtual when the view or nested table
expression definition contains a UNION ALL that is not distributed.) When DB2
chooses merge, EXPLAIN data for the merged statement appears in PLAN_TABLE;
only the names of the base tables on which the view or table expression is defined
appear.
Example: Consider the following statements, which define a view and reference the
view:
View defining statement:
Table 121 shows a subset of columns in a plan table for the query.
Table 121. Plan table output for an example with view materialization

QBLOCKNO   PLANNO   QBLOCK_TYPE   TNAME   TABLE_TYPE   METHOD
    1         1     SELECT        DEPT        T           0
    2         1     NOCOSUB       V1DIS       W           0
    2         2     NOCOSUB                   ?           3
    3         1     NOCOSUB       EMP         T           0
    3         2     NOCOSUB                   ?           3
Notice how TNAME contains the name of the view and TABLE_TYPE contains W to
indicate that DB2 chooses materialization for the reference to the view because of
the use of SELECT DISTINCT in the view definition.
Example: Consider the following statements, which define a view and reference the
view:
If the VIEW was defined without DISTINCT, DB2 would choose merge instead of
materialization. In the sample output, the name of the view does not appear in the
plan table, but the table name on which the view is based does appear.
For an example of when a view definition contains a UNION ALL and DB2 can
distribute joins and aggregations and avoid materialization, see “Using EXPLAIN to
determine UNION activity and query rewrite.” When DB2 avoids materialization in
such cases, TABLE_TYPE contains a Q to indicate that DB2 uses an intermediate
result that is not materialized, and TNAME shows the name of this intermediate
result as DSNWFQB(xx), where xx is the number of the query block that produced
the result.
The QBLOCK_TYPE column in the plan table indicates union activity. For a UNION
ALL, the column contains ’UNIONA’. For UNION, the column contains ’UNION’.
When QBLOCK_TYPE=’UNION’, the METHOD column on the same row is set to 3
and the SORTC_UNIQ column is set to ’Y’ to indicate that a sort is necessary to
remove duplicates. As with other views and table expressions, the plan table also
shows when DB2 uses materialization instead of merge.
Example: Consider the following statements, which define a view, reference the
view, and show how DB2 rewrites the referencing statement:
View defining statement: View is created on three tables that contain weekly data
View referencing statement: For each customer in California, find the average
charges during the first and third Friday of January 2000
Table 123 shows a subset of columns in a plan table for the query.
Table 123. Plan table output for an example with a view with UNION ALLs

QBLOCKNO   PLANNO   TNAME         TABLE_TYPE   METHOD   QBLOCK_TYPE   PARENT_QBLOCK
    1         1     DSNWFQB(02)       Q           0                         0
    1         2                       ?           3                         0
    2         1                       ?           0        UNIONA           1
    3         1     CUST              T           0                         2
    3         2     WEEK1             T           1                         2
    4         1     CUST              T           0                         2
    4         2     WEEK3             T           2                         2
Notice how DB2 eliminates the second subselect of the view definition from the
rewritten query and how the plan table indicates this removal by showing a UNION
ALL for only the first and third subselect in the view definition. The Q in the
TABLE_TYPE column indicates that DB2 does not materialize the view.
| Your statement table can use an older format in which the STMT_ENCODE column
| does not exist, PROGNAME has a data type of CHAR(8), and COLLID has a data
| type of CHAR(18). However, use the most current format because it gives you the
| most information. You can alter a statement table in the older format to a statement
| table in the current format.
Just as with the plan table, DB2 adds rows to the statement table; it does not
automatically delete rows. INSERT triggers are not activated unless you insert rows
yourself using an SQL INSERT statement.
To clear the table of obsolete rows, use DELETE, just as you would for deleting
rows from any table. You can also use DROP TABLE to drop a statement table
completely.
Similarly, if system administrators use these estimates as input into the resource
limit specification table for governing (either predictive or reactive), they probably
would want to give much greater latitude for statements in cost category B than for
those in cost category A.
What goes into cost category B? DB2 puts a statement’s estimate into cost
category B when any of the following conditions exist:
v The statement has UDFs.
v Triggers are defined for the target table:
– The statement is INSERT, and insert triggers are defined on the target table.
What goes into cost category A? DB2 puts everything that doesn’t fall into
category B into category A.
Query I/O parallelism manages concurrent I/O requests for a single query, fetching
pages into the buffer pool in parallel. This processing can significantly improve the
performance of I/O-bound queries. I/O parallelism is used only when one of the
other parallelism modes cannot be used.
Query CP parallelism enables true multi-tasking within a query. A large query can
be broken into multiple smaller queries. These smaller queries run simultaneously
on multiple processors accessing data in parallel. This reduces the elapsed time for
a query.
Parallel operations usually involve at least one table in a partitioned table space.
Scans of large partitioned table spaces have the greatest performance
improvements where both I/O and central processor (CP) operations can be carried
out in parallel.
Figure 218 shows parallel I/O operations. With parallel I/O, DB2 prefetches data
from the 3 partitions at one time. The processor processes the first request from
each partition, then the second request from each partition, and so on. The
processor is not waiting for I/O, but there is still only one processing task.
Figure 218. Parallel I/O processing. A single CP task processes the first request
from each partition (P1R1, P2R1, P3R1), then the second (P1R2, P2R2, P3R2),
and so on, while I/O for partitions P1, P2, and P3 proceeds in parallel.
Figure 219 on page 787 shows parallel CP processing. With CP parallelism, DB2
can use multiple parallel tasks to process the query. Three tasks working
concurrently can greatly reduce the overall elapsed time for data-intensive and
processor-intensive queries. The same principle applies for Sysplex query
parallelism, except that the work can cross the boundaries of a single CPC.
In the figure, CP task 2 processes P2R1, P2R2, and P2R3 while CP task 3
processes P3R1, P3R2, and P3R3, each task driving its own parallel I/O stream.
Figure 219. CP and I/O processing techniques. Query processing using CP parallelism. The
tasks can be contained within a single CPC or can be spread out among the members of a
data sharing group.
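The difference between the two figures comes down to the order in which partition requests are handled. The following Python sketch generates the two orderings (partition/request labels only; no timing is modeled):

```python
partitions, requests = 3, 3

# Parallel I/O (Figure 218): one CP task takes the first request from
# each partition, then the second, and so on, while I/O for all three
# partitions proceeds in parallel.
io_parallel_order = [f"P{p}R{r}"
                     for r in range(1, requests + 1)
                     for p in range(1, partitions + 1)]

# CP parallelism (Figure 219): one task per partition, each working
# through its own partition's requests concurrently.
cp_tasks = {p: [f"P{p}R{r}" for r in range(1, requests + 1)]
            for p in range(1, partitions + 1)}
```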
Queries that are most likely to take advantage of parallel operations: Queries
that can take advantage of parallel processing are:
v Those in which DB2 spends most of the time fetching pages—an I/O-intensive
query
A typical I/O-intensive query is something like the following query, assuming that
a table space scan is used on many pages:
SELECT COUNT(*) FROM ACCOUNTS
WHERE BALANCE > 0 AND
DAYS_OVERDUE > 30;
v Those in which DB2 spends a lot of processor time and also, perhaps, I/O time,
to process rows. Those include:
– Queries with intensive data scans and high selectivity. Those queries involve
large volumes of data to be scanned but relatively few rows that meet the
search criteria.
– Queries containing aggregate functions. Column functions (such as MIN,
MAX, SUM, AVG, and COUNT) usually involve large amounts of data to be
scanned but return only a single aggregate result.
– Queries accessing long data rows. Those queries access tables with long
data rows, and the ratio of rows per page is very low (one row per page, for
example).
– Queries requiring large amounts of central processor time. Those queries
might be read-only queries that are complex, data-intensive, or that involve a
sort.
A typical processor-intensive query is something like:
SELECT MAX(QTY_ON_HAND) AS MAX_ON_HAND,
AVG(PRICE) AS AVG_PRICE,
AVG(DISCOUNTED_PRICE) AS DISC_PRICE,
SUM(TAX) AS SUM_TAX,
SUM(QTY_SOLD) AS SUM_QTY_SOLD,
SUM(QTY_ON_HAND - QTY_BROKEN) AS QTY_GOOD,
AVG(DISCOUNT) AS AVG_DISCOUNT,
ORDERSTATUS,
COUNT(*) AS COUNT_ORDERS
FROM ORDER_TABLE
Terminology: When the term task is used with information about parallel
processing, the context should be considered. For parallel query CP processing or
Sysplex query parallelism, a task is an actual z/OS execution unit used to process a
query. For parallel I/O processing, a task simply refers to the processing of one of
the concurrent I/O streams.
A parallel group is the term used to name a particular set of parallel operations
(parallel tasks or parallel I/O operations). A query can have more than one parallel
group, but each parallel group within the query is identified by its own unique ID
number.
The degree of parallelism is the number of parallel tasks or I/O operations that
| DB2 determines can be used for the operations on the parallel group. The
| maximum number of parallel operations that DB2 can generate is 254. However,
| for most queries and DB2 environments, DB2 chooses a lower number. You might
| need to limit the maximum number further because more parallel operations
| consume processor, real storage, and I/O resources. If resource consumption is
| high in your parallelism environment, use the MAX DEGREE field on installation
| panel DSNTIP4 to explicitly limit the maximum number of parallel operations that
| DB2 generates, as explained in “Enabling parallel processing.”
DB2 also considers only parallel I/O operations if you declare a cursor WITH HOLD
and bind with isolation RR or RS. For more restrictions on parallelism, see
Table 126.
For complex queries, run the query in parallel within a member of a data sharing
group. With Sysplex query parallelism, use the power of the data sharing group to
process individual complex queries on many members of the data sharing group.
For more information about how you can use the power of the data sharing group
to run complex queries, see Chapter 6 of DB2 Data Sharing: Planning and
Administration.
Limiting the degree of parallelism: If you want to limit the maximum number of
parallel tasks that DB2 generates, you can use the MAX DEGREE field on
installation panel DSNTIP4. Changing MAX DEGREE, however, is not the way to
turn parallelism off. You use the DEGREE bind parameter or CURRENT DEGREE
special register to turn parallelism off.
| In a multi-table join, DB2 might also execute the sort for a composite that
| involves more than one table in a parallel task. DB2 uses a cost basis model to
| determine whether to use parallel sort in all cases. When DB2 decides to use
| parallel sort, SORTC_PGROUP_ID and SORTN_PGROUP_ID indicate the
| parallel group identifier. Consider a query that joins three tables, T1, T2, and T3,
| and uses a merge scan join between T1 and T2, and then between the
| composite and T3. If DB2 decides, based on the cost model, that all sorts in this
| query are to be performed in parallel, part of PLAN_TABLE appears as shown in
| Table 130 on page 792:
A parallel group can run at a parallel degree less than that shown in the
PLAN_TABLE output. The following factors can cause a reduced degree of
parallelism:
v Buffer pool availability
v Logical contention.
Consider a nested loop join. The inner table could be in a partitioned or
nonpartitioned table space, but DB2 is more likely to use a parallel join operation
when the outer table is partitioned.
v Physical contention
v Run-time host variables
A host variable can determine the qualifying partitions of a table for a given
query. In such cases, DB2 defers the determination of the planned degree of
parallelism until run time, when the host variable value is known.
v Updatable cursor
At run time, DB2 might determine that an ambiguous cursor is updatable.
v A change in the configuration of online processors
If fewer processors are online at run time, DB2 might need to reformulate the
parallel degree.
You can use system controls to disable parallelism, as well. These are described in
Part 5 (Volume 2) of DB2 Administration Guide.
The following sections discuss scenarios for interaction among your program, DB2,
and ISPF. Each has advantages and disadvantages in terms of efficiency, ease of
coding, ease of maintenance, and overall flexibility.
The DSN command processor (see “DSN command processor” on page 485)
permits only single task control block (TCB) connections. Take care not to change
the TCB after the first SQL statement. ISPF SELECT services change the TCB if
you started DSN under ISPF, so you cannot use these to pass control from load
module to load module. Instead, use LINK, XCTL, or LOAD.
Figure 220 on page 796 shows the task control blocks that result from attaching the
DSN command processor below TSO or ISPF.
If you are in ISPF and running under DSN, you can perform an ISPLINK to another
program, which calls a CLIST. In turn, the CLIST uses DSN and another
application. Each such use of DSN creates a separate unit of recovery (process or
transaction) in DB2.
All such initiated DSN work units are unrelated, with regard to isolation (locking)
and recovery (commit). It is possible to deadlock with yourself; that is, one unit
(DSN) can request a serialized resource (a data page, for example) that another
unit (DSN) holds incompatibly.
A COMMIT in one program applies only to that process. There is no facility for
coordinating the processes.
The application has one large load module and one plan.
Disadvantages: For large programs of this type, you want a more modular design,
making the plan more flexible and easier to maintain. If you have one large plan,
you must rebind the entire plan whenever you change a module that includes SQL
statements.(1) You cannot pass control to another load module that makes SQL calls
by using ISPLINK; rather, you must use LINK, XCTL, or LOAD and BALR.
1. To achieve a more modular construction when all parts of the program use SQL, consider using packages. See Chapter 17,
“Planning for DB2 program preparation,” on page 363.
You then need to leave ISPF before you can start your application.
When you use the ISPF SELECT service, you can specify whether ISPF should
create a new ISPF variable pool before calling the function. You can also break a
large application into several independent parts, each with its own ISPF variable
pool.
You can call different parts of the program in different ways. For example, you can
use the PGM option of ISPF SELECT:
PGM(program-name) PARM(parameters)
For a part that accesses DB2, the command can name a CLIST that starts DSN:
DSN
RUN PROGRAM(PART1) PLAN(PLAN1) PARM(input from panel)
END
Breaking the application into separate modules makes it more flexible and easier to
maintain. Furthermore, some of the application might be independent of DB2;
portions of the application that do not call DB2 can run, even if DB2 is not running.
A stopped DB2 database does not interfere with parts of the program that refer only
to other databases.
Chapter 29. Programming for the Interactive System Productivity Facility (ISPF) 797
With the same modular structure as in the previous example, using CAF is likely to
provide greater efficiency by reducing the number of CLISTs. This does not mean,
however, that any DB2 function executes more quickly.
Disadvantages: Compared to the modular structure using DSN, the structure using
CAF is likely to require a more complex program, which in turn might require
assembler language subroutines. For more information, see Chapter 30,
“Programming for the call attachment facility (CAF),” on page 799.
It is also possible for IMS batch applications to access DB2 databases through
CAF, though that method does not coordinate the commitment of work between the
IMS and DB2 systems. We highly recommend that you use the DB2 DL/I batch
support for IMS batch applications.
CICS application programs must use the CICS attachment facility; IMS application
programs, the IMS attachment facility. Programs running in TSO foreground or TSO
background can use either the DSN command processor or CAF; each has
advantages and disadvantages.
Prerequisite knowledge: Analysts and programmers who consider using CAF must
be familiar with z/OS concepts and facilities in the following areas:
v The CALL macro and standard module linkage conventions
v Program addressing and residency options (AMODE and RMODE)
v Creating and controlling tasks; multitasking
v Functional recovery facilities such as ESTAE, ESTAI, and FRRs
v Asynchronous events and TSO attention exits (STAX)
v Synchronization techniques such as WAIT/POST.
Each connected task can run a plan. Multiple tasks in a single address space can
specify the same plan, but each instance of a plan runs independently from the
others. A task can terminate its plan and run a different plan without fully breaking
its connection to DB2.
CAF does not generate task structures, nor does it provide attention processing
exits or functional recovery routines. You can provide whatever attention handling
and functional recovery your application needs, but you must use ESTAE/ESTAI
type recovery routines and not Enabled Unlocked Task (EUT) FRR routines.
Programming language
You can write CAF applications in assembler language, C, COBOL, Fortran, and
PL/I. When choosing a language to code your application in, consider these
restrictions:
v If you need to use z/OS macros (ATTACH, WAIT, POST, and so on), you must
choose a programming language that supports them or else embed them in
modules written in assembler language.
v The CAF TRANSLATE function is not available from Fortran. To use the function,
code it in a routine written in another language, and then call that routine from
Fortran.
You can find a sample assembler program (DSN8CA) and a sample COBOL
program (DSN8CC) that use the call attachment facility in library prefix.SDSNSAMP.
A PL/I application (DSN8SPM) calls DSN8CA, and a COBOL application
(DSN8SCM) calls DSN8CC. For more information about the sample applications
and on accessing the source code, see Appendix B, “Sample applications,” on page
915.
Tracing facility
A tracing facility provides diagnostic messages that aid in debugging programs and
diagnosing errors in the CAF code. In particular, attempts to use CAF incorrectly
cause error messages in the trace stream.
Program preparation
Preparing your application program to run in CAF is similar to preparing it to run in
other environments, such as CICS, IMS, and TSO. You can prepare a CAF
application either in the batch environment or by using the DB2 program
preparation process. You can use the program preparation system either through
DB2I or through the DSNH CLIST. For examples and guidance in program
preparation, see Chapter 21, “Preparing an application program to run,” on page
453.
CAF requirements
When you write programs that use CAF, be aware of the following characteristics.
Use of LOAD
CAF uses z/OS SVC LOAD to load two modules as part of the initialization
following your first service request. Both modules are loaded into fetch-protected
storage that has the job-step protection key. If your local environment intercepts and
replaces the LOAD SVC, you must ensure that your version of LOAD manages the
load list element (LLE) and contents directory entry (CDE) chains like the standard
z/OS LOAD macro.
Run environment
Applications requesting DB2 services must adhere to several run environment
characteristics. Those characteristics must be in effect regardless of the attachment
facility you use. They are not unique to CAF.
v The application must be running in TCB mode. SRB mode is not supported.
v An application task cannot have any EUT FRRs active when requesting DB2
services. If an EUT FRR is active, DB2 functional recovery can fail, and your
application can receive unpredictable abends.
v Different attachment facilities cannot be active concurrently within the same
address space. Therefore:
– An application must not use CAF in a CICS or IMS address space.
– An application that runs in an address space that has a CAF connection to
DB2 cannot connect to DB2 using RRSAF.
– An application that runs in an address space that has an RRSAF connection
to DB2 cannot connect to DB2 using CAF.
– An application cannot invoke the z/OS AXSET macro after executing the CAF
CONNECT call and before executing the CAF DISCONNECT call.
v One attachment facility cannot start another. This means that your CAF
application cannot use DSN, and a DSN RUN subcommand cannot call your
CAF application.
v The language interface module for CAF, DSNALI, is shipped with the linkage
attributes AMODE(31) and RMODE(ANY). If your applications load CAF below
the 16-MB line, you must link-edit DSNALI again.
Running DSN applications with CAF is not advantageous, and the loss of DSN
services can affect how well your program runs. In general, running DSN
applications with CAF is not recommended unless you provide an application
controller to manage the DSN application and replace any needed DSN functions.
Even then, you might have to change the application to communicate connection
failures to the controller correctly.
When the language interface is available, your program can make use of the CAF
in two ways:
v Implicitly, by including SQL statements or IFI calls in your program just as you
would in any program. CAF establishes the connections to DB2 using default
values for the pertinent parameters described under “Implicit connections”
on page 804.
v Explicitly, by writing CALL DSNALI statements, providing the appropriate options.
For the general form of the statements, see “CAF function descriptions” on page
807.
The first element of each option list is a function, which describes the action that
you want CAF to take. For the available values of function and an approximation of
their effects, see “Summary of connection functions” on page 804. The effect of any
function depends in part on what functions the program has already run. Before
using any function, be sure to read the description of its usage. Also read
“Summary of CAF behavior” on page 819, which describes the influence of previous
functions.
You might structure a CAF configuration like the one that is illustrated in Figure 221
on page 803. The application contains statements to load DSNALI, DSNHLI2, and
DSNWLI2. The application accesses DB2 by using the CAF Language Interface. It
calls DSNALI to handle CAF requests, DSNWLI to handle IFI calls, and DSNHLI to
handle SQL calls.
Implicit connections
If your CAF application contains SQL statements or IFI calls but does not issue
explicit CALL DSNALI statements, CAF initiates implicit CONNECT and OPEN
requests to DB2. Although CAF performs these connection requests using the
following default values, the requests are subject to the same DB2 return codes and
reason codes as explicitly specified requests.
For implicit connection requests, register 15 contains the return code, and register 0
contains the reason code. The return code and reason code are also in the
message text for SQLCODE -991. The application program should examine the
return and reason codes immediately after the first executable SQL statement.
Two ways to do this are:
v Examine registers 0 and 15 directly.
v Examine the SQLCA, and if the SQLCODE is -991, obtain the return and reason
code from the message text. The return code is the first token, and the reason
code is the second token.
If the implicit connection was successful, the application can examine the
SQLCODE for the first, and subsequent, SQL statements.
You can access the DSNALI module by either explicitly issuing LOAD requests
when your program runs, or by including the module in your load module when you
link-edit your program. There are advantages and disadvantages to each approach.
Explicitly loading the DSNALI module isolates the maintenance of your
application from future IBM maintenance to the language interface. If the
language interface changes, the change will probably not affect your load module.
You must indicate to DB2 which entry point to use. You can do this in one of two
ways:
v Specify the precompiler option ATTACH(CAF).
This causes DB2 to generate calls that specify entry point DSNHLI2. You cannot
use this option if your application is written in Fortran.
v Code a dummy entry point named DSNHLI within your load module.
If you do not specify the precompiler option ATTACH, the DB2 precompiler
generates calls to entry point DSNHLI for each SQL request. The precompiler
does not know about, and is independent of, the different DB2 attachment facilities.
When the calls generated by the DB2 precompiler pass control to DSNHLI, your
code corresponding to the dummy entry point must preserve the option list
passed in R1 and call DSNHLI2 specifying the same option list. For a coding
example of a dummy DSNHLI entry point, see “Using dummy entry point
DSNHLI” on page 828.
Link-editing DSNALI
You can include the CAF language interface module DSNALI in your load module
during a link-edit step. The module must be in a load module library, which is
included either in the SYSLIB concatenation or another INCLUDE library defined in
the linkage editor JCL. Because all language interface modules contain an entry
point declaration for DSNHLI, the linkage editor JCL must contain an INCLUDE
linkage editor control statement for DSNALI; for example, INCLUDE
DB2LIB(DSNALI). By coding these options, you avoid inadvertently picking up the
wrong language interface module.
If you do not need explicit calls to DSNALI for CAF functions, including DSNALI in
your load module has some advantages. When you include DSNALI during the
link-edit, you need not code the previously described dummy DSNHLI entry point in
your program or specify the precompiler option ATTACH. Module DSNALI contains
an entry point for DSNHLI, which is identical to DSNHLI2, and an entry point
DSNWLI, which is identical to DSNWLI2.
A disadvantage to link-editing DSNALI into your load module is that any IBM
maintenance to DSNALI requires a new link-edit of your load module.
Task termination
If a connected task terminates normally before the CLOSE function deallocates the
plan, DB2 commits any database changes that the thread made since the last
commit point. If the task terminates abnormally before the plan is deallocated,
DB2 rolls back any database changes since the last commit point.
In either case, DB2 deallocates the plan, if necessary, and terminates the task’s
connection before it allows the task to terminate.
DB2 abend
If DB2 abends while an application is running, the application is rolled back to the
last commit point. If DB2 terminates while processing a commit request, DB2 either
commits or rolls back any changes at the next restart. The action taken depends on
the state of the commit request when DB2 terminates.
A description of the call attachment facility register and parameter list conventions
for assembler language follows. After that, the syntax descriptions of specific
functions describe the parameters for those particular functions.
Register conventions
If you do not specify the return code and reason code parameters in your CAF
calls, CAF puts a return code in register 15 and a reason code in register 0. CAF
also supports high-level languages that cannot interrogate individual registers. See
Figure 222 on page 808 and the discussion following it for more information. The
contents of registers 2 through 14 are preserved across calls. You must conform to
the standard calling conventions listed in Table 132:
Table 132. Standard usage of registers R1 and R13-R15
Register Usage
R1 Parameter list pointer (for details, see “Call DSNALI parameter list”)
R13 Address of caller’s save area
R14 Caller’s return address
R15 CAF entry point address
When you code CALL DSNALI statements, you must specify all parameters that
come before the return code parameter. You cannot omit any of those parameters
by coding zeros or blanks. There are no defaults for those parameters for explicit
connection service requests. Defaults are provided only for implicit connections.
All parameters starting with the return code parameter are optional.
For all languages except assembler language, code zero for a parameter in the
CALL DSNALI statement when you want to use the default value for that parameter
but specify subsequent parameters. For example, suppose you are coding a
CONNECT call in a COBOL program. You want to specify all parameters except the
return code parameter. Write the call in this way:
CALL 'DSNALI' USING FUNCTN SSID TECB SECB RIBPTR
BY CONTENT ZERO BY REFERENCE REASCODE SRDURA EIBPTR.
For an assembler language call, code a comma for a parameter in the CALL
DSNALI statement when you want to use the default value for that parameter but
specify subsequent parameters. For example, code a CONNECT call like this to
specify all optional parameters except the return code parameter:
CALL DSNALI,(FUNCTN,SSID,TERMECB,STARTECB,RIBPTR,,REASCODE,SRDURA,EIBPTR,GROUPOVERRIDE)
Figure 222 illustrates how you can use the end-of-parameter-list indicator to control
which return code and reason code fields are used following a CAF CONNECT call.
Each of the six illustrated termination points applies to all CAF parameter lists:
1. Terminates the parameter list without specifying the parameters retcode,
reascode, and srdura, and places the return code in register 15 and the reason
code in register 0.
Terminating at this point ensures compatibility with CAF programs that require a
return code in register 15 and a reason code in register 0.
2. Terminates the parameter list after the parameter retcode, and places the return
code in the parameter list and the reason code in register 0.
Terminating at this point permits the application program to take action, based
on the return code, without further examination of the associated reason code.
3. Terminates the parameter list after the parameter reascode, and places the
return code and the reason code in the parameter list.
Even if you specify that the return code be placed in the parameter list, it is also
placed in register 15 to accommodate high-level languages that support special
return code processing.
“DSNALI CONNECT function” shows the syntax for the CONNECT function:
CALL DSNALI,(function,ssnm,termecb,startecb,ribptr
             [,retcode][,reascode][,srdura][,eibptr][,groupoverride])
to use a startup ECB, specify a subsystem name, rather than a group
attachment name. That subsystem name must be different from the group
attachment name.
If your ssnm is less than four characters long, pad it on the right with blanks to
a length of four characters.
termecb
The application’s event control block (ECB) for DB2 termination. DB2 posts this
ECB when the operator enters the STOP DB2 command or when DB2 is
abnormally terminating. It indicates the type of termination by a POST code, as
shown in Table 133:
Table 133. POST codes and related termination types
POST code Termination type
8 QUIESCE
12 FORCE
16 ABTERM
Before you check termecb in your CAF application program, first check the
return code and reason code from the CONNECT call to ensure that the call
completed successfully. See “Checking return codes and reason codes” on
page 826 for more information.
startecb
The application’s startup ECB. If DB2 has not yet started when the application
issues the call, DB2 posts the ECB when it successfully completes its startup
processing. DB2 posts at most one startup ECB per address space. The ECB is
the one associated with the most recent CONNECT call from that address
space. Your application program must examine any nonzero CAF/DB2 reason
codes before issuing a WAIT on this ECB.
If ssnm is a group attachment name, the first DB2 subsystem that starts on the
local z/OS system and matches the specified group attachment name posts the
ECB.
ribptr
A 4-byte area in which CAF places the address of the release information block
(RIB) after the call. You can determine what release level of DB2 you are
currently running by examining field RIBREL. You can determine the
modification level within the release level by examining fields RIBCNUMB and
RIBCINFO. If the value in RIBCNUMB is greater than zero, check RIBCINFO
for modification levels.
If the RIB is not available (for example, if you name a subsystem that does not
exist), DB2 sets the 4-byte area to zeros.
The area to which ribptr points is below the 16-MB line.
Your program does not have to use the release information block, but it cannot
omit the ribptr parameter.
Macro DSNDRIB maps the release information block (RIB). It can be found in
prefix.SDSNMACS(DSNDRIB).
retcode
A 4-byte area in which CAF places the return code.
This field is optional. If not specified, CAF places the return code in register 15
and the reason code in register 0.
Using a CONNECT call is optional. The first request from a task, either OPEN, or
an SQL or IFI call, causes CAF to issue an implicit CONNECT request. If a task is
connected implicitly, the connection to DB2 is terminated either when you execute
CLOSE or when the task terminates.
You can run CONNECT from any or all tasks in the address space, but the address
space level is initialized only once when the first task connects.
If a task does not issue an explicit CONNECT or OPEN, the implicit connection
from the first SQL or IFI call specifies a default DB2 subsystem name. A systems
programmer or administrator determines the default subsystem name when
installing DB2. Be certain that you know what the default name is and that it names
the specific DB2 subsystem you want to use.
Practically speaking, you must not mix explicit CONNECT and OPEN requests with
implicitly established connections in the same address space. Either explicitly
specify which DB2 subsystem you want to use or allow all requests to use the
default subsystem.
Do not issue CONNECT requests from a TCB that already has an active DB2
connection. (See “Summary of CAF behavior” on page 819 and “Error messages
and DSNTRACE” on page 822 for more information about CAF errors.)
“DSNALI OPEN function” shows the syntax for the OPEN function:
CALL DSNALI,(function,ssnm,plan[,retcode][,reascode][,groupoverride])
plan
An 8-byte DB2 plan name.
retcode
A 4-byte area in which CAF places the return code.
This field is optional. If not specified, CAF places the return code in register 15
and the reason code in register 0.
reascode
A 4-byte area in which CAF places a reason code. If not specified, CAF places
the reason code in register 0.
This field is optional. If specified, you must also specify retcode.
groupoverride
An 8-byte area that the application provides. This field is optional. If this field is
provided, it contains the string ’NOGROUP’. This string indicates that the
subsystem name that is specified by ssnm is to be used as a DB2 subsystem
name, even if ssnm matches a group attachment name. If groupoverride is not
provided, ssnm is used as the group attachment name if it matches a group
attachment name. If you specify this parameter in any language except
assembler, you must also specify the return code and reason code parameters.
In assembler language, you can omit the return code and reason code
parameters by specifying commas as place-holders.
Usage: OPEN allocates DB2 resources needed to run the plan or issue IFI
requests. If the requesting task does not already have a connection to the named
DB2 subsystem, then OPEN establishes it.
OPEN allocates the plan to the DB2 subsystem named in ssnm. The ssnm
parameter, like the others, is required, even if the task issues a CONNECT call. If a
task issues CONNECT followed by OPEN, then the subsystem names for both calls
must be the same.
The use of OPEN is optional. If you do not use OPEN, the action of OPEN occurs
on the first SQL or IFI call from the task, using the defaults listed under “Implicit
connections” on page 804.
“DSNALI CLOSE function” shows the syntax for the CLOSE function.
Usage: CLOSE deallocates the plan that was created either explicitly by OPEN or
implicitly at the first SQL call.
If you did not issue a CONNECT for the task, CLOSE also deletes the task’s
connection to DB2. If no other task in the address space has an active connection
to DB2, DB2 also deletes the control block structures created for the address space
and removes the cross memory authorization.
Do not use CLOSE when your current task does not have a plan allocated.
Using CLOSE is optional. If you omit it, DB2 performs the same actions when your
task terminates, using the SYNC parameter if termination is normal and the ABRT
parameter if termination is abnormal. (The function is an implicit CLOSE.) If the
objective is to shut down your application, you can improve shut down performance
by using CLOSE explicitly before the task terminates.
If you want to use a new plan, you must issue an explicit CLOSE, followed by an
OPEN, specifying the new plan name.
If DB2 terminates, a task that did not issue CONNECT should explicitly issue
CLOSE, so that CAF can reset its control blocks to allow for future connections.
This CLOSE returns the reset accomplished return code (+004) and reason code
X'00C10824'. If you omit CLOSE, then when DB2 is back on line, the task’s next
connection request fails. You get either the message YOUR TCB DOES NOT HAVE
A CONNECTION, with X'00F30018' in register 0, or CAF error message DSNA201I
or DSNA202I, depending on what your application tried to do. The task must then
issue CLOSE before it can reconnect to DB2.
A task that issued CONNECT explicitly should issue DISCONNECT to cause CAF
to reset its control blocks when DB2 terminates. In this case, CLOSE is not
necessary.
“DSNALI DISCONNECT function” on page 817 shows the syntax for the
DISCONNECT function.
Only those tasks that issued CONNECT explicitly can issue DISCONNECT. If
CONNECT was not used, then DISCONNECT causes an error.
Using DISCONNECT is optional. Without it, DB2 performs the same functions when
the task terminates. (The function is an implicit DISCONNECT.) If the objective is to
shut down your application, you can improve shut down performance if you request
DISCONNECT explicitly before the task terminates.
If DB2 terminates, a task that issued CONNECT must issue DISCONNECT to reset
the CAF control blocks. The function returns the reset accomplished return codes
and reason codes (+004 and X'00C10824'), and ensures that future connection
requests from the task work when DB2 is back on line.
A task that did not issue CONNECT explicitly must issue CLOSE to reset the CAF
control blocks when DB2 terminates.
Table 137. Examples of CAF DISCONNECT calls (continued)
Language Call example
COBOL CALL 'DSNALI' USING FUNCTN RETCODE REASCODE.
Fortran CALL DSNALI(FUNCTN,RETCODE,REASCODE)
PL/I CALL DSNALI(FUNCTN,RETCODE,REASCODE);
Note: DSNALI is an assembler language program; therefore, the following compiler directives must be included in
your C, C++, and PL/I applications:
C     #pragma linkage(dsnali, OS)
C++   extern "OS" {
        int DSNALI(char * functn, ...);
      }
PL/I  DCL DSNALI ENTRY OPTIONS(ASM,INTER,RETCODE);
TRANSLATE is useful only after an OPEN fails, and then only if you used an
explicit CONNECT before the OPEN request. For errors that occur during SQL or
IFI requests, the TRANSLATE function is performed automatically.
“DSNALI TRANSLATE function” shows the syntax for the TRANSLATE function.
The TRANSLATE function can translate those codes beginning with X'00F3', but it
does not translate CAF reason codes beginning with X'00C1'. If you receive error
reason code X'00F30040' (resource unavailable) after an OPEN request,
TRANSLATE returns the name of the unavailable database object in the last 44
characters of field SQLERRM. If the DB2 TRANSLATE function does not recognize
the error reason code, it returns SQLCODE -924 (SQLSTATE ’58006’) and places a
printable copy of the original DB2 function code and the return and error reason
codes in the SQLERRM field. The contents of registers 0 and 15 do not change
unless TRANSLATE fails, in which case register 0 is set to X'C10205' and register
15 to 200.
In the table, an error shows as Error nnn. The corresponding reason code is
X'00C10nnn'; the message number is DSNAnnnI or DSNAnnnE. For a list of reason
codes, see “CAF return codes and reason codes” on page 822.
Table 139. Effects of CAF calls, as dependent on connection history

Previous function       Next function:
                        CONNECT    OPEN       SQL                   CLOSE      DISCONNECT  TRANSLATE
Empty: first call       CONNECT    OPEN       CONNECT, OPEN,        Error 203  Error 204   Error 205
                                              followed by the
                                              SQL or IFI call
CONNECT                 Error 201  OPEN       OPEN, followed by     Error 203  DISCONNECT  TRANSLATE
                                              the SQL or IFI call
CONNECT followed        Error 201  Error 202  The SQL or IFI call   CLOSE (1)  DISCONNECT  TRANSLATE
  by OPEN
CONNECT followed        Error 201  Error 202  The SQL or IFI call   CLOSE (1)  DISCONNECT  TRANSLATE
  by SQL or IFI call
OPEN                    Error 201  Error 202  The SQL or IFI call   CLOSE (2)  Error 204   TRANSLATE
SQL or IFI call         Error 201  Error 202  The SQL or IFI call   CLOSE (2)  Error 204   TRANSLATE (3)
Notes:
1. The task and address space connections remain active. If CLOSE fails because DB2 was down, then the CAF
control blocks are reset, the function produces return code 4 and reason code X'00C10824', and CAF is ready
for more connection requests when DB2 is again on line.
2. A TRANSLATE request is accepted, but in this case it is redundant. CAF automatically issues a TRANSLATE
request when an SQL or IFI request fails.
Sample scenarios
This section shows sample scenarios for connecting tasks to DB2.
A task can have a connection to one and only one DB2 subsystem at any point in
time. A CAF error occurs if the subsystem name on OPEN does not match the one
on CONNECT. To switch to a different subsystem, the application must disconnect
from the current subsystem, then issue a connect request specifying a new
subsystem name.
Several tasks
In this scenario, multiple tasks within the address space are using DB2 services.
Each task must explicitly specify the same subsystem name on either the
CONNECT or OPEN function request. Task 1 makes no SQL or IFI calls. Its
purpose is to monitor the DB2 termination and startup ECBs, and to check the DB2
release level.
TASK 1 TASK 2 TASK 3 TASK n
CONNECT
OPEN OPEN OPEN
SQL SQL SQL
... ... ...
CLOSE CLOSE CLOSE
OPEN OPEN OPEN
SQL SQL SQL
... ... ...
CLOSE CLOSE CLOSE
DISCONNECT
The call attachment facility has no attention exit routines. You can provide your own
if necessary. However, DB2 uses enabled unlocked task (EUT) functional recovery
routines (FRRs), so if you request attention while DB2 code is running, your routine
may not get control.
Recovery routines
The call attachment facility has no abend recovery routines.
Your program can provide an abend exit routine. It must use tracking indicators to
determine if an abend occurred during DB2 processing. If an abend occurs while
DB2 has control, you have these choices:
v Allow task termination to complete. Do not retry the program. DB2 detects task
termination and terminates the thread with the ABRT parameter. You lose all
database changes back to the last SYNC or COMMIT point.
This is the only action that you can take for abends that CANCEL or DETACH
cause. You cannot use additional SQL statements at this point. If you attempt to
execute another SQL statement from the application program or its recovery
routine, you receive a return code of +256 and a reason code of X'00F30083'.
v In an ESTAE routine, issue CLOSE with the ABRT parameter followed by
DISCONNECT. The ESTAE exit routine can retry so that you do not need to
reinstate the application task.
Standard z/OS functional recovery routines (FRRs) can cover only code running in
service request block (SRB) mode. Because DB2 does not support calls from SRB
mode routines, you can use only enabled unlocked task (EUT) FRRs in your
routines that call DB2.
Do not have an EUT FRR active when using CAF, processing SQL requests, or
calling IFI.
An EUT FRR can be active, but it cannot retry failing DB2 requests. An EUT FRR
retry bypasses DB2’s ESTAE routines. The next DB2 request of any type, including
DISCONNECT, fails with a return code of +256 and a reason code of X'00F30050'.
With z/OS, if you have an active EUT FRR, all DB2 requests fail, including the initial
CONNECT or OPEN. The requests fail because DB2 always creates an ARR-type
ESTAE, and z/OS does not allow the creation of ARR-type ESTAEs when an FRR
is active.
When the reason code begins with X'00F3' (except for X'00F30006'), you can use
the CAF TRANSLATE function to obtain error message text that can be printed and
displayed.
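As an illustration only (this helper is not part of CAF), the X'00F3' eligibility test described above can be sketched in C:

```c
#include <stdint.h>

/* Illustrative sketch, not CAF code: a reason code qualifies for the
 * TRANSLATE function when it begins with X'00F3' (high-order halfword)
 * and is not the excluded code X'00F30006'. */
int caf_translate_eligible(uint32_t reascode) {
    if ((reascode >> 16) != 0x00F3u)   /* must begin with X'00F3' */
        return 0;
    return reascode != 0x00F30006u;    /* the one X'00F3' code excluded */
}
```

A monitoring task could use a test like this to decide whether a failing request is worth a TRANSLATE call at all.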
Table 140 shows the CAF return codes and reason codes.
Table 140. CAF return codes and reason codes
Return code   Reason code   Explanation
0             X'00000000'   Successful completion.
4             X'00C10823'   Release level mismatch between DB2 and the call attachment facility code.
4             X'00C10824'   CAF reset complete. Ready to make a new connection.
200¹          X'00C10201'   Received a second CONNECT from the same TCB. The first CONNECT could have been implicit or explicit.
200¹          X'00C10202'   Received a second OPEN from the same TCB. The first OPEN could have been implicit or explicit.
200¹          X'00C10203'   CLOSE issued when there was no active OPEN.
200¹          X'00C10204'   DISCONNECT issued when there was no active CONNECT, or the AXSET macro was issued between CONNECT and DISCONNECT.
200¹          X'00C10205'   TRANSLATE issued when there was no connection to DB2.
200¹          X'00C10206'   Wrong number of parameters or the end-of-list bit was off.
200¹          X'00C10207'   Unrecognized function parameter.
200¹          X'00C10208'   Received requests to access two different DB2 subsystems from the same TCB.
204²          (none)        CAF system error. Probable error in the attachment facility or DB2.
Notes:
1. A CAF error, probably caused by an error in a parameter list coming from the application program. CAF errors do not
change the current state of your connection to DB2; you can continue processing with a corrected request.
2. System errors cause abends. For an explanation of the abend reason codes, see Part 3 of DB2 Messages and
Codes. If tracing is on, a descriptive message is written to the DSNTRACE data set just before the abend.
Program examples
The following pages contain sample JCL and assembler programs that access the
call attachment facility (CAF).
// DD DSN=DB2_load_library
.
.
.
//SYSPRINT DD SYSOUT=*
//DSNTRACE DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
These code segments assume the existence of a WRITE macro. Anywhere you find
this macro in the code is a good place for you to substitute code of your own. You
must decide what you want your application to do in those situations; you probably
do not want to write the error messages shown.
When your module has finished using DB2, delete the loaded entry points so that their use counts are maintained correctly.
****************************** GET LANGUAGE INTERFACE ENTRY ADDRESSES
LOAD EP=DSNALI Load the CAF service request EP
ST R0,LIALI Save this for CAF service requests
LOAD EP=DSNHLI2 Load the CAF SQL call Entry Point
ST R0,LISQL Save this for SQL calls
* .
* . Insert connection service requests and SQL calls here
* .
DELETE EP=DSNALI Correctly maintain use count
DELETE EP=DSNHLI2 Correctly maintain use count
The code does not show a task that waits on the DB2 termination ECB. If you like,
you can code such a task and use the z/OS WAIT macro to monitor the ECB. You
probably want this task to detach the sample code if the termination ECB is posted.
That task can also wait on the DB2 startup ECB. This sample waits on the startup
ECB at its own task level.
On entry, the code assumes that certain variables are already set:
Variable Usage
LIALI The entry point that handles DB2 connection service requests.
LISQL The entry point that handles SQL calls.
SSID The DB2 subsystem identifier.
TECB The address of the DB2 termination ECB.
SECB The address of the DB2 startup ECB.
RIBPTR A fullword that CAF sets to contain the RIB address.
PLAN The plan name to use on the OPEN call.
CONTROL Used to shut down processing because of unsatisfactory return or
reason codes. Subroutine CHEKCODE sets CONTROL.
CAFCALL List-form parameter area for the CALL macro.
***********************************************************************
* CHEKCODE PSEUDOCODE *
***********************************************************************
*IF TECB is POSTed with the ABTERM or FORCE codes
* THEN
* CONTROL = ’SHUTDOWN’
* WRITE ’DB2 found FORCE or ABTERM, shutting down’
* ELSE /* Termination ECB was not POSTed */
* SELECT (RETCODE) /* Look at the return code */
* WHEN (0) ; /* Do nothing; everything is OK */
* WHEN (4) ; /* Warning */
* SELECT (REASCODE) /* Look at the reason code */
* WHEN (’00C10823’X) /* DB2 / CAF release level mismatch*/
* WRITE ’Found a mismatch between DB2 and CAF release levels’
* WHEN (’00C10824’X) /* Ready for another CAF call */
* CONTROL = ’RESTART’ /* Start over, from the top */
* OTHERWISE
* WRITE ’Found unexpected R0 when R15 was 4’
* CONTROL = ’SHUTDOWN’
* END INNER-SELECT
* WHEN (8,12) /* Connection failure */
* SELECT (REASCODE) /* Look at the reason code */
* WHEN (’00F30002’X, /* These mean that DB2 is down but */
* ’00F30012’X) /* will POST SECB when up again */
* DO
* WRITE ’DB2 is unavailable. I’ll tell you when it’s up.’
* WAIT SECB /* Wait for DB2 to come up */
* WRITE ’DB2 is now available.’
* END
* /**********************************************************/
* /* Insert tests for other DB2 connection failures here. */
* /* CAF Externals Specification lists other codes you can */
* /* receive. Handle them in whatever way is appropriate */
* /* for your application. */
* /**********************************************************/
* OTHERWISE /* Found a code we’re not ready for*/
* WRITE ’Warning: DB2 connection failure. Cause unknown’
* CALL DSNALI (’TRANSLATE’,SQLCA) /* Fill in SQLCA */
* WRITE SQLCODE and SQLERRM
* END INNER-SELECT
* WHEN (200)
* WRITE ’CAF found user error. See DSNTRACE data set’
* WHEN (204)
* WRITE ’CAF system error. See DSNTRACE data set’
* OTHERWISE
* CONTROL = ’SHUTDOWN’
* WRITE ’Got an unrecognized return code’
* END MAIN SELECT
* IF (RETCODE > 4) THEN /* Was there a connection problem?*/
* CONTROL = ’SHUTDOWN’
* END CHEKCODE
Figure 224. Subroutine to check return codes from CAF and DB2, in assembler (Part 1 of 3)
Figure 224. Subroutine to check return codes from CAF and DB2, in assembler (Part 2 of 3)
CLC REASCODE,F30012 Hunt for X’00F30012’
BE DB2DOWN
WRITE ’DB2 connection failure with an unrecognized REASCODE’
CLC SQLCODE,ZERO See if we need TRANSLATE
BNE A4TRANS If not zero, skip TRANSLATE
* ********************* TRANSLATE unrecognized RETCODEs ********
WRITE ’SQLCODE 0 but R15 not, so TRANSLATE to get SQLCODE’
L R15,LIALI Get the Language Interface address
CALL (15),(TRANSLAT,SQLCA),VL,MF=(E,CAFCALL)
C R0,C10205 Did the TRANSLATE work?
BNE A4TRANS If not C10205, SQLERRM now filled in
WRITE ’Not able to TRANSLATE the connection failure’
B ENDCCODE Go to end of CHEKCODE
A4TRANS DS 0H SQLERRM must be filled in to get here
* Note: your code should probably remove the X’FF’
* separators and format the SQLERRM feedback area.
* Alternatively, use DB2 Sample Application DSNTIAR
* to format a message.
WRITE ’SQLERRM is:’ SQLERRM
B ENDCCODE We are done. Go to end of CHEKCODE
DB2DOWN DS 0H DB2 is down; wait for startup
WRITE ’DB2 is down and I will tell you when it comes up’
WAIT ECB=SECB Wait for DB2 to come up
WRITE ’DB2 is now available’
MVC CONTROL,RESTART Indicate that we should re-CONNECT
B ENDCCODE
* ********************* HUNT FOR 200 ***************************
HUNT200 DS 0H Hunt return code of 200
CLC RETCODE,NUM200 Hunt 200
BNE HUNT204
WRITE ’CAF found user error, see DSNTRACE data set’
B ENDCCODE We are done. Go to end of CHEKCODE
* ********************* HUNT FOR 204 ***************************
HUNT204 DS 0H Hunt return code of 204
CLC RETCODE,NUM204 Hunt 204
BNE WASSAT If not 204, got strange code
WRITE ’CAF found system error, see DSNTRACE data set’
B ENDCCODE We are done. Go to end of CHEKCODE
* ********************* UNRECOGNIZED RETCODE *******************
WASSAT DS 0H
WRITE ’Got an unrecognized RETCODE’
MVC CONTROL,SHUTDOWN Shutdown
B ENDCCODE We are done. Go to end of CHEKCODE
ENDCCODE DS 0H Should we shut down?
L R4,RETCODE Get a copy of the RETCODE
C R4,FOUR Have a look at the RETCODE
BNH BYEBYE If RETCODE <= 4 then leave CHEKCODE
MVC CONTROL,SHUTDOWN Shutdown
BYEBYE DS 0H Wrap up and leave CHEKCODE
L R13,4(,R13) Point to caller’s save area
RETURN (14,12) Return to the caller
Figure 224. Subroutine to check return codes from CAF and DB2, in assembler (Part 3 of 3)
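The decision flow that Figure 224 implements can be restated compactly. The following C sketch is illustrative only (the function name and flag parameter are invented; the WRITE messages, the TRANSLATE call, and the SECB wait are omitted); it returns the CONTROL value the pseudocode would set:

```c
#include <stdint.h>
#include <string.h>

/* Sketch of the CHEKCODE control decision: given the termination-ECB
 * state and the CAF return and reason codes, return the CONTROL value. */
const char *chekcode_control(int tecb_abterm_or_force,
                             int retcode, uint32_t reascode) {
    if (tecb_abterm_or_force)
        return "SHUTDOWN";             /* DB2 found FORCE or ABTERM */
    if (retcode == 4) {
        if (reascode == 0x00C10824u)
            return "RESTART";          /* ready for another CAF call */
        if (reascode == 0x00C10823u)
            return "GO";               /* release mismatch: warn only */
        return "SHUTDOWN";             /* unexpected R0 when R15 was 4 */
    }
    if (retcode > 4)
        return "SHUTDOWN";             /* connection problem or CAF error */
    return "GO";                       /* retcode 0: everything is OK */
}
```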
In the example that follows, LISQL is addressable because the calling CSECT used
the same register 12 as CSECT DSNHLI. Your application must also establish
addressability to LISQL.
***********************************************************************
* Subroutine DSNHLI intercepts calls to LI EP=DSNHLI
***********************************************************************
DS 0D
DSNHLI CSECT Begin CSECT
STM R14,R12,12(R13) Prologue
LA R15,SAVEHLI Get save area address
ST R13,4(,R15) Chain the save areas
ST R15,8(,R13) Chain the save areas
LR R13,R15 Put save area address in R13
L R15,LISQL Get the address of real DSNHLI
BASSM R14,R15 Branch to DSNALI to do an SQL call
* DSNALI is in 31-bit mode, so use
* BASSM to assure that the addressing
* mode is preserved.
L R13,4(,R13) Restore R13 (caller’s save area addr)
L R14,12(,R13) Restore R14 (return address)
RETURN (1,12) Restore R1-12, NOT R0 and R15 (codes)
Variable declarations
Figure 225 on page 830 shows declarations for some of the variables used in the
previous subroutines.
****************************** VARIABLES ******************************
SECB DS F DB2 Startup ECB
TECB DS F DB2 Termination ECB
LIALI DS F DSNALI Entry Point address
LISQL DS F DSNHLI2 Entry Point address
SSID DS CL4 DB2 Subsystem ID. CONNECT parameter
PLAN DS CL8 DB2 Plan name. OPEN parameter
TRMOP DS CL4 CLOSE termination option (SYNC|ABRT)
FUNCTN DS CL12 CAF function to be called
RIBPTR DS F DB2 puts Release Info Block addr here
RETCODE DS F Chekcode saves R15 here
REASCODE DS F Chekcode saves R0 here
CONTROL DS CL8 GO, SHUTDOWN, or RESTART
SAVEAREA DS 18F Save area for CHEKCODE
****************************** CONSTANTS ******************************
SHUTDOWN DC CL8’SHUTDOWN’ CONTROL value: Shutdown execution
RESTART DC CL8’RESTART ’ CONTROL value: Restart execution
CONTINUE DC CL8’CONTINUE’ CONTROL value: Everything OK, cont
CODE0 DC F’0’ SQLCODE of 0
CODE100 DC F’100’ SQLCODE of 100
QUIESCE DC XL3’000008’ TECB postcode: STOP DB2 MODE=QUIESCE
CONNECT DC CL12’CONNECT ’ Name of a CAF service. Must be CL12!
OPEN DC CL12’OPEN ’ Name of a CAF service. Must be CL12!
CLOSE DC CL12’CLOSE ’ Name of a CAF service. Must be CL12!
DISCON DC CL12’DISCONNECT ’ Name of a CAF service. Must be CL12!
TRANSLAT DC CL12’TRANSLATE ’ Name of a CAF service. Must be CL12!
SYNC DC CL4’SYNC’ Termination option (COMMIT)
ABRT DC CL4’ABRT’ Termination option (ROLLBACK)
****************************** RETURN CODES (R15) FROM CALL ATTACH ****
ZERO DC F’0’ 0
FOUR DC F’4’ 4
EIGHT DC F’8’ 8
TWELVE DC F’12’ 12 (Call Attach return code in R15)
NUM200 DC F’200’ 200 (User error)
NUM204 DC F’204’ 204 (Call Attach system error)
****************************** REASON CODES (R00) FROM CALL ATTACH ****
C10205 DC XL4’00C10205’ Call attach could not TRANSLATE
C10823 DC XL4’00C10823’ Call attach found a release mismatch
C10824 DC XL4’00C10824’ Call attach ready for more input
F30002 DC XL4’00F30002’ DB2 subsystem not up
F30011 DC XL4’00F30011’ DB2 subsystem not up
F30012 DC XL4’00F30012’ DB2 subsystem not up
F30025 DC XL4’00F30025’ DB2 is stopping (REASCODE)
*
* Insert more codes here as necessary for your application
*
****************************** SQLCA and RIB **************************
EXEC SQL INCLUDE SQLCA
DSNDRIB Get the DB2 Release Information Block
****************************** CALL macro parm list *******************
CAFCALL CALL ,(*,*,*,*,*,*,*,*,*),VL,MF=L
Prerequisite knowledge: Before you consider using RRSAF, you must be familiar
with the following z/OS topics:
v The CALL macro and standard module linkage conventions
v Program addressing and residency options (AMODE and RMODE)
v Creating and controlling tasks; multitasking
v Functional recovery facilities such as ESTAE, ESTAI, and FRRs
v Synchronization techniques such as WAIT/POST
v z/OS RRS functions, such as SRRCMIT and SRRBACK
Task capabilities
Any task in an address space can establish a connection to DB2 through RRSAF.
Specifying a plan for a task: Each connected task can run a plan. Tasks within a
single address space can specify the same plan, but each instance of a plan runs
independently from the others. A task can terminate its plan and run a different plan
without completely breaking its connection to DB2.
Providing attention processing exits and recovery routines: RRSAF does not
generate task structures, and it does not provide attention processing exits or
functional recovery routines. You can provide whatever attention handling and
functional recovery your application needs, but you must use ESTAE/ESTAI type
recovery routines only.
Programming language
You can write RRSAF applications in assembler language, C, COBOL, Fortran, and
PL/I. When choosing a language to code your application in, consider these
restrictions:
v If you use z/OS macros (ATTACH, WAIT, POST, and so on), you must choose a
programming language that supports them.
v The RRSAF TRANSLATE function is not available from Fortran. To use the
function, code it in a routine written in another language, and then call that
routine from Fortran.
Tracing facility
A tracing facility provides diagnostic messages that help you debug programs and
diagnose errors in the RRSAF code. The trace information is available only in a
SYSABEND or SYSUDUMP dump.
Program preparation
Preparing your application program to run in RRSAF is similar to preparing it to run
in other environments, such as CICS, IMS, and TSO. You can prepare an RRSAF
application either in the batch environment or by using the DB2 program
preparation process. You can use the program preparation system either through
DB2I or through the DSNH CLIST. For examples and guidance in program
preparation, see Chapter 21, “Preparing an application program to run,” on page
453.
RRSAF requirements
When you write an application to use RRSAF, be aware of the following
characteristics.
Program size
The RRSAF code requires about 10 KB of virtual storage per address space and an
additional 10 KB for each TCB that uses RRSAF.
Use of LOAD
RRSAF uses z/OS SVC LOAD to load a module as part of the initialization following
your first service request. The module is loaded into fetch-protected storage that
Follow these guidelines for choosing between the DB2 statements and the RRS
functions for commit and rollback operations:
v Use DB2 COMMIT and ROLLBACK statements when you know that the following
conditions are true:
– The only recoverable resource accessed by your application is DB2 data
managed by a single DB2 instance.
DB2 COMMIT and ROLLBACK statements will fail if your RRSAF application
accesses recoverable resources other than DB2 data that is managed by a
single DB2 instance.
– The address space from which syncpoint processing is initiated is the same
as the address space that is connected to DB2.
v If your application accesses other recoverable resources, or syncpoint processing
and DB2 access are initiated from different address spaces, use SRRCMIT and
SRRBACK.
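The guideline above reduces to a simple predicate. This C sketch is for illustration only (the flag names are invented; they stand for conditions the application must establish for itself):

```c
/* Sketch of the commit-interface choice: returns 1 when the DB2 COMMIT
 * and ROLLBACK statements are safe to use, 0 when the application must
 * use the RRS SRRCMIT and SRRBACK services instead. */
int use_db2_commit(int only_resource_is_single_db2_instance,
                   int syncpoint_in_connected_address_space) {
    return only_resource_is_single_db2_instance
        && syncpoint_in_connected_address_space;
}
```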
Run environment
Applications that request DB2 services must adhere to several run environment
requirements. Those requirements must be met regardless of the attachment facility
you use. They are not unique to RRSAF.
v The application must be running in TCB mode.
v No EUT FRRs can be active when the application requests DB2 services. If an
EUT FRR is active, DB2’s functional recovery can fail, and your application can
receive unpredictable abends.
v Different attachment facilities cannot be active concurrently within the same
address space. For example:
– An application should not use RRSAF in CICS or IMS address spaces.
– An application running in an address space that has a CAF connection to DB2
cannot connect to DB2 using RRSAF.
– An application running in an address space that has an RRSAF connection to
DB2 cannot connect to DB2 using CAF.
v One attachment facility cannot start another. This means your RRSAF application
cannot use DSN, and a DSN RUN subcommand cannot call your RRSAF
application.
v The language interface module for RRSAF, DSNRLI, is shipped with the linkage
attributes AMODE(31) and RMODE(ANY). If your applications load RRSAF below
the 16-MB line, you must link-edit DSNRLI again.
Chapter 31. Programming for the Resource Recovery Services attachment facility (RRSAF) 833
How to use RRSAF
To use RRSAF, you must first make available the RRSAF language interface load
module, DSNRLI. For information about loading or link-editing this module, see
“Accessing the RRSAF language interface” on page 836.
| When the language interface is available, your program can make use of the
| RRSAF in two ways:
| v Implicitly, by including SQL statements or IFI calls in your program just as you
| would in any program. The RRSAF facility establishes the connections to DB2
| using default values for the pertinent parameters as described in “Implicit
| connections” on page 835.
| v Explicitly, by issuing CALL DSNRLI statements with the appropriate options. For
| the general form of the statements, see “RRSAF function descriptions” on page
| 840.
The first element of each option list is a function, which describes the action you
want RRSAF to take. For a list of available functions and what they do, see
“Summary of connection functions.” The effect of any function depends in part on
what functions the program has already performed. Before using any function, be
sure to read the description of its usage. Also read “Summary of connection
functions,” which describes the influence of previously invoked functions.
| Implicit connections
| If you do not explicitly specify the IDENTIFY function in a CALL DSNRLI statement,
| RRSAF initiates an implicit connection to DB2 if the application includes SQL
| statements or IFI calls. An implicit connection causes RRSAF to initiate implicit
| IDENTIFY and CREATE THREAD requests to DB2. Although RRSAF performs the
| connection request by using the following default values, the request is subject to
| the same DB2 return codes and reason codes as are explicitly specified requests.
| For an implicit connection request, your application should not explicitly specify
| either IDENTIFY or CREATE THREAD. It can execute other explicit RRSAF calls
| after the implicit connection. An implicit connection does not perform any SIGNON
| processing. Your application can execute SIGNON at any point of consistency. To
| terminate an implicit connection, you must use the proper calls. See “Summary of
| RRSAF behavior” on page 865 for details.
| For implicit connection requests, register 15 contains the return code, and register 0
| contains the reason code. The return code and reason code are also in the
| message text for SQLCODE -981. The application program should examine the
| return and reason codes immediately after the first executable SQL statement within
| the application program. Two ways to do this are to:
| v Examine registers 0 and 15 directly.
| v Examine the SQLCA, and if the SQLCODE is -981, obtain the return and reason
| code from the message text. The return code is the first token, and the reason
| code is the second token.
| If the implicit connection is successful, the application can examine the SQLCODE
| for the first, and subsequent, SQL statements.
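The token layout of the -981 message text can be parsed mechanically. The following sketch is illustrative, not an IBM API; the sample codes in the test are placeholders, and it assumes the return code token is decimal and the reason code token is hexadecimal, as in the convention described above:

```c
#include <stdio.h>

/* Extract the return code (first token) from the SQLCODE -981 message
 * text; returns -1 if the text cannot be parsed. */
long ret_code_from_981(const char *msgtext) {
    long rc;
    return sscanf(msgtext, "%ld", &rc) == 1 ? rc : -1;
}

/* Extract the reason code (second token, hexadecimal) from the message
 * text; returns -1 if the text cannot be parsed. */
long reason_code_from_981(const char *msgtext) {
    long rc;
    unsigned long reas;
    return sscanf(msgtext, "%ld %lx", &rc, &reas) == 2 ? (long)reas : -1;
}
```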
Part of RRSAF is a DB2 load module, DSNRLI, the RRSAF language interface
module. DSNRLI has the alias names DSNHLIR and DSNWLIR. The module has
five entry points: DSNRLI, DSNHLI, DSNHLIR, DSNWLI, and DSNWLIR:
v Entry point DSNRLI handles explicit DB2 connection service requests.
v DSNHLI and DSNHLIR handle SQL calls. Use DSNHLI if your application
program link-edits RRSAF; use DSNHLIR if your application program loads
RRSAF.
v DSNWLI and DSNWLIR handle IFI calls. Use DSNWLI if your application
program link-edits RRSAF; use DSNWLIR if your application program loads
RRSAF.
You can access the DSNRLI module by explicitly issuing LOAD requests when your
program runs, or by including the DSNRLI module in your load module when you
link-edit your program. There are advantages and disadvantages to each approach.
By explicitly loading the DSNRLI module, you can isolate the maintenance of your
application from future IBM maintenance to the language interface. If the language
interface changes, the change will probably not affect your load module.
You must indicate to DB2 which entry point to use. You can do this in one of two
ways:
v Specify the precompiler option ATTACH(RRSAF).
This causes DB2 to generate calls that specify entry point DSNHLIR. You cannot
use this option if your application is written in Fortran.
v Code a dummy entry point named DSNHLI within your load module.
If you do not specify the precompiler option ATTACH, the DB2 precompiler
generates calls to entry point DSNHLI for each SQL request. The precompiler
is independent of, and has no knowledge of, the different DB2 attachment facilities.
When the calls generated by the DB2 precompiler pass control to DSNHLI, your
code corresponding to the dummy entry point must preserve the option list
passed in register 1 and call DSNHLIR specifying the same option list. For a
coding example of a dummy DSNHLI entry point, see “Using dummy entry point
DSNHLI” on page 869.
Link-editing DSNRLI
You can include DSNRLI when you link-edit your load module. For example, you
can use a linkage editor control statement like this in your JCL:
INCLUDE DB2LIB(DSNRLI)
By coding this statement, you avoid linking the wrong language interface module.
When you include DSNRLI during the link-edit, you do not include a dummy
DSNHLI entry point in your program or specify the precompiler option ATTACH.
Module DSNRLI contains an entry point for DSNHLI, which is identical to DSNHLIR,
and an entry point DSNWLI, which is identical to DSNWLIR.
A disadvantage of link-editing DSNRLI into your load module is that if IBM makes a
change to DSNRLI, you must link-edit your program again.
RRSAF relies on the z/OS System Authorization Facility (SAF) and a security
product, such as RACF, to verify and authorize the authorization IDs. An application
that connects to DB2 through RRSAF must pass those identifiers to SAF for
verification and authorization checking. RRSAF retrieves the identifiers from SAF.
A location can provide an authorization exit routine for a DB2 connection to change
the authorization IDs and to indicate whether the connection is allowed. The actual
values assigned to the primary and secondary authorization IDs can differ from the
values provided by a SIGNON or AUTH SIGNON request. A site’s DB2 signon exit
routine can access the primary and secondary authorization IDs and can modify the
IDs to satisfy the site’s security requirements. The exit can also indicate whether
the signon request should be accepted.
For information about authorization IDs and the connection and signon exit routines,
see Appendix B (Volume 2) of DB2 Administration Guide.
Do not mix RRSAF connections with other connection types in a single address
space. The first connection to DB2 made from an address space determines the
type of connection allowed.
Task termination
If an application that is connected to DB2 through RRSAF terminates normally
before the TERMINATE THREAD or TERMINATE IDENTIFY functions deallocate
the plan, then RRS commits any changes made after the last commit point.
In either case, DB2 deallocates the plan, if necessary, and terminates the
application’s connection.
DB2 abend
If DB2 abends while an application is running, DB2 rolls back changes to the last
commit point. If DB2 terminates while processing a commit request, DB2 either
commits or rolls back any changes at the next restart. The action taken depends on
the state of the commit request when DB2 terminates.
RRSAF function descriptions
To code RRSAF functions in C, COBOL, Fortran, or PL/I, follow the individual
language’s rules for making calls to assembler language routines. Specify the return
code and reason code parameters in the parameter list for each RRSAF call.
Register conventions
Table 141 summarizes the register conventions for RRSAF calls.
If you do not specify the return code and reason code parameters in your RRSAF
calls, RRSAF puts a return code in register 15 and a reason code in register 0. If
you specify the return code and reason code parameters, RRSAF places the return
code in register 15 and in the return code parameter to accommodate high-level
languages that support special return code processing. RRSAF preserves the
contents of registers 2 through 14.
Table 141. Register conventions for RRSAF calls
Register Usage
R1 Parameter list pointer
R13 Address of caller’s save area
R14 Caller’s return address
R15 RRSAF entry point address
In an assembler language call, code a comma for a parameter in the CALL DSNRLI
statement when you want to use the default value for that parameter and specify
subsequent parameters. For example, code an IDENTIFY call like this to specify all
parameters except the return code parameter:
CALL DSNRLI,(IDFYFN,SSNM,RIBPTR,EIBPTR,TERMECB,STARTECB,,REASCODE)
For all languages: When you code CALL DSNRLI statements in any language,
specify all parameters that come before the return code parameter. You cannot omit
any of those parameters by coding zeros or blanks. There are no defaults for those
parameters.
For all languages except assembler language: Code 0 for an optional parameter
in the CALL DSNRLI statement when you want to use the default value for that
parameter but specify subsequent parameters. For example, suppose you are
coding an IDENTIFY call in a COBOL program. You want to specify all parameters
except the return code parameter. Write the call in this way:
CALL ’DSNRLI’ USING IDFYFN SSNM RIBPTR EIBPTR TERMECB STARTECB
BY CONTENT ZERO BY REFERENCE REASCODE.
“DSNRLI IDENTIFY function” shows the syntax for the IDENTIFY function.
[Syntax diagram not reproduced: the optional parameters retcode, reascode, and groupoverride follow the required parameters.]
If the EIB is not available (for example, if ssnm names a subsystem that does
not exist), RRSAF sets the 4-byte area to zeros.
The area to which eibptr points is above the 16-MB line.
This parameter is required, although the application does not need to refer to
the returned information.
termecb
The address of the application’s event control block (ECB) used for DB2
termination. DB2 posts this ECB when the system operator enters the
command STOP DB2 or when DB2 is terminating abnormally. Specify a value
of 0 if you do not want to use a termination ECB.
RRSAF puts a POST code in the ECB to indicate the type of termination as
shown in Table 142.
Table 142. Post codes for types of DB2 termination
POST code Termination type
8 QUIESCE
12 FORCE
16 ABTERM
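A monitoring task that waits on the termination ECB needs to interpret these POST codes. The following sketch (illustrative only; the function name is invented) maps the codes from Table 142 to the termination type they signal:

```c
#include <string.h>

/* Map the termination-ECB POST codes from Table 142 to the type of
 * DB2 termination they indicate. */
const char *db2_termination_type(int postcode) {
    switch (postcode) {
    case 8:  return "QUIESCE";  /* STOP DB2 MODE=QUIESCE */
    case 12: return "FORCE";    /* STOP DB2 MODE=FORCE   */
    case 16: return "ABTERM";   /* abnormal termination  */
    default: return "UNKNOWN";  /* not listed in Table 142 */
    }
}
```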
startecb
The address of the application’s startup ECB. If DB2 has not started when the
application issues the IDENTIFY call, DB2 posts the ECB when DB2 startup
has completed. Enter a value of zero if you do not want to use a startup ECB.
DB2 posts a maximum of one startup ECB per address space. The ECB posted
is associated with the most recent IDENTIFY call from that address space. The
application program must examine any nonzero RRSAF or DB2 reason codes
before issuing a WAIT on this ECB.
retcode
A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which RRSAF places a reason code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the reason code in register 0.
If you specify this parameter, you must also specify retcode or its default (by
specifying a comma or zero, depending on the language).
groupoverride
An 8-byte area that the application provides. This field is optional. If it is
provided, it must contain the string ’NOGROUP’. This string indicates that the
subsystem name that is specified by ssnm is to be used as a DB2 subsystem
name, even if ssnm matches a group attachment name. If groupoverride is not
provided, ssnm is used as the group attachment name if it matches a group
attachment name. If you specify this parameter in any language except
assembler, you must also specify the return code and reason code parameters.
In assembler language, you can omit the return code and reason code
parameters by specifying commas as place-holders.
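The groupoverride rule can be expressed as a small decision function. This sketch is illustrative only (the flag and function names are invented; whether ssnm matches a group attachment name is something DB2 determines, modeled here as a boolean input):

```c
#include <string.h>

/* Returns 1 when ssnm should be resolved as a group attachment name,
 * 0 when it should be taken as a plain DB2 subsystem name. */
int resolve_as_group_attach(int ssnm_matches_group_name,
                            const char *groupoverride /* may be NULL */) {
    if (groupoverride != NULL && strncmp(groupoverride, "NOGROUP", 7) == 0)
        return 0;                      /* force plain subsystem-name lookup */
    return ssnm_matches_group_name;    /* default: group name wins on match */
}
```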
During IDENTIFY processing, DB2 determines whether the user address space is
authorized to connect to DB2. DB2 invokes the z/OS SAF and passes a primary
authorization ID to SAF. That authorization ID is the 7-byte user ID associated with
the address space, unless an authorized function has built an ACEE for the address
space. If an authorized function has built an ACEE, DB2 passes the 8-byte user ID
from the ACEE. SAF calls an external security product, such as RACF, to determine
if the task is authorized to use:
v The DB2 resource class (CLASS=DSNR)
v The DB2 subsystem (SUBSYS=ssnm)
v Connection type RRSAF
If that check is successful, DB2 calls the DB2 connection exit to perform additional
verification and possibly change the authorization ID. DB2 then sets the connection
name to RRSAF and the connection type to RRSAF.
SWITCH TO is useful only after a successful IDENTIFY call. If you have established
a connection with one DB2 subsystem, then you must issue SWITCH TO before
you make an IDENTIFY call to another DB2 subsystem.
“DSNRLI SWITCH TO function” shows the syntax for the SWITCH TO function.
[Syntax diagram not reproduced: the optional parameters retcode, reascode, and groupoverride follow the required parameters.]
This example shows how you can use SWITCH TO to interact with three DB2
subsystems.
RRSAF calls for subsystem db21:
IDENTIFY
SIGNON
CREATE THREAD
Execute SQL on subsystem db21
SWITCH TO db22
RRSAF calls on subsystem db22:
IDENTIFY
SIGNON
CREATE THREAD
Execute SQL on subsystem db22
SWITCH TO db23
RRSAF calls on subsystem db23:
IDENTIFY
SIGNON
CREATE THREAD
Execute SQL on subsystem db23
SWITCH TO db21
Execute SQL on subsystem db21
SWITCH TO db22
Execute SQL on subsystem db22
SWITCH TO db21
Execute SQL on subsystem db21
SRRCMIT (to commit the UR)
SWITCH TO db23
Execute SQL on subsystem db23
SWITCH TO db22
Execute SQL on subsystem db22
SWITCH TO db21
Execute SQL on subsystem db21
SRRCMIT (to commit the UR)
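The key point of the sequence is that work done on any of the three subsystems joins a single RRS unit of recovery, and one SRRCMIT commits all of it. This toy model in C (not RRSAF code; all names are invented) illustrates that behavior:

```c
#include <string.h>

/* Toy model: uncommitted work accumulates per subsystem within one UR,
 * and a single SRRCMIT-style commit clears the whole UR at once. */
struct ur_model {
    int uncommitted_db21, uncommitted_db22, uncommitted_db23;
};

void do_sql(struct ur_model *ur, const char *ssnm) {
    if (strcmp(ssnm, "db21") == 0)      ur->uncommitted_db21++;
    else if (strcmp(ssnm, "db22") == 0) ur->uncommitted_db22++;
    else                                ur->uncommitted_db23++;
}

/* Commits the whole UR; returns the number of work items committed. */
int srrcmit_model(struct ur_model *ur) {
    int n = ur->uncommitted_db21 + ur->uncommitted_db22
          + ur->uncommitted_db23;
    ur->uncommitted_db21 = ur->uncommitted_db22 = ur->uncommitted_db23 = 0;
    return n;
}
```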
Table 144. Examples of RRSAF SWITCH TO calls (continued)
Language Call example
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in
your C, C++, and PL/I applications:
C #pragma linkage(dsnrli, OS)
C++ extern "OS" {
int DSNRLI(
char * functn,
...); }
PL/I DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
“DSNRLI SIGNON function” shows the syntax for the SIGNON function.
(Syntax diagram not reproduced here; the optional parameters of the SIGNON call are retcode, reascode, user, appl, ws, and xid.)
becomes part of the associated global transaction. Otherwise,
RRS generates a new global transaction ID. The 1 value must
be specified as a binary integer.
address The 4-byte address of an area into which you enter a global
transaction ID for the thread. If the global transaction ID already
exists, the thread becomes part of the associated global
transaction. Otherwise, RRS creates a new global transaction
with the ID that you specify.
A DB2 thread that is part of a global transaction can share locks with other DB2
threads that are part of the same global transaction and can access and modify
the same data. A global transaction exists until one of the threads that is part of
the global transaction is committed or rolled back.
See z/OS Security Server RACF Macros and Interfaces for more information about
the RACROUTE macro.
Generally, you issue a SIGNON call after an IDENTIFY call and before a CREATE
THREAD call. You can also issue a SIGNON call if the application is at a point of
consistency, and one of the following conditions is true:
v The value of reuse in the CREATE THREAD call was RESET.
v The value of reuse in the CREATE THREAD call was INITIAL, no held cursors
are open, the package or plan is bound with KEEPDYNAMIC(NO), and all
special registers are at their initial state. If there are open held cursors or the
package or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is permitted
only if the primary authorization ID has not changed.
“DSNRLI AUTH SIGNON function” shows the syntax for the AUTH SIGNON
function.
(Syntax diagram not reproduced here; the optional parameters of the AUTH SIGNON call are retcode, reascode, user, appl, ws, and xid.)
correlation ID to correlate work units. This token appears in output from the
command DISPLAY THREAD. If you do not want to specify a correlation ID, fill
the 12-byte area with blanks.
accounting-token
A 22-byte area in which you can put a value for a DB2 accounting token. This
| value is displayed in DB2 accounting and statistics trace records. Setting the
| value of the accounting token sets the value of the CLIENT ACCTG special
| register. If you do not want to specify an accounting token, fill the 22-byte area
with blanks.
accounting-interval
A 6-byte area with which you can control when DB2 writes an accounting
record. If you specify COMMIT in that area, then DB2 writes an accounting
record each time the application issues SRRCMIT. If you specify any other
value, DB2 writes an accounting record when the application terminates or
when you call SIGNON with a new authorization ID.
primary-authid
An 8-byte area in which you can put a primary authorization ID. If you are not
passing the authorization ID to DB2 explicitly, put X'00' or a blank in the first
byte of the area.
ACEE-address
The 4-byte address of an ACEE that you pass to DB2. If you do not want to
provide an ACEE, specify 0 in this field.
secondary-authid
An 8-byte area in which you can put a secondary authorization ID. If you do not
pass the authorization ID to DB2 explicitly, put X'00' or a blank in the first byte
of the area. If you enter a secondary authorization ID, you must also enter a
primary authorization ID.
retcode
A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which RRSAF places the reason code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the reason code in register 0.
If you specify this parameter, you must also specify retcode.
user
A 16-byte area that contains the user ID of the client end user. You can use this
parameter to provide the identity of the client end user for accounting and
monitoring purposes. DB2 displays this user ID in DISPLAY THREAD output
| and in DB2 accounting and statistics trace records. Setting the user ID sets the
| value of the CURRENT CLIENT_USERID special register. If user is less than
16 characters long, you must pad it on the right with blanks to a length of 16
characters.
This field is optional. If specified, you must also specify retcode and reascode. If
not specified, no user ID is associated with the connection. You can omit this
parameter by specifying a value of 0.
appl
A 32-byte area that contains the application or transaction name of the end user's application.
A DB2 thread that is part of a global transaction can share locks with other DB2
threads that are part of the same global transaction and can access and modify
the same data. A global transaction exists until one of the threads that is part of
the global transaction is committed or rolled back.
Generally, you issue an AUTH SIGNON call after an IDENTIFY call and before a
CREATE THREAD call. You can also issue an AUTH SIGNON call if the application
is at a point of consistency, and one of the following conditions is true:
v The value of reuse in the CREATE THREAD call was RESET.
v The value of reuse in the CREATE THREAD call was INITIAL, no held cursors
are open, the package or plan is bound with KEEPDYNAMIC(NO), and all
special registers are at their initial state. If there are open held cursors or the
package or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is permitted
only if the primary authorization ID has not changed.
“DSNRLI CONTEXT SIGNON function” shows the syntax for the CONTEXT
SIGNON function.
CALL DSNRLI ( ... )
(Syntax diagram not reproduced here; the optional parameters of the CONTEXT SIGNON call are retcode, reascode, user, appl, ws, and xid.)
This field is optional. If specified, you must also specify retcode, reascode, and
user. If not specified, no application or transaction is associated with the
connection. You can omit this parameter by specifying a value of 0.
ws An 18-byte area that contains the workstation name of the client end user. You
can use this parameter to provide the identity of the client end user for
accounting and monitoring purposes. DB2 displays the workstation name in the
DISPLAY THREAD output and in DB2 accounting and statistics trace records.
Setting the workstation name sets the value of the CURRENT
CLIENT_WRKSTNNAME special register. If ws is less than 18 characters long,
you must pad it on the right with blanks to a length of 18 characters.
This field is optional. If specified, you must also specify retcode, reascode, user,
and appl. If not specified, no workstation name is associated with the
connection.
xid
A 4-byte area into which you put one of the following values:
0 Indicates that the thread is not part of a global transaction. The
0 value must be specified as a binary integer.
1 Indicates that the thread is part of a global transaction and that
DB2 should retrieve the global transaction ID from RRS. If a
global transaction ID already exists for the task, the thread
becomes part of the associated global transaction. Otherwise,
RRS generates a new global transaction ID. The 1 value must
be specified as a binary integer.
address The 4-byte address of an area into which you enter a global
transaction ID for the thread. If the global transaction ID already
exists, the thread becomes part of the associated global
transaction. Otherwise, RRS creates a new global transaction
with the ID that you specify. The global transaction ID has the
format shown in Table 145 on page 848.
A DB2 thread that is part of a global transaction can share locks with other DB2
threads that are part of the same global transaction and can access and modify
the same data. A global transaction exists until one of the threads that is part of
the global transaction is committed or rolled back.
Usage: CONTEXT SIGNON relies on the RRS context services functions Set
Context Data (CTXSDTA) and Retrieve Context Data (CTXRDTA). Before you
invoke CONTEXT SIGNON, you must have called CTXSDTA to store a primary
authorization ID and optionally, the address of an ACEE in the context data whose
context key you supply as input to CONTEXT SIGNON.
If the new primary authorization ID is the same as the current primary
authorization ID (established at IDENTIFY time or at a previous SIGNON
invocation), DB2 invokes only the signon exit. If the value has changed, then DB2
establishes a new primary authorization ID and new SQL authorization ID and then
invokes the signon exit.
If you pass an ACEE address, then CONTEXT SIGNON uses the value in
ACEEGRPN as the secondary authorization ID if the length of the group name
(ACEEGRPL) is not 0.
Generally, you issue a CONTEXT SIGNON call after an IDENTIFY call and before a
CREATE THREAD call. You can also issue a CONTEXT SIGNON call if the
application is at a point of consistency, and one of the following conditions is true:
v The value of reuse in the CREATE THREAD call was RESET.
v The value of reuse in the CREATE THREAD call was INITIAL, no held cursors
are open, the package or plan is bound with KEEPDYNAMIC(NO), and all
special registers are at their initial state. If there are open held cursors or the
package or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is permitted
only if the primary authorization ID has not changed.
Table 148. Examples of RRSAF CONTEXT SIGNON calls (continued)
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in
your C, C++, and PL/I applications:
C    #pragma linkage(dsnrli, OS)
C++  extern "OS" { int DSNRLI(char * functn, ...); }
PL/I DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
Usage: SET_ID establishes a new value for program-id that can be used to
identify the end user. The calling program defines the contents of program-id. DB2
places the contents of program-id into IFCID 316 records, along with other
statistics, so that you can identify which program is associated with a particular SQL
statement.
| user
| A 16-byte area that contains the user ID of the client end user. You can use this
| parameter to provide the identity of the client end user for accounting and
| monitoring purposes. DB2 places this user ID in DISPLAY THREAD output and
| in DB2 accounting and statistics trace records. Setting the user ID sets the
| value of the CURRENT CLIENT_USERID special register. If user is less than
| 16 characters long, you must pad it on the right with blanks to a length of 16
| characters.
| This parameter is optional. You can omit this parameter by specifying a value of
| 0 in the parameter list.
| You can retrieve the client user ID from the CURRENT CLIENT_USERID
| special register.
| appl
| A 32-byte area that contains the application or transaction name of the end
| user's application. You can use this parameter to provide the identity of the
| client end user for accounting and monitoring purposes. DB2 places the
| application name in DISPLAY THREAD output and in DB2 accounting and
| statistics trace records. Setting the application name sets the value of the
| CURRENT CLIENT_APPLNAME special register. If appl is less than 32
| characters, you must pad it with blanks on the right to a length of 32 characters.
| This parameter is optional. You can omit this parameter by specifying a value of
| 0 in the parameter list.
| You can retrieve the application name from the CURRENT
| CLIENT_APPLNAME special register.
| ws
| An 18-byte area that contains the workstation name of the client end user. You
| can use this parameter to provide the identity of the client end user for
| accounting and monitoring purposes. DB2 places this workstation name in
| DISPLAY THREAD output and in DB2 accounting and statistics trace records.
| Setting the workstation name sets the value of the CURRENT
| CLIENT_WRKSTNNAME special register. If ws is less than 18 characters, you
| must pad it with blanks on the right to a length of 18 characters.
| This parameter is optional. You can omit this parameter by specifying a value of
| 0 in the parameter list.
| You can retrieve the workstation name from the CURRENT
| CLIENT_WRKSTNNAME special register.
| retcode
| A 4-byte area in which RRSAF places the return code.
| This parameter is optional. If you do not specify this parameter, RRSAF places
| the return code in register 15 and the reason code in register 0.
| reascode
| A 4-byte area in which RRSAF places the reason code.
| This parameter is optional. If you do not specify this parameter, RRSAF places
| the reason code in register 0.
| If you specify this parameter, you must also specify retcode.
| Usage: SET_CLIENT_ID establishes new values for the client user ID, application
| program name, workstation name, and accounting token. The calling program
| defines the contents of these parameters. DB2 places the parameter values in
| DISPLAY THREAD output and in DB2 accounting and statistics trace records.
Table 150 shows a SET_CLIENT_ID call in each language.
Table 150. Examples of RRSAF SET_CLIENT_ID calls
Language  Call example
Assembler CALL DSNRLI,(SECLIDFN,ACCT,USER,APPL,WS,RETCODE,REASCODE)
C         fnret=dsnrli(&seclidfn[0], &acct[0], &user[0], &appl[0], &ws[0], &retcode, &reascode);
COBOL     CALL 'DSNRLI' USING SECLIDFN ACCT USER APPL WS RETCODE REASCODE.
Fortran   CALL DSNRLI(SECLIDFN,ACCT,USER,APPL,WS,RETCODE,REASCODE)
PL/I      CALL DSNRLI(SECLIDFN,ACCT,USER,APPL,WS,RETCODE,REASCODE);
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in
your C, C++, and PL/I applications:
C    #pragma linkage(dsnrli, OS)
C++  extern "OS" { int DSNRLI(char * functn, ...); }
PL/I DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
“DSNRLI CREATE THREAD function” shows the syntax of the CREATE THREAD
function.
(Syntax diagram not reproduced here; the optional parameters of the CREATE THREAD call are retcode, reascode, and pklistptr.)
If you provide a plan name in the plan field, DB2 ignores the value in this field.
reuse
An 8-byte area that controls the action DB2 takes if a SIGNON call is issued
after a CREATE THREAD call. Specify either of these values in this field:
v RESET - to release any held cursors and reinitialize the special registers
v INITIAL - to disallow the SIGNON
This parameter is required. If the 8-byte area does not contain either RESET or
INITIAL, then the default value is INITIAL.
retcode
A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which RRSAF places the reason code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the reason code in register 0.
If you specify this parameter, you must also specify retcode.
pklistptr
A 4-byte field that can contain a pointer to a user-supplied data area that
| contains a list of collection IDs. A collection ID is an SQL identifier of 1 to 128
| letters, digits, or the underscore character that identifies a collection of
| packages. The length of the data area is a maximum of 2050 bytes. The data
| area contains a 2-byte length field, followed by up to 2048 bytes of collection ID
| entries, separated by commas.
When you specify a pointer to a set of collection IDs (in the pklistptr parameter)
and the character ? in the plan parameter, DB2 allocates a plan named
?RRSAF and a package list in the data area that pklistptr points to. If you also
specify a value for the collection parameter, DB2 ignores that value.
Each collection entry must be of the form collection-ID.*, *.collection-ID.*, or
*.*.*. The collection-ID must follow the naming conventions for a collection ID,
as specified in Chapter 1 of DB2 Command Reference.
This parameter is optional. If you specify this parameter, you must also specify
retcode and reascode.
If you provide a plan name in the plan field, DB2 ignores the pklistptr value.
Using a package list can have a negative impact on performance. For better
performance, specify a short package list.
Usage: CREATE THREAD allocates the DB2 resources required to issue SQL or
IFI requests. If you specify a plan name, RRSAF allocates the named plan.
If you specify ? in the first byte of the plan name and provide a collection name,
DB2 allocates a special plan named ?RRSAF and a package list that contains the
following entries:
v The collection name
v An entry that contains * for the location, collection ID, and package name
If you specify ? in the first byte of the plan name and specify pklistptr, DB2 allocates
a special plan named ?RRSAF and a package list that contains the following
entries:
The collection names are used to locate a package associated with the first SQL
statement in the program. The entry that contains *.*.* lets the application access
remote locations and access packages in collections other than the default
collection that is specified at create thread time.
The application can use the SQL statement SET CURRENT PACKAGESET to
change the collection ID that DB2 uses to locate a package.
When DB2 allocates a plan named ?RRSAF, DB2 checks authorization to execute
the package in the same way as it checks authorization to execute a package from
a requester other than DB2 UDB for z/OS. See Part 3 (Volume 1) of DB2
Administration Guide for more information about authorization checking for package
execution.
function
An 18-byte area containing TERMINATE THREAD followed by two blanks.
retcode
A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which RRSAF places the reason code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the reason code in register 0.
If you specify this parameter, you must also specify retcode.
“DSNRLI TERMINATE IDENTIFY function” on page 863 shows the syntax of the
TERMINATE IDENTIFY function.
If the application allocated a plan, and you issue TERMINATE IDENTIFY without
first issuing TERMINATE THREAD, DB2 deallocates the plan before terminating the
connection.
Issuing TERMINATE IDENTIFY is optional. If you do not, DB2 performs the same
functions when the task terminates.
If DB2 terminates, the application must issue TERMINATE IDENTIFY to reset the
RRSAF control blocks. This ensures that future connection requests from the task
are successful when DB2 restarts.
Table 153. Examples of RRSAF TERMINATE IDENTIFY calls (continued)
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in
your C, C++, and PL/I applications:
C    #pragma linkage(dsnrli, OS)
C++  extern "OS" { int DSNRLI(char * functn, ...); }
PL/I DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
Issue TRANSLATE only after a successful IDENTIFY operation. For errors that
occur during SQL or IFI requests, DB2 performs the TRANSLATE function automatically.
Usage: Use TRANSLATE to get a corresponding SQL error code and message
text for the DB2 error reason codes that RRSAF returns in register 0 following a
CREATE THREAD service request. DB2 places this information in the SQLCODE
and SQLSTATE host variables or related fields of the SQLCA.
In these tables, the first column lists the most recent RRSAF or DB2 function
executed. The first row lists the next function executed. The contents of the
intersection of a row and column indicate the result of calling the function in the first
column followed by the function in the first row. For example, if you issue
TERMINATE THREAD, then you execute SQL or issue an IFI call, RRSAF returns
reason code X'00C12219'.
Table 155 summarizes RRSAF behavior when the next call is the IDENTIFY,
SWITCH TO, SIGNON, or CREATE THREAD function.
Table 155. Effect of call order when next call is IDENTIFY, SWITCH TO, SIGNON, or CREATE THREAD

Previous function                        IDENTIFY      SWITCH TO        SIGNON, AUTH SIGNON,    CREATE THREAD
                                                                        or CONTEXT SIGNON
Empty: first call                        IDENTIFY      X'00C12205'      X'00C12204'             X'00C12204'
IDENTIFY                                 X'00F30049'   Switch to ssnm   Signon (1)              X'00C12217'
SWITCH TO                                IDENTIFY      Switch to ssnm   Signon (1)              CREATE THREAD
SIGNON, AUTH SIGNON,                     X'00F30049'   Switch to ssnm   Signon (1)              CREATE THREAD
  or CONTEXT SIGNON
CREATE THREAD                            X'00F30049'   Switch to ssnm   Signon (1)              X'00C12202'
TERMINATE THREAD                         X'00C12201'   Switch to ssnm   Signon (1)              CREATE THREAD
IFI                                      X'00F30049'   Switch to ssnm   Signon (1)              X'00C12202'
SQL                                      X'00F30049'   Switch to ssnm   X'00F30092' (2)         X'00C12202'
SRRCMIT or SRRBACK                       X'00F30049'   Switch to ssnm   Signon (1)              X'00C12202'

Notes:
1. Signon means the signon to DB2 through either SIGNON, AUTH SIGNON, or CONTEXT SIGNON.
2. SIGNON, AUTH SIGNON, or CONTEXT SIGNON are not allowed if any SQL operations are requested after
CREATE THREAD or after the last SRRCMIT or SRRBACK request.
Table 156 summarizes RRSAF behavior when the next call is the SQL or IFI,
TERMINATE THREAD, TERMINATE IDENTIFY, or TRANSLATE function.
Table 156. Effect of call order when next call is SQL or IFI, TERMINATE THREAD, TERMINATE IDENTIFY, or TRANSLATE

Previous function                        SQL or IFI            TERMINATE THREAD   TERMINATE IDENTIFY   TRANSLATE
Empty: first call                        SQL or IFI call (3)   X'00C12204'        X'00C12204'          X'00C12204'
IDENTIFY                                 SQL or IFI call (3)   X'00C12203'        TERMINATE IDENTIFY   TRANSLATE
SWITCH TO                                SQL or IFI call (3)   TERMINATE THREAD   TERMINATE IDENTIFY   TRANSLATE
SIGNON, AUTH SIGNON,                     SQL or IFI call (3)   TERMINATE THREAD   TERMINATE IDENTIFY   TRANSLATE
  or CONTEXT SIGNON
CREATE THREAD                            SQL or IFI call (3)   TERMINATE THREAD   TERMINATE IDENTIFY   TRANSLATE
TERMINATE THREAD                         SQL or IFI call (3)   X'00C12203'        TERMINATE IDENTIFY   TRANSLATE
IFI                                      SQL or IFI call (3)   TERMINATE THREAD   TERMINATE IDENTIFY   TRANSLATE
SQL                                      SQL or IFI call (3)   X'00F30093' (1)    X'00F30093' (2)      TRANSLATE
SRRCMIT or SRRBACK                       SQL or IFI call (3)   TERMINATE THREAD   TERMINATE IDENTIFY   TRANSLATE

Notes:
1. TERMINATE THREAD is not allowed if any SQL operations are requested after CREATE THREAD or after the last
SRRCMIT or SRRBACK request.
2. TERMINATE IDENTIFY is not allowed if any SQL operations are requested after CREATE THREAD or after the
last SRRCMIT or SRRBACK request.
3. Using implicit connect with SQL or IFI calls causes RRSAF to issue an implicit IDENTIFY and CREATE THREAD.
If you continue with explicit RRSAF statements after an implicit connect, you must follow the standard order of
explicit RRSAF calls. Implicit connect does not issue a SIGNON. Therefore, you might need to issue an explicit
SIGNON to satisfy the standard order requirement. For example, an SQL statement followed by an explicit
TERMINATE THREAD requires an explicit SIGNON before issuing the TERMINATE THREAD.
A single task
This example shows a single task running in an address space. z/OS RRS controls
commit processing when the task terminates normally.
IDENTIFY
SIGNON
CREATE THREAD
SQL or IFI calls
.
.
.
TERMINATE IDENTIFY
Multiple tasks
This example shows multiple tasks in an address space. Task 1 executes no SQL
statements and makes no IFI calls. Its purpose is to monitor DB2 termination and
startup ECBs and to check the DB2 release level.
(Figure not reproduced: RRSAF call sequences for TASK 1, TASK 2, TASK 3, through TASK n.)
When the SQL operations complete, both tasks perform RRS context switch
operations. Those operations disconnect each DB2 thread from the task under
which it was running.
v Task 1 then creates context c, identifies to the subsystem, performs a context
switch to make context c active for task 1, then allocates a thread for user C and
performs SQL operations for user C.
Task 2 does the same for user D.
When the SQL operations for user C complete, task 1 performs a context switch
operation to:
– Switch the thread for user C away from task 1.
– Switch the thread for user B to task 1.
For a context switch operation to associate a task with a DB2 thread, the DB2
thread must have previously performed an identify operation. Therefore, before
the thread for user B can be associated with task 1, task 1 must have performed
an identify operation.
v Task 2 performs two context switch operations to:
– Disassociate the thread for user D from task 2.
– Associate the thread for user A with task 2.
(Figure not reproduced: RRSAF calls for Task 1 and Task 2.)
When the reason code begins with X'00F3' (except for X'00F30006'), you can use
the RRSAF TRANSLATE function to obtain error message text that can be printed
and displayed.
For SQL calls, RRSAF returns standard SQL return codes in the SQLCA. See Part
1 of DB2 Messages and Codes for a list of those return codes and their meanings.
RRSAF returns IFI return codes and reason codes in the instrumentation facility
communication area (IFCA).
Program examples
This section contains sample JCL for running an RRSAF application and assembler
code for accessing RRSAF.
//SYSPRINT DD SYSOUT=*
//DSNRRSAF DD DUMMY
//SYSUDUMP DD SYSOUT=*
Delete the loaded modules when the application no longer needs to access DB2.
****************************** GET LANGUAGE INTERFACE ENTRY ADDRESSES
LOAD EP=DSNRLI Load the RRSAF service request EP
ST R0,LIRLI Save this for RRSAF service requests
LOAD EP=DSNHLIR Load the RRSAF SQL call Entry Point
ST R0,LISQL Save this for SQL calls
* .
* . Insert connection service requests and SQL calls here
* .
DELETE EP=DSNRLI Correctly maintain use count
DELETE EP=DSNHLIR Correctly maintain use count
application that calls this intermediate subroutine uses 24-bit addressing, the
intermediate subroutine must account for the difference.
In the example that follows, LISQL is addressable because the calling CSECT used
the same register 12 as CSECT DSNHLI. Your application must also establish
addressability to LISQL.
***********************************************************************
* Subroutine DSNHLI intercepts calls to LI EP=DSNHLI
***********************************************************************
DS 0D
DSNHLI CSECT Begin CSECT
STM R14,R12,12(R13) Prologue
LA R15,SAVEHLI Get save area address
ST R13,4(,R15) Chain the save areas
ST R15,8(,R13) Chain the save areas
LR R13,R15 Put save area address in R13
L R15,LISQL Get the address of real DSNHLI
BASSM R14,R15 Branch to DSNRLI to do an SQL call
* DSNRLI is in 31-bit mode, so use
* BASSM to assure that the addressing
* mode is preserved.
L R13,4(,R13) Restore R13 (caller's save area addr)
L R14,12(,R13) Restore R14 (return address)
RETURN (1,12) Restore R1-12, NOT R0 and R15 (codes)
The code in Figure 227 does not show a task that waits on the DB2 termination
ECB. You can code such a task and use the z/OS WAIT macro to monitor the ECB.
The task that waits on the termination ECB should detach the sample code if the
termination ECB is posted. That task can also wait on the DB2 startup ECB. The
task in Figure 227 waits on the startup ECB at its own task level.
Figure 228 on page 872 shows declarations for some of the variables that are used
in Figure 227.
****************** VARIABLES SET BY APPLICATION ***********************
LIRLI DS F DSNRLI entry point address
LISQL DS F DSNHLIR entry point address
SSNM DS CL4 DB2 subsystem name for IDENTIFY
CORRID DS CL12 Correlation ID for SIGNON
ACCTTKN DS CL22 Accounting token for SIGNON
ACCTINT DS CL6 Accounting interval for SIGNON
PLAN DS CL8 DB2 plan name for CREATE THREAD
COLLID DS CL18 Collection ID for CREATE THREAD. If
* PLAN contains a plan name, not used.
REUSE DS CL8 Controls SIGNON after CREATE THREAD
CONTROL DS CL8 Action that application takes based
* on return code from RRSAF
****************** VARIABLES SET BY DB2 *******************************
STARTECB DS F DB2 startup ECB
TERMECB DS F DB2 termination ECB
EIBPTR DS F Address of environment info block
RIBPTR DS F Address of release info block
****************************** CONSTANTS ******************************
CONTINUE DC CL8'CONTINUE' CONTROL value: Everything OK
IDFYFN DC CL18'IDENTIFY ' Name of RRSAF service
SGNONFN DC CL18'SIGNON ' Name of RRSAF service
CRTHRDFN DC CL18'CREATE THREAD ' Name of RRSAF service
TRMTHDFN DC CL18'TERMINATE THREAD ' Name of RRSAF service
TMIDFYFN DC CL18'TERMINATE IDENTIFY' Name of RRSAF service
****************************** SQLCA and RIB **************************
EXEC SQL INCLUDE SQLCA
DSNDRIB Map the DB2 Release Information Block
******************* Parameter list for RRSAF calls ********************
RRSAFCLL CALL ,(*,*,*,*,*,*,*,*),VL,MF=L
Figure 228. Declarations for variables used in the RRSAF connection routine
| In addition, you can start and stop the CICS attachment facility from within an
| application program by using the system programming interface SET DB2CONN.
| For more information, see the CICS Transaction Server for z/OS System
| Programming Reference.
One of the most important things you can do to maximize thread reuse is to close
all cursors that you declared WITH HOLD before each sync point, because DB2
does not automatically close them. A thread for an application that contains an open
cursor cannot be reused. It is a good programming practice to close all cursors
immediately after you finish using them. For more information about the effects of
declaring cursors WITH HOLD in CICS applications, see “Held and non-held
cursors” on page 112.
Attention
The stormdrain effect is a condition that occurs when a system continues to
receive work, even though that system is down.
When both of the following conditions are true, the stormdrain effect can
occur:
v The CICS attachment facility is down.
v You are using INQUIRE EXITPROGRAM to avoid AEY9 abends.
For more information on the stormdrain effect and how to avoid it, see Chapter
2 of DB2 Data Sharing: Planning and Administration.
| If the CICS attachment facility is started and you are using standby mode, you do
| not need to test whether the CICS attachment facility is up before executing SQL.
| When an SQL statement is executed, and the CICS attachment facility is in standby
| mode, the attachment issues SQLCODE -923 with a reason code that indicates that
| DB2 is not available. See CICS DB2 Guide for information about the
| STANDBYMODE and CONNECTERROR parameters, and DB2 Messages and
| Codes for an explanation of SQLCODE -923.
| The Application Messaging Interface (AMI) is a commonly used API for WebSphere
| MQ that is available in a number of high-level languages. In addition to the AMI,
| DB2 provides its own application programming interface to the WebSphere MQ
| message handling system through a set of external user-defined functions. Using
| these functions in SQL statements allows you to combine DB2 database access
| with WebSphere MQ message handling.
| Messages
| WebSphere MQ uses messages to pass information between applications.
| Messages consist of the following parts:
| v The message attributes, which identify the message and its properties. The AMI
| uses the attributes and the policy to interpret and construct MQ headers and
| message descriptors.
| v The message data, which is the application data that is carried in the message.
| The AMI does not act on this data.
| Services
| A service describes a destination to which an application sends messages or from
| which an application receives messages. For WebSphere MQ, a destination is
| called a message queue, and a queue resides in a queue manager.
| Applications can put messages on queues or get messages from them by using the
| AMI. A system administrator sets up the parameters for managing a queue, which
| are defined in the service. Therefore, the complexity is hidden from the application
| programmer. An application program selects a service by specifying it as a
| parameter for WebSphere MQ function calls.
| Policies
| A policy controls how the AMI functions operate to handle messages. Policies
| control such items as:
| v The attributes of the message, for example, the priority
| v Options for send and receive operations, for example, whether an operation is
| part of a unit of work
| The AMI provides default policies. Alternatively, a system administrator can define
| customized policies and store them in a repository. An application program can
| specify a policy as a parameter for WebSphere MQ function calls.
|
| Capabilities of WebSphere MQ functions
| The WebSphere MQ functions support the following types of operations:
| v Send and forget, where no reply is needed.
| v Read or receive, where one or all messages are either read without removing
| them from the queue, or received and removed from the queue.
| v Request and response, where a sending application needs a response to a
| request.
| v Publish and subscribe, where messages are assigned to specific publisher
| services and are sent to queues. Applications that subscribe to the corresponding
| subscriber service can monitor specific messages.
| You can use the WebSphere MQ functions to send messages to a message queue
| or to receive messages from the message queue. You can send a request to a
| message queue and receive a response, and you can also publish messages to the
| WebSphere MQ publisher and subscribe to messages that have been published
| with specific topics.
| The WebSphere MQ server is located on the same z/OS system as the DB2
| database server. The MQ functions are registered with the DB2 database server
| and provide access to the WebSphere MQ server by using the AMI. For information
| about installing the WebSphere MQ functions for DB2, see Part 2 of DB2
| Installation Guide.
| The WebSphere MQ functions for DB2 include both scalar functions and table
| functions. Table 158 on page 877 describes the DB2 MQ scalar functions.
| Notes:
| 1. You can send or receive messages in VARCHAR variables or CLOB variables. The maximum length for a
| message in a VARCHAR variable is 4000 bytes. The maximum length for a message in a CLOB variable is 1 MB.
|
| Table 159 on page 878 describes the MQ table functions that DB2 can use.
| Notes:
| 1. You can send or receive messages in VARCHAR variables or CLOB variables. The maximum length for a
| message in a VARCHAR variable is 4000 bytes. The maximum length for a message in a CLOB variable is 1 MB.
| 2. The first column of the result table of a DB2 MQ table function contains the message. For a description of the
| other columns, see DB2 SQL Reference.
|
|
| Commit environment for WebSphere MQ functions
| DB2 provides two versions of commit when you use DB2 MQ functions:
| v A single-phase commit: the schema name when you use functions for this
| version is DB2MQ1C.
| v A two-phase commit: the schema name when you use functions for this version
| is DB2MQ2C.
| You need to assign these two versions to different WLM environments, which
| guarantees that the versions are never invoked from the same address space.
| Single-phase commit
| If your application uses single-phase commit, any DB2 COMMIT or ROLLBACK
| operations are independent of WebSphere MQ operations. If a transaction is rolled
| back, the messages that have been sent to a queue within the current unit of work
| are not discarded.
| Two-phase commit
| If your application uses two-phase commit, RRS coordinates the commit process. If
| a transaction is rolled back, the messages that have been sent to a queue within
| the current unit of work are discarded.
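The practical difference shows up when a unit of work is rolled back. The following sketch uses the default service and policy; note that the two schemas would normally be used by different applications, because they must run in different WLM environments:

```sql
-- Two-phase commit (DB2MQ2C): the send is coordinated with the DB2 unit of work
SELECT DB2MQ2C.MQSEND ('Status update')
  FROM SYSIBM.SYSDUMMY1;
ROLLBACK;   -- the message is discarded along with the DB2 changes

-- Single-phase commit (DB2MQ1C): the send is independent of the unit of work
SELECT DB2MQ1C.MQSEND ('Status update')
  FROM SYSIBM.SYSDUMMY1;
ROLLBACK;   -- the message has already been sent and is not discarded
```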
| Basic messaging
| The most basic form of messaging with the DB2 MQ functions occurs when all
| database applications connect to the same DB2 database server. Clients can be
| local to the database server or distributed in a network environment.
| Sending messages
| When you use MQSEND, you choose what data to send, where to send it, and
| when to send it. This type of messaging is called send and forget; the sender only
| sends a message, relying on WebSphere MQ to ensure that the message reaches
| its destination.
| The following examples use the DB2MQ2C schema for two-phase commit, with the
| default service DB2.DEFAULT.SERVICE and the default policy
| DB2.DEFAULT.POLICY. For more information about two-phase commit, see
| “Commit environment for WebSphere MQ functions” on page 878.
| Example: The following SQL SELECT statement sends a message that consists of
| the string 'Testing msg':
| SELECT DB2MQ2C.MQSEND ('Testing msg')
| FROM SYSIBM.SYSDUMMY1;
| COMMIT;
| When you use single-phase commit, you do not need to use a COMMIT statement.
| For example:
| SELECT DB2MQ1C.MQSEND ('Testing msg')
| FROM SYSIBM.SYSDUMMY1;
| Retrieving messages
| The DB2 MQ functions allow messages to be either read or received. The
| difference between reading and receiving is that reading returns the message at the
| head of a queue without removing it from the queue, whereas receiving causes the
| message to be removed from the queue. A message that is retrieved using a
| receive operation can be retrieved only once, whereas a message that is retrieved
| using a read operation allows the same message to be retrieved many times.
| The following examples use the DB2MQ2C schema for two-phase commit, with the
| default service DB2.DEFAULT.SERVICE and the default policy
| DB2.DEFAULT.POLICY. For more information about two-phase commit, see
| “Commit environment for WebSphere MQ functions” on page 878.
| Example: The following SQL SELECT statement reads the message at the head of
| the queue that is specified by the default service and policy:
| SELECT DB2MQ2C.MQREAD()
| FROM SYSIBM.SYSDUMMY1;
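By contrast, a receive operation removes the message from the queue. A sketch, again using the default service and policy:

```sql
-- Removes the message at the head of the queue and returns its contents;
-- executing this again would return the next message in the queue
SELECT DB2MQ2C.MQRECEIVE()
  FROM SYSIBM.SYSDUMMY1;
```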
| Example: The following SQL SELECT statement causes the contents of a queue to
| be materialized as a DB2 table:
| SELECT T.*
| FROM TABLE(DB2MQ2C.MQREADALL()) T;
| The result table T of the table function consists of all the messages in the queue,
| which is defined by the default service, and the metadata about those messages.
| The first column of the materialized result table is the message itself, and the
| remaining columns contain the metadata. The SELECT statement returns both the
| messages and the metadata.
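To keep only the messages, select just the first result column; to store them in a DB2 table, combine the table function with an INSERT statement. A sketch, assuming a hypothetical table MYMSGS with a single VARCHAR column MSG:

```sql
-- MQRECEIVEALL removes all messages from the default service queue;
-- the first column of the result table (MSG) is the message itself
INSERT INTO MYMSGS (MSG)
  SELECT T.MSG
    FROM TABLE(DB2MQ2C.MQRECEIVEALL()) T;
```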
| Application-to-application connectivity
| Application-to-application connectivity is typically used to solve the problem of
| putting together a diverse set of application subsystems. To facilitate application
| integration, WebSphere MQ provides the means to interconnect applications. This
| section describes two common scenarios:
| v Request-and-reply communication method
| v Publish-and-subscribe method
| The following examples use the DB2MQ1C schema for single-phase commit. For
| more information about single-phase commit, see “Commit environment for
| WebSphere MQ functions” on page 878.
| Example: The following SQL SELECT statement receives the first message that
| matches the identifier CORRID1 from the queue that is specified by the service
| MYSERVICE, using the policy MYPOLICY:
| SELECT DB2MQ1C.MQRECEIVE ('MYSERVICE', 'MYPOLICY', 'CORRID1')
| FROM SYSIBM.SYSDUMMY1;
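A complete request-and-reply exchange pairs a send that tags the outgoing message with a correlation identifier with a later receive for that identifier. A sketch; the service, policy, and correlation id values are illustrative:

```sql
-- Requester: send the request, tagged with correlation id CORRID1
SELECT DB2MQ1C.MQSEND ('MYSERVICE', 'MYPOLICY',
                       'Price query for part 4711', 'CORRID1')
  FROM SYSIBM.SYSDUMMY1;

-- Later: receive only the reply that carries CORRID1
SELECT DB2MQ1C.MQRECEIVE ('MYSERVICE', 'MYPOLICY', 'CORRID1')
  FROM SYSIBM.SYSDUMMY1;
```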
| Publish-and-subscribe method
| Another common method of application integration is for one application to notify
| other applications about events of interest. An application can do this by sending a
| message to a queue that is monitored by other applications. The message can
| contain a user-defined string or can be composed from database columns.
| Simple data publication: In many cases, only a simple message needs to be sent
| using the MQSEND function. When a message needs to be sent to multiple
| recipients concurrently, the distribution list facility of the MQSeries® AMI can be
| used.
| You define distribution lists by using the AMI administration tool. A distribution list
| comprises a list of individual services. A message that is sent to a distribution list is
| forwarded to every service defined within the list. Publishing messages to a
| distribution list is especially useful when there are multiple services that are
| interested in every message.
| Example: The following example shows how to send a message to the distribution
| list "InterestedParties":
| SELECT DB2MQ2C.MQSEND ('InterestedParties','Information of general interest')
| FROM SYSIBM.SYSDUMMY1;
| When you require more control over the messages that a particular service should
| receive, you can use the MQPUBLISH function, in conjunction with the WebSphere
| MQSeries Integrator facility. This facility provides a publish-and-subscribe system,
| which provides a scalable, secure environment in which many subscribers can
| register to receive messages from multiple publishers. Subscribers are defined by
| queues, which are represented by service names.
| MQPUBLISH allows you to specify a list of topics that are associated with a
| message. Topics allow subscribers to more clearly specify the messages they
| receive. The following sequence illustrates how the publish-and-subscribe
| capabilities are used:
| 1. An MQSeries administrator configures the publish-and-subscribe capability of
| the WebSphere MQSeries Integrator facility.
| Example: To publish the last name, first name, department, and age of employees
| who are in department 5LGA, using all the defaults and a topic of EMP, you can
| use the following statement:
| SELECT DB2MQ2C.MQPUBLISH (LASTNAME || ' ' || FIRSTNAME || ' ' ||
| DEPARTMENT || ' ' || CHAR(AGE), 'EMP')
| FROM DSN8810.EMP
| WHERE DEPARTMENT = ’5LGA’;
| Example: The following statement publishes messages that contain only the last
| name of employees who are in department 5LGA to the HR_INFO_PUB publisher
| service using the SPECIAL_POLICY service policy:
| SELECT DB2MQ2C.MQPUBLISH ('HR_INFO_PUB', 'SPECIAL_POLICY', LASTNAME,
| 'ALL_EMP:5LGA', 'MANAGER')
| FROM DSN8810.EMP
| WHERE DEPARTMENT = ’5LGA’;
| The messages indicate that the sender has the MANAGER correlation id. The topic
| string demonstrates that multiple topics, concatenated with a ':' (a colon), can
| be specified. In this example, the use of two topics allows subscribers of both the
| ALL_EMP and the 5LGA topics to receive these messages.
| To receive published messages, you must first register your application’s interest in
| messages of a given topic and indicate the name of the subscriber service to which
| messages are sent. An AMI subscriber service defines a broker service and a
| receiver service. The broker service is how the subscriber communicates with the
| publish-and-subscribe broker. The receiver service is the location where messages
| that match the subscription request are sent.
| Example: The following statement subscribes to the topic ALL_EMP and indicates
| that messages be sent to the subscriber service "aSubscriber":
| SELECT DB2MQ2C.MQSUBSCRIBE ('aSubscriber','ALL_EMP')
| FROM SYSIBM.SYSDUMMY1;
| To display both the messages and the topics with which they are published, you
| can use one of the table functions.
| Example: The following statement receives the first five messages from
| "aSubscriberReceiver" and displays both the message and the topic for each of the
| five messages:
| SELECT t.msg, t.topic
| FROM table (DB2MQ2C.MQRECEIVEALL (’aSubscriberReceiver’,5)) t;
| Example: To read all of the messages with the topic ALL_EMP, issue the following
| statement:
| SELECT t.msg
| FROM table (DB2MQ2C.MQREADALL (’aSubscriberReceiver’)) t
| WHERE t.topic = ’ALL_EMP’;
| Note: If you use MQRECEIVEALL with a constraint, your application receives the
| entire queue, not just those messages that are published with the topic ALL_EMP.
| This is because the table function is performed before the constraint is applied.
| Example: The following statement unsubscribes from the ALL_EMP topic of the
| "aSubscriber" subscriber service:
| SELECT DB2MQ2C.MQUNSUBSCRIBE ('aSubscriber', 'ALL_EMP')
| FROM SYSIBM.SYSDUMMY1;
| After you issue the preceding statement, the publish-and-subscribe broker no longer
| delivers messages that match the ALL_EMP topic to the "aSubscriber" subscriber
| service.
| Example: The following example shows how you can use the MQSeries functions
| of DB2 UDB for z/OS with a trigger to publish a message each time a new
| employee is hired:
| CREATE TRIGGER new_employee AFTER INSERT ON DSN8810.EMP
| REFERENCING NEW AS n
| FOR EACH ROW MODE DB2SQL
| SELECT DB2MQ2C.MQPUBLISH ('HR_INFO_PUB', CURRENT DATE || ' ' ||
| n.LASTNAME || ' ' || n.DEPARTMENT, 'NEW_EMP');
Answer: Add a column with the data type ROWID or an identity column. ROWID
columns and identity columns contain a unique value for each row in the table. You
can define the column as GENERATED ALWAYS, which means that you cannot
insert values into the column, or GENERATED BY DEFAULT, which means that
DB2 generates a value if you do not specify one. If you define the ROWID or
identity column as GENERATED BY DEFAULT, you need to define a unique index
that includes only that column to guarantee uniqueness.
| For more information about using DB2-generated values as unique keys, see
| Chapter 11, “Using DB2-generated values as keys,” on page 253.
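For example, an identity column that DB2 always generates might be defined as follows; the table and column names are illustrative:

```sql
CREATE TABLE ORDERS
      (ORDER_ID INTEGER GENERATED ALWAYS AS IDENTITY,
       ITEM     VARCHAR(30) NOT NULL);

-- DB2 supplies ORDER_ID; you cannot insert a value into it yourself
INSERT INTO ORDERS (ITEM) VALUES ('WIDGET');
```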
Answer: Declare your cursor as scrollable. When you select rows from the table,
you can use the various forms of the FETCH statement to move to an absolute row
number, move ahead or back a certain number of rows, move to the first or last
row, or position the cursor before the first row or after the last row, moving
either forward or backward. You can use any combination of these FETCH
statements to change direction repeatedly.
You can use code like the following example to move forward in the department
table by 10 records, backward five records, and forward again by three records:
/**************************/
/* Declare host variables */
/**************************/
EXEC SQL BEGIN DECLARE SECTION;
char hv_deptname[37];
EXEC SQL END DECLARE SECTION;
/**********************************************************/
/* Declare scrollable cursor to retrieve department names */
/**********************************************************/
EXEC SQL DECLARE C1 SCROLL CURSOR FOR
SELECT DEPTNAME FROM DSN8810.DEPT;
.
.
.
/**********************************************************/
/* Open the cursor and position it before the start of */
/* the result table. */
/**********************************************************/
EXEC SQL OPEN C1;
EXEC SQL FETCH BEFORE FROM C1;
/**********************************************************/
/* Fetch first 10 rows */
/**********************************************************/
for(i=0;i<10;i++)
{
EXEC SQL FETCH NEXT FROM C1 INTO :hv_deptname;
}
/**********************************************************/
/* Save the value in the tenth row */
/**********************************************************/
strcpy(tenth_row, hv_deptname);
/**********************************************************/
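The remaining movement that the text describes, backward five rows and then forward three, can be done with FETCH RELATIVE; a sketch:

```c
/* Move backward 5 rows from the current position */
EXEC SQL FETCH RELATIVE -5 FROM C1 INTO :hv_deptname;
/* Move forward again by 3 rows */
EXEC SQL FETCH RELATIVE +3 FROM C1 INTO :hv_deptname;
```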
Answer: On the SELECT statement, use the FOR UPDATE clause without a
column list, or the FOR UPDATE OF clause with a column list. For a more efficient
program, specify a column list with only those columns that you intend to update.
Then use the positioned UPDATE statement. The clause WHERE CURRENT OF
identifies the cursor that points to the row you want to update.
Answer: Use a scrollable cursor that is declared with the FOR UPDATE clause.
Using a scrollable cursor to update backward involves these basic steps:
1. Declare the cursor with the SENSITIVE STATIC SCROLL parameters.
2. Open the cursor.
3. Execute a FETCH statement to position the cursor at the end of the result table.
4. Execute FETCH statements that move the cursor backward, until you reach the
row that you want to update.
5. Execute the UPDATE WHERE CURRENT OF statement to update the current
row.
6. Repeat steps 4 and 5 until you have updated all the rows that you need to.
7. When you have retrieved and updated all the data, close the cursor.
Answer: There are no special techniques; but for large numbers of rows, efficiency
can become very important. In particular, you need to be aware of locking
considerations, including the possibilities of lock escalation.
If your program allows input from a terminal before it commits the data and thereby
releases locks, it is possible that a significant loss of concurrency results. Review
the description of locks in “The ISOLATION option” on page 394 while designing
your program. Then review the expected use of tables to predict whether you could
have locking problems.
Using SELECT *
Question: What are the implications of using SELECT * ?
Answer: Generally, you should select only the columns you need because DB2 is
sensitive to the number of columns selected. Use SELECT * only when you are
sure you want to select all columns. One alternative is to use views defined with
only the necessary columns, and use SELECT * to access the views. Avoid
SELECT * if all the selected columns participate in a sort operation (SELECT
DISTINCT and SELECT...UNION, for example).
DB2 usually optimizes queries to retrieve all rows that qualify. But sometimes you
want to retrieve only the first few rows. For example, to retrieve the first row that is
greater than or equal to a known value, code:
SELECT column list FROM table
WHERE key >= value
ORDER BY key ASC
Even with the ORDER BY clause, DB2 might fetch all the data first and sort it
afterwards, which could be wasteful. Instead, you can write the query in one of the
following ways:
SELECT * FROM table
WHERE key >= value
ORDER BY key ASC
OPTIMIZE FOR 1 ROW
SELECT * FROM table
WHERE key >= value
ORDER BY key ASC
FETCH FIRST n ROWS ONLY
Use FETCH FIRST n ROWS ONLY to limit the number of rows in the result table to
n rows. FETCH FIRST n ROWS ONLY has the following benefits:
v When you use FETCH statements to retrieve data from a result table, FETCH
FIRST n ROWS ONLY causes DB2 to retrieve only the number of rows that you
need. This can have performance benefits, especially in distributed applications.
If you try to execute a FETCH statement to retrieve the (n+1)st row, DB2 returns
SQLCODE +100.
v When you use FETCH FIRST ROW ONLY in a SELECT INTO statement, you
never retrieve more than one row. Using FETCH FIRST ROW ONLY in a
SELECT INTO statement can prevent SQL errors that are caused by
inadvertently selecting more than one value into a host variable.
When you specify FETCH FIRST n ROWS ONLY but not OPTIMIZE FOR n ROWS,
OPTIMIZE FOR n ROWS is implied. When you specify FETCH FIRST n ROWS
ONLY and OPTIMIZE FOR m ROWS, and m is less than n, DB2 optimizes the
query for m rows. If m is greater than n, DB2 optimizes the query for n rows.
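For example, in the following query DB2 limits the result table to 20 rows but optimizes the access path for 5 rows, because the OPTIMIZE FOR value is the smaller of the two:

```sql
SELECT EMPNO, LASTNAME
  FROM DSN8810.EMP
  ORDER BY LASTNAME
  FETCH FIRST 20 ROWS ONLY
  OPTIMIZE FOR 5 ROWS;
```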
To get the effect of adding data to the “end” of a table, define a unique index on a
TIMESTAMP column in the table definition. Then, when you retrieve data from the
table, use an ORDER BY clause naming that column. The newest insert appears
last.
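A sketch of this technique, using illustrative names:

```sql
CREATE TABLE EVENTS
      (CREATED TIMESTAMP NOT NULL,
       NOTE    VARCHAR(100));
CREATE UNIQUE INDEX XEVENTS ON EVENTS (CREATED);

INSERT INTO EVENTS VALUES (CURRENT TIMESTAMP, 'first entry');
INSERT INTO EVENTS VALUES (CURRENT TIMESTAMP, 'second entry');

-- The newest insert appears last
SELECT NOTE FROM EVENTS ORDER BY CREATED;
```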
Answer: You can save the corresponding SQL statements in a table with a column
having a data type of VARCHAR(n), where n is the maximum length of any SQL
statement. You must save the source SQL statements, not the prepared versions.
That means that you must retrieve and then prepare each statement before
executing the version stored in the table. In essence, your program prepares an
SQL statement from a character string and executes it dynamically. (For a
description of dynamic SQL, see Chapter 24, “Coding dynamic SQL in application
programs,” on page 535.)
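A minimal sketch of this approach in embedded SQL; the table STMT_TABLE and its columns are illustrative names, and error checking is omitted:

```c
EXEC SQL BEGIN DECLARE SECTION;
char hv_stmt[4001];          /* source text of the saved SQL statement */
long hv_id;
EXEC SQL END DECLARE SECTION;

/* Retrieve the source SQL statement that was saved in the table */
hv_id = 1;
EXEC SQL SELECT STMT_TEXT INTO :hv_stmt
         FROM STMT_TABLE
         WHERE STMT_ID = :hv_id;

/* Prepare the character string and execute it dynamically */
EXEC SQL PREPARE S1 FROM :hv_stmt;
EXEC SQL EXECUTE S1;
```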
You cannot rearrange or delete columns in a table without dropping the entire table.
You can, however, create a view on the table, which includes only the columns you
want, in the order you want. This has the same effect as redefining the table.
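For example, the following view presents the department table without the LOCATION column and with the remaining columns in a different order; the view name is illustrative:

```sql
CREATE VIEW MYDEPT AS
  SELECT DEPTNAME, DEPTNO, ADMRDEPT, MGRNO
    FROM DSN8810.DEPT;
```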
For a description of dynamic SQL execution, see Chapter 24, “Coding dynamic SQL
in application programs,” on page 535.
Answer: You can store the data in a table in a VARCHAR column or a LOB
column.
Answer: When you receive an SQL error because of a constraint violation, print out
the SQLCA. You can use the DSNTIAR routine described in “Calling DSNTIAR to
display SQLCA fields” on page 89 to format the SQLCA for you. Check the SQL
error message insertion text (SQLERRM) for the name of the constraint. For
information on possible violations, see SQLCODEs -530 through -548 in Part 1 of
DB2 Messages and Codes.
Authorization on all sample objects is given to PUBLIC in order to make the sample
programs easier to run. The contents of any table can easily be reviewed by
executing an SQL statement, for example SELECT * FROM DSN8810.PROJ. For
convenience in interpreting the examples, the department and employee tables are
listed here in full.
The activity table is a parent table of the project activity table, through a foreign key
on column ACTNO.
The table, shown in Table 164 on page 899, resides in table space
DSN8D81A.DSN8S81D and is created with:
CREATE TABLE DSN8810.DEPT
(DEPTNO CHAR(3) NOT NULL,
DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6) ,
ADMRDEPT CHAR(3) NOT NULL,
LOCATION CHAR(16) ,
PRIMARY KEY (DEPTNO) )
IN DSN8D81A.DSN8S81D
CCSID EBCDIC;
The LOCATION column contains nulls until sample job DSNTEJ6 updates this
column with the location name.
It is a dependent of the employee table, through its foreign key on column MGRNO.
The table shown in Table 167 on page 900 and Table 168 on page 901 resides in
the partitioned table space DSN8D81A.DSN8S81E. Because it has a foreign key
referencing DEPT, that table and the index on its primary key must be created first.
Then EMP is created with:
CREATE TABLE DSN8810.EMP
(EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1) NOT NULL,
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3) ,
PHONENO CHAR(4) CONSTRAINT NUMBER CHECK
(PHONENO >= '0000' AND
PHONENO <= '9999') ,
HIREDATE DATE ,
JOB CHAR(8) ,
EDLEVEL SMALLINT ,
SEX CHAR(1) ,
BIRTHDATE DATE ,
SALARY DECIMAL(9,2) ,
BONUS DECIMAL(9,2) ,
COMM DECIMAL(9,2) ,
PRIMARY KEY (EMPNO) ,
FOREIGN KEY (WORKDEPT) REFERENCES DSN8810.DEPT
ON DELETE SET NULL)
IN DSN8D81A.DSN8S81E
CCSID EBCDIC;
Table 165 shows the content of the columns. The table has a check constraint,
NUMBER, which checks that the phone number is in the numeric range 0000 to
9999.
Table 165. Columns of the employee table
Column Column Name Description
1 EMPNO Employee number (the primary key)
2 FIRSTNME First name of employee
3 MIDINIT Middle initial of employee
4 LASTNAME Last name of employee
5 WORKDEPT ID of department in which the employee works
6 PHONENO Employee telephone number
7 HIREDATE Date of hire
8 JOB Job held by the employee
9 EDLEVEL Number of years of formal education
10 SEX Sex of the employee (M or F)
11 BIRTHDATE Date of birth
12 SALARY Yearly salary in dollars
13 BONUS Yearly bonus in dollars
14 COMM Yearly commission in dollars
Table 167 and Table 168 on page 901 show the content of the employee table:
Table 167. Left half of DSN8810.EMP: employee table. Note that a blank in the MIDINIT column is an actual value of
" " rather than null.
EMPNO FIRSTNME MIDINIT LASTNAME WORKDEPT PHONENO HIREDATE
Table 170 shows the indexes for the employee photo and resume table:
Table 170. Indexes of the employee photo and resume table
Name On Column Type of Index
DSN8810.XEMP_PHOTO_RESUME EMPNO Primary, ascending
Table 171 shows the indexes for the auxiliary tables for the employee photo and
resume table:
Table 171. Indexes of the auxiliary tables for the employee photo and resume table
Name On Table Type of Index
DSN8810.XAUX_BMP_PHOTO DSN8810.AUX_BMP_PHOTO Unique
DSN8810.XAUX_PSEG_PHOTO DSN8810.AUX_PSEG_PHOTO Unique
DSN8810.XAUX_EMP_RESUME DSN8810.AUX_EMP_RESUME Unique
The table is a parent table of the project table, through a foreign key on column
RESPEMP.
The table resides in database DSN8D81A. Because it has foreign keys referencing
DEPT and EMP, those tables and the indexes on their primary keys must be
created first. Then PROJ is created with:
CREATE TABLE DSN8810.PROJ
(PROJNO CHAR(6) PRIMARY KEY NOT NULL,
PROJNAME VARCHAR(24) NOT NULL WITH DEFAULT
'PROJECT NAME UNDEFINED',
DEPTNO CHAR(3) NOT NULL REFERENCES
DSN8810.DEPT ON DELETE RESTRICT,
RESPEMP CHAR(6) NOT NULL REFERENCES
DSN8810.EMP ON DELETE RESTRICT,
PRSTAFF DECIMAL(5, 2) ,
PRSTDATE DATE ,
PRENDATE DATE ,
MAJPROJ CHAR(6))
IN DSN8D81A.DSN8S81P
CCSID EBCDIC;
Because the table is self-referencing, the foreign key for that constraint must be
added later with:
ALTER TABLE DSN8810.PROJ
FOREIGN KEY RPP (MAJPROJ) REFERENCES DSN8810.PROJ
ON DELETE CASCADE;
The table is a parent table of the employee to project activity table, through a
foreign key on columns PROJNO, ACTNO, and EMSTDATE. It is a dependent of:
v The activity table, through its foreign key on column ACTNO
v The project table, through its foreign key on column PROJNO
The table resides in database DSN8D81A. Because it has foreign keys referencing
EMP and PROJACT, those tables and the indexes on their primary keys must be
created first. Then EMPPROJACT is created with:
CREATE TABLE DSN8810.EMPPROJACT
(EMPNO CHAR(6) NOT NULL,
PROJNO CHAR(6) NOT NULL,
ACTNO SMALLINT NOT NULL,
EMPTIME DECIMAL(5,2) ,
EMSTDATE DATE ,
EMENDATE DATE ,
FOREIGN KEY REPAPA (PROJNO, ACTNO, EMSTDATE)
REFERENCES DSN8810.PROJACT
ON DELETE RESTRICT,
FOREIGN KEY REPAE (EMPNO) REFERENCES DSN8810.EMP
ON DELETE RESTRICT)
IN DSN8D81A.DSN8S81P
CCSID EBCDIC;
Table 177 shows the indexes for the employee to project activity table:
Table 177. Indexes of the employee to project activity table
Name On Columns Type of Index
DSN8810.XEMPPROJACT1 PROJNO, ACTNO, Unique, ascending
EMSTDATE, EMPNO
DSN8810.XEMPPROJACT2 EMPNO Ascending
[Figure: Relationships among the sample tables. The arrows in the original figure
show the delete rules on the foreign keys: SET NULL between DEPT and EMP,
RESTRICT from EMP_PHOTO_RESUME to EMP, CASCADE on the self-referencing
keys, and RESTRICT on the foreign keys among ACT, PROJ, PROJACT, and
EMPPROJACT.]
The following SQL statements are used to create the sample views:
CREATE VIEW DSN8810.VDEPT
AS SELECT ALL DEPTNO ,
DEPTNAME,
MGRNO ,
ADMRDEPT
FROM DSN8810.DEPT;
CREATE VIEW DSN8810.VHDEPT
AS SELECT ALL DEPTNO ,
DEPTNAME,
MGRNO ,
ADMRDEPT,
LOCATION
FROM DSN8810.DEPT;
CREATE VIEW DSN8810.VEMP
AS SELECT ALL EMPNO ,
FIRSTNME,
MIDINIT ,
LASTNAME,
WORKDEPT
FROM DSN8810.EMP;
CREATE VIEW DSN8810.VPROJ
AS SELECT ALL
PROJNO, PROJNAME, DEPTNO, RESPEMP, PRSTAFF,
PRSTDATE, PRENDATE, MAJPROJ
FROM DSN8810.PROJ ;
CREATE VIEW DSN8810.VACT
AS SELECT ALL ACTNO ,
ACTKWD ,
ACTDESC
FROM DSN8810.ACT ;
CREATE VIEW DSN8810.VPROJACT
AS SELECT ALL
PROJNO,ACTNO, ACSTAFF, ACSTDATE, ACENDATE
FROM DSN8810.PROJACT ;
CREATE VIEW DSN8810.VEMPPROJACT
AS SELECT ALL
EMPNO, PROJNO, ACTNO, EMPTIME, EMSTDATE, EMENDATE
FROM DSN8810.EMPPROJACT ;
Table spaces: DSN8SvrD for the department table, DSN8SvrE for the employee
table, separate LOB table spaces for the employee photo and resume table,
DSN8SvrP for programming tables, and other table spaces for common application
tables.
In addition to the storage group and databases shown in Figure 230, the storage
group DSN8G81U and database DSN8D81U are created when you run DSNTEJ2A.
Storage group
The default storage group, SYSDEFLT, created when DB2 is installed, is not used
to store sample application data. The storage group used to store sample
application data is defined by this statement:
CREATE STOGROUP DSN8G810
VOLUMES (DSNV01)
VCAT DSNC810;
Databases
The default database, created when DB2 is installed, is not used to store the
| sample application data. DSN8D81P is the database that is used for tables that are
| related to programs. The remainder of the databases are used for tables that are
| related to applications. They are defined by the following statements:
CREATE DATABASE DSN8D81A
STOGROUP DSN8G810
BUFFERPOOL BP0
CCSID EBCDIC;
Several sample applications come with DB2 to help you with DB2 programming
techniques and coding practices within each of the four environments: batch, TSO,
IMS, and CICS. The sample applications contain various programs that might
apply to managing a company.
You can examine the source code for the sample application programs in the online
sample library included with the DB2 product. The name of this sample library is
prefix.SDSNSAMP.
Phone application: The phone application lets you view or update individual
employee phone numbers. There are different versions of the application for
ISPF/TSO, CICS, IMS, and batch:
v ISPF/TSO applications use COBOL and PL/I.
v CICS and IMS applications use PL/I.
v Batch applications use C, C++, COBOL, FORTRAN, and PL/I.
LOB application: The LOB application demonstrates how to perform the following
tasks:
v Define DB2 objects to hold LOB data
v Populate DB2 tables with LOB data using the LOAD utility, or using INSERT and
UPDATE statements when the data is too large for use with the LOAD utility
v Manipulate the LOB data using LOB locators
Application programs: Tables 181 through 183 on pages 918 through 920 provide
the program names, JCL member names, and a brief description of some of the
programs included for each of the three environments: TSO, IMS, and CICS.
CICS
Table 183. Sample DB2 applications for CICS
Application Program name JCL member name Description
Organization DSN8CC0 DSNTEJ5C CICS COBOL
DSN8CC1 Organization
DSN8CC2 Application
Organization DSN8CP0 DSNTEJ5P CICS PL/I
DSN8CP1 Organization
DSN8CP2 Application
Project DSN8CP6 DSNTEJ5P CICS PL/I Project
DSN8CP7 Application
DSN8CP8
Phone DSN8CP3 DSNTEJ5P CICS PL/I Phone
Application. This
program lists
employee telephone
numbers and updates
them if requested.
Because these four programs also accept the static SQL statements CONNECT,
SET CONNECTION, and RELEASE, you can use the programs to access DB2
tables at remote locations.
DSNTIAUL and DSNTIAD are shipped only as source code, so you must
precompile, assemble, link, and bind them before you can use them. If you want to
| use the source code version of DSNTEP2 or DSNTEP4, you must precompile,
| compile, link, and bind it. You need to bind the object code version of DSNTEP2 or
| DSNTEP4 before you can use it. Usually a system administrator prepares the
programs as part of the installation process. Table 184 indicates which installation
job prepares each sample program. All installation jobs are in data set
DSN810.SDSNSAMP.
Table 184. Jobs that prepare DSNTIAUL, DSNTIAD, DSNTEP2, and DSNTEP4
Program name Program preparation job
DSNTIAUL DSNTEJ2A
DSNTIAD DSNTIJTM
DSNTEP2 (source) DSNTEJ1P
DSNTEP2 (object) DSNTEJ1L
DSNTEP4 (source) DSNTEJ1P
To run the sample programs, use the DSN RUN command, which is described in
detail in Chapter 2 of DB2 Command Reference. Table 185 lists the load module
name and plan name that you must specify, and the parameters that you can
specify when you run each program. See the following sections for the meaning of
each parameter.
Table 185. DSN RUN option values for DSNTIAUL, DSNTIAD, DSNTEP2, and DSNTEP4
Program name Load module Plan Parameters
DSNTIAUL DSNTIAUL DSNTIB81 SQL
number of rows per fetch
DSNTIAD DSNTIAD DSNTIA81 RC0
SQLTERM(termchar)
DSNTEP2 DSNTEP2 DSNTEP81 ALIGN(MID)
or ALIGN(LHS)
NOMIXED or MIXED
SQLTERM(termchar)
DSNTEP4 DSNTEP4 DSNTEP481 ALIGN(MID)
or ALIGN(LHS)
NOMIXED or MIXED
SQLTERM(termchar)
The remainder of this chapter contains the following information about running each
program:
v Descriptions of the input parameters
v Data sets that you must allocate before you run the program
v Return codes from the program
v Examples of invocation
See the sample jobs that are listed in Table 184 on page 921 for a working example
of each program.
Running DSNTIAUL
This section contains information that you need when you run DSNTIAUL, including
parameters, data sets, return codes, and invocation examples.
DSNTIAUL parameters:
SQL
Specify SQL to indicate that your input data set contains one or more complete
SQL statements, each of which ends with a semicolon. You can include any
SQL statement that can be executed dynamically in your input data set. In
addition, you can include the static SQL statements CONNECT, SET
CONNECTION, or RELEASE. DSNTIAUL uses the SELECT statements to
determine which tables to unload and dynamically executes all other statements
except CONNECT, SET CONNECTION, and RELEASE. DSNTIAUL executes
CONNECT, SET CONNECTION, and RELEASE statically to connect to remote
locations.
If you do not specify the SQL parameter, your input data set must contain one or
more single-line statements (without a semicolon) that use the following syntax:
table or view name [WHERE conditions] [ORDER BY columns]
Each input statement must be a valid SQL SELECT statement with the clause
SELECT * FROM omitted and with no ending semicolon. DSNTIAUL generates a
SELECT statement for each input statement by appending your input line to
SELECT * FROM, then uses the result to determine which tables to unload. For this
input format, the text for each table specification can be a maximum of 72 bytes
and must not span multiple lines.
You can use the input statements to specify SELECT statements that join two or
more tables or select specific columns from a table. If you specify columns, you
need to modify the LOAD statement that DSNTIAUL generates.
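For example, the generated statement can be sketched as follows (a hypothetical illustration; the table name assumes the DB2 sample project table and the department number is illustrative):

```sql
--  Input line in the SYSIN data set (no SELECT * FROM, no semicolon):
--    DSN8810.PROJ WHERE DEPTNO='D01' ORDER BY PROJNO
--  DSNTIAUL appends the input line to SELECT * FROM and executes:
SELECT * FROM DSN8810.PROJ WHERE DEPTNO='D01' ORDER BY PROJNO
```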
Define all data sets as sequential data sets. You can specify the record length and
block size of the SYSPUNCH and SYSRECnn data sets. The maximum record
length for the SYSPUNCH and SYSRECnn data sets is 32760 bytes.
Examples of DSNTIAUL invocation: Suppose that you want to unload the rows
for department D01 from the project table. Because you can fit the table
specification on one line, and you do not want to execute any non-SELECT
statements, you do not need the SQL parameter. Your invocation looks like the one
that is shown in Figure 231:
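An invocation of this kind can be sketched as the following TSO batch job (a sketch only: the job name, data set names, space values, and subsystem ID are illustrative; the plan name is from Table 185):

```jcl
//UNLOAD   EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD  SYSOUT=*
//SYSPRINT DD  SYSOUT=*
//SYSREC00 DD  DSN=DSN8UNLD.SYSREC00,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(TRK,(10,5)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3120)
//SYSPUNCH DD  DSN=DSN8UNLD.SYSPUNCH,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(TRK,(1,1))
//SYSTSIN  DD  *
 DSN SYSTEM(DSN)
 RUN  PROGRAM(DSNTIAUL) PLAN(DSNTIB81) -
      LIB('DSN810.RUNLIB.LOAD')
//SYSIN    DD  *
DSN8810.PROJ WHERE DEPTNO='D01'
/*
```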
If you want to obtain the LOAD utility control statements for loading rows into a
table, but you do not want to unload the rows, you can set the data set names for
the SYSRECnn data sets to DUMMY. For example, to obtain the utility control
statements for loading rows into the department table, you invoke DSNTIAUL as
shown in Figure 232 on page 925:
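The essential difference in such an invocation is that each SYSRECnn DD statement specifies DUMMY, so only SYSPUNCH receives output. A sketch of the relevant DD statements and input (the table name assumes the DB2 sample department table; other names are illustrative):

```jcl
//SYSREC00 DD  DUMMY
//SYSPUNCH DD  DSN=DSN8UNLD.SYSPUNCH,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(TRK,(1,1))
//SYSIN    DD  *
DSN8810.DEPT
/*
```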
Now suppose that you also want to use DSNTIAUL to do these things:
v Unload all rows from the project table
v Unload only rows from the employee table for employees in departments with
department numbers that begin with D, and order the unloaded rows by
employee number
v Lock both tables in share mode before you unload them
| v Retrieve 250 rows per fetch
| For these activities, you must specify the SQL parameter and specify the number of
| rows per fetch when you run DSNTIAUL. Your DSNTIAUL invocation is shown in
Figure 233:
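With the SQL parameter, the SYSIN data set contains complete SQL statements. A sketch of input that performs the activities listed above (table names assume the DB2 sample tables; the parameter string passes the number of rows per fetch after the SQL keyword):

```sql
--  Run with: RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB81) PARMS('SQL,250')
LOCK TABLE DSN8810.EMP  IN SHARE MODE;
LOCK TABLE DSN8810.PROJ IN SHARE MODE;
SELECT * FROM DSN8810.PROJ;
SELECT * FROM DSN8810.EMP
  WHERE WORKDEPT LIKE 'D%'
  ORDER BY EMPNO;
```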
DSNTIAD parameters:
RC0
If you specify this parameter, DSNTIAD ends with return code 0, even if the
program encounters SQL errors. If you do not specify RC0, DSNTIAD ends with
a return code that reflects the severity of the errors that occur. Without RC0,
DSNTIAD terminates if more than 10 SQL errors occur during a single
execution.
SQLTERM(termchar)
Specify this parameter to indicate the character that you use to end each SQL
statement. You can use any special character except one of those listed in
Table 187. SQLTERM(;) is the default.
Table 187. Invalid special characters for the SQL terminator
Name Character Hexadecimal representation
blank X'40'
comma , X'6B'
double quotation mark " X'7F'
left parenthesis ( X'4D'
right parenthesis ) X'5D'
single quotation mark ' X'7D'
underscore _ X'6D'
Use a character other than a semicolon if you plan to execute a statement that
contains embedded semicolons.
Example: Suppose that you specify the parameter SQLTERM(#) to indicate that
the character # is the statement terminator. Then a CREATE TRIGGER
statement with embedded semicolons looks like this:
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
Be careful to choose a character for the statement terminator that is not used
within the statement.
If you want to change the SQL terminator within a series of SQL statements,
you can use the --#SET TERMINATOR control statement.
Example: Suppose that you have an existing set of SQL statements to which
you want to add a CREATE TRIGGER statement that has embedded
semicolons. You can use the default SQLTERM value, which is a semicolon, for
all of the existing SQL statements. Before you execute the CREATE TRIGGER
statement, include the --#SET TERMINATOR # control statement to change
the SQL terminator to the character #:
SELECT * FROM DEPT;
SELECT * FROM ACT;
SELECT * FROM EMPPROJACT;
SELECT * FROM PROJ;
SELECT * FROM PROJACT;
--#SET TERMINATOR #
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
See the following discussion of the SYSIN data set for more information about
the --#SET control statement.
Figure 235. DSNTEP2 invocation with the ALIGN(LHS) and MIXED parameters
Figure 236. DSNTEP4 invocation with the ALIGN(MID) and MIXED parameters and using the
MULT_FETCH control option
This example program does not support BLOB, CLOB, or DBCLOB data types.
The SET statement can set a pointer from the address of an area in the linkage
section or from another pointer; it can also set the address of an area in the
linkage section. Figure 238 on page 934 illustrates these uses of the SET
statement. The SET statement does not permit the use of an address in the
WORKING-STORAGE section.
Storage allocation
COBOL does not provide a means to allocate main storage within a program. You
can achieve the same end by having an initial program that allocates the storage
and then calls a second program that manipulates the pointer. (COBOL does not
permit you to manipulate the pointer directly, because errors and abends are likely
to occur.)
The initial program is extremely simple. It includes a working storage section that
allocates the maximum amount of storage needed. This program then calls the
second program, which manipulates the pointer.
If you need to allocate parts of storage, the best method is to use indexes or
subscripts. You can use subscripts for arithmetic and comparison operations.
Example
Figure 237 shows an example of the initial program DSN8BCU1 that allocates the
storage and calls the second program DSN8BCU2 shown in Figure 238 on page
934. DSN8BCU2 then defines the passed storage areas in its linkage section and
includes the USING clause on its PROCEDURE DIVISION statement.
Defining the pointers, then redefining them as numeric, permits some manipulation
of the pointers that you cannot perform directly. For example, you cannot add the
column length to the record pointer, but you can add the column length to the
numeric value that redefines the pointer.
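A minimal sketch of this redefinition technique (hypothetical names; this is not the actual DSN8BCU2 code):

```cobol
       WORKING-STORAGE SECTION.
       01  COL-LEN          PIC S9(4) COMP VALUE +36.
       01  REC-PTR          USAGE IS POINTER.
       01  REC-PTR-NUM      REDEFINES REC-PTR PIC S9(9) COMP.
       LINKAGE SECTION.
       01  LNK-AREA         PIC X(4000).
       PROCEDURE DIVISION USING LNK-AREA.
           SET REC-PTR TO ADDRESS OF LNK-AREA.
      *    You cannot add COL-LEN to REC-PTR directly, but you can
      *    add it to the numeric item that redefines the pointer.
           ADD COL-LEN TO REC-PTR-NUM.
```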
Figure 238. Called program that does pointer manipulation (Parts 1-10 of 10)
/**********************************************************************/
/* Descriptive name = Dynamic SQL sample using C language */
/* */
/* Function = To show examples of the use of dynamic and static */
/* SQL. */
/* */
/* Notes = This example assumes that the EMP and DEPT tables are */
/* defined. They need not be the same as the DB2 Sample */
/* tables. */
/* */
/* Module type = C program */
/* Processor = DB2 precompiler, C compiler */
/* Module size = see link edit */
/* Attributes = not reentrant or reusable */
/* */
/* Input = */
/* */
/* symbolic label/name = DEPT */
/* description = arbitrary table */
/* symbolic label/name = EMP */
/* description = arbitrary table */
/* */
/* Output = */
/* */
/* symbolic label/name = SYSPRINT */
/* description = print results via printf */
/* */
/* Exit-normal = return code 0 normal completion */
/* */
/* Exit-error = */
/* */
/* Return code = SQLCA */
/* */
/* Abend codes = none */
/* */
/* External references = none */
/* */
/* Control-blocks = */
/* SQLCA - sql communication area */
/* */
#include "stdio.h"
#include "stdefs.h"
EXEC SQL INCLUDE SQLCA;
EXEC SQL INCLUDE SQLDA;
EXEC SQL BEGIN DECLARE SECTION;
short edlevel;
struct { short len;
char x1[56];
} stmtbf1, stmtbf2, inpstr;
struct { short len;
char x1[15];
} lname;
short hv1;
struct { char deptno[4];
struct { short len;
char x[36];
} deptname;
char mgrno[7];
char admrdept[4];
} hv2;
short ind[4];
EXEC SQL END DECLARE SECTION;
EXEC SQL DECLARE EMP TABLE
(EMPNO CHAR(6) ,
FIRSTNAME VARCHAR(12) ,
MIDINIT CHAR(1) ,
LASTNAME VARCHAR(15) ,
WORKDEPT CHAR(3) ,
PHONENO CHAR(4) ,
HIREDATE DECIMAL(6) ,
JOBCODE DECIMAL(3) ,
EDLEVEL SMALLINT ,
SEX CHAR(1) ,
BIRTHDATE DECIMAL(6) ,
SALARY DECIMAL(8,2) ,
FORFNAME VARGRAPHIC(12) ,
FORMNAME GRAPHIC(1) ,
FORLNAME VARGRAPHIC(15) ,
FORADDR VARGRAPHIC(256) ) ;
%DRAW object-name ( SSID=ssid TYPE= { SELECT | INSERT | UPDATE | LOAD }
DRAW parameters:
object-name
The name of the table or view for which DRAW builds an SQL statement or
utility control statement. The name can be a one-, two-, or three-part name. The
table or view to which object-name refers must exist before DRAW can run.
object-name is a required parameter.
SSID=ssid
Specifies the name of the local DB2 subsystem.
S can be used as an abbreviation for SSID.
If you invoke DRAW from the command line of the edit session in SPUFI,
SSID=ssid is an optional parameter. DRAW uses the subsystem ID from the
DB2I Defaults panel.
TYPE=operation-type
The type of statement that DRAW builds.
T can be used as an abbreviation for TYPE.
operation-type has one of the following values:
SELECT Builds a SELECT statement in which the result table contains
all columns of object-name.
S can be used as an abbreviation for SELECT.
INSERT Builds a template for an INSERT statement that inserts values
into all columns of object-name. The template contains
comments that indicate where the user can place column
values.
I can be used as an abbreviation for INSERT.
UPDATE Builds a template for an UPDATE statement that updates
columns of object-name. The template contains comments that
indicate where the user can place column values and qualify
the update operation for selected rows.
U can be used as an abbreviation for UPDATE.
LOAD Builds a template for a LOAD utility control statement for
object-name.
L can be used as an abbreviation for LOAD.
Generate a template for an INSERT statement that inserts values into table
DSN8810.EMP at location SAN_JOSE. The local subsystem ID is DSN.
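Based on the syntax diagram above, the command issued from the edit session might look like this (a sketch; the location, table, and subsystem ID are from the example text):

```
%DRAW SAN_JOSE.DSN8810.EMP (SSID=DSN TYPE=INSERT
```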
Generate a LOAD control statement to load values into table DSN8810.EMP. The
local subsystem ID is DSN.
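Again following the syntax diagram above, a sketch of such a command:

```
%DRAW DSN8810.EMP (SSID=DSN TYPE=LOAD
```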
IDENTIFICATION DIVISION.
PROGRAM-ID. TWOPHASE.
AUTHOR.
REMARKS.
*****************************************************************
* *
* MODULE NAME = TWOPHASE *
* *
* DESCRIPTIVE NAME = DB2 SAMPLE APPLICATION USING *
* TWO PHASE COMMIT AND THE DRDA DISTRIBUTED *
* ACCESS METHOD *
* *
* COPYRIGHT = 5665-DB2 (C) COPYRIGHT IBM CORP 1982, 1989 *
* REFER TO COPYRIGHT INSTRUCTIONS FORM NUMBER G120-2083 *
* *
* STATUS = VERSION 5 *
* *
* FUNCTION = THIS MODULE DEMONSTRATES DISTRIBUTED DATA ACCESS *
* USING 2 PHASE COMMIT BY TRANSFERRING AN EMPLOYEE *
* FROM ONE LOCATION TO ANOTHER. *
* *
* NOTE: THIS PROGRAM ASSUMES THE EXISTENCE OF THE *
* TABLE SYSADM.EMP AT LOCATIONS STLEC1 AND *
* STLEC2. *
* *
* MODULE TYPE = COBOL PROGRAM *
* PROCESSOR = DB2 PRECOMPILER, VS COBOL II *
* MODULE SIZE = SEE LINK EDIT *
* ATTRIBUTES = NOT REENTRANT OR REUSABLE *
* *
* ENTRY POINT = *
* PURPOSE = TO ILLUSTRATE 2 PHASE COMMIT *
* LINKAGE = INVOKE FROM DSN RUN *
* INPUT = NONE *
* OUTPUT = *
* SYMBOLIC LABEL/NAME = SYSPRINT *
* DESCRIPTION = PRINT OUT THE DESCRIPTION OF EACH *
* STEP AND THE RESULTANT SQLCA *
* *
* EXIT NORMAL = RETURN CODE 0 FROM NORMAL COMPLETION *
* *
* EXIT ERROR = NONE *
* *
* EXTERNAL REFERENCES = *
* ROUTINE SERVICES = NONE *
* DATA-AREAS = NONE *
* CONTROL-BLOCKS = *
* SQLCA - SQL COMMUNICATION AREA *
* *
* TABLES = NONE *
* *
* CHANGE-ACTIVITY = NONE *
* *
* *
* *
Figure 241. Sample COBOL two-phase commit application for DRDA access (Part 1 of 8)
Figure 241. Sample COBOL two-phase commit application for DRDA access (Part 2 of 8)
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT PRINTER, ASSIGN TO S-OUT1.
DATA DIVISION.
FILE SECTION.
FD PRINTER
RECORD CONTAINS 120 CHARACTERS
DATA RECORD IS PRT-TC-RESULTS
LABEL RECORD IS OMITTED.
01 PRT-TC-RESULTS.
03 PRT-BLANK PIC X(120).
Figure 241. Sample COBOL two-phase commit application for DRDA access (Part 3 of 8)
*****************************************************************
* Variable declarations *
*****************************************************************
01 H-EMPTBL.
05 H-EMPNO PIC X(6).
05 H-NAME.
49 H-NAME-LN PIC S9(4) COMP-4.
49 H-NAME-DA PIC X(32).
05 H-ADDRESS.
49 H-ADDRESS-LN PIC S9(4) COMP-4.
49 H-ADDRESS-DA PIC X(36).
05 H-CITY.
49 H-CITY-LN PIC S9(4) COMP-4.
49 H-CITY-DA PIC X(36).
05 H-EMPLOC PIC X(4).
05 H-SSNO PIC X(11).
05 H-BORN PIC X(10).
05 H-SEX PIC X(1).
05 H-HIRED PIC X(10).
05 H-DEPTNO PIC X(3).
05 H-JOBCODE PIC S9(3)V COMP-3.
05 H-SRATE PIC S9(5) COMP.
05 H-EDUC PIC S9(5) COMP.
05 H-SAL PIC S9(6)V9(2) COMP-3.
05 H-VALIDCHK PIC S9(6)V COMP-3.
01 H-EMPTBL-IND-TABLE.
02 H-EMPTBL-IND PIC S9(4) COMP OCCURS 15 TIMES.
*****************************************************************
* Includes for the variables used in the COBOL standard *
* language procedures and the SQLCA. *
*****************************************************************
*****************************************************************
* Declaration for the table that contains employee information *
*****************************************************************
Figure 241. Sample COBOL two-phase commit application for DRDA access (Part 4 of 8)
*****************************************************************
* Constants *
*****************************************************************
*****************************************************************
* Declaration of the cursor that will be used to retrieve *
* information about a transferring employee *
*****************************************************************
PROCEDURE DIVISION.
A101-HOUSE-KEEPING.
OPEN OUTPUT PRINTER.
*****************************************************************
* An employee is transferring from location STLEC1 to STLEC2. *
* Retrieve information about the employee from STLEC1, delete *
* the employee from STLEC1 and insert the employee at STLEC2 *
* using the information obtained from STLEC1. *
*****************************************************************
MAINLINE.
PERFORM CONNECT-TO-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM PROCESS-CURSOR-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM UPDATE-ADDRESS
PERFORM CONNECT-TO-SITE-2
IF SQLCODE IS EQUAL TO 0
PERFORM PROCESS-SITE-2.
PERFORM COMMIT-WORK.
Figure 241. Sample COBOL two-phase commit application for DRDA access (Part 5 of 8)
*****************************************************************
* Establish a connection to STLEC1 *
*****************************************************************
CONNECT-TO-SITE-1.
*****************************************************************
* When a connection has been established successfully at STLEC1,*
* open the cursor that will be used to retrieve information *
* about the transferring employee. *
*****************************************************************
PROCESS-CURSOR-SITE-1.
*****************************************************************
* Retrieve information about the transferring employee. *
* Provided that the employee exists, perform DELETE-SITE-1 to *
* delete the employee from STLEC1. *
*****************************************************************
FETCH-DELETE-SITE-1.
Figure 241. Sample COBOL two-phase commit application for DRDA access (Part 6 of 8)
DELETE-SITE-1.
*****************************************************************
* Close the cursor used to retrieve information about the *
* transferring employee. *
*****************************************************************
CLOSE-CURSOR-SITE-1.
*****************************************************************
* Update certain employee information in order to make it *
* current. *
*****************************************************************
UPDATE-ADDRESS.
MOVE TEMP-ADDRESS-LN TO H-ADDRESS-LN.
MOVE '1500 NEW STREET' TO H-ADDRESS-DA.
MOVE TEMP-CITY-LN TO H-CITY-LN.
MOVE 'NEW CITY, CA 97804' TO H-CITY-DA.
MOVE 'SJCA' TO H-EMPLOC.
*****************************************************************
* Establish a connection to STLEC2 *
*****************************************************************
CONNECT-TO-SITE-2.
Figure 241. Sample COBOL two-phase commit application for DRDA access (Part 7 of 8)
PROCESS-SITE-2.
*****************************************************************
* COMMIT any changes that were made at STLEC1 and STLEC2. *
*****************************************************************
COMMIT-WORK.
*****************************************************************
* Include COBOL standard language procedures *
*****************************************************************
INCLUDE-SUBS.
EXEC SQL INCLUDE COBSSUB END-EXEC.
Figure 241. Sample COBOL two-phase commit application for DRDA access (Part 8 of 8)
IDENTIFICATION DIVISION.
PROGRAM-ID. TWOPHASE.
AUTHOR.
REMARKS.
*****************************************************************
* *
* MODULE NAME = TWOPHASE *
* *
* DESCRIPTIVE NAME = DB2 SAMPLE APPLICATION USING *
* TWO PHASE COMMIT AND DB2 PRIVATE PROTOCOL *
* DISTRIBUTED ACCESS METHOD *
* *
* COPYRIGHT = 5665-DB2 (C) COPYRIGHT IBM CORP 1982, 1989 *
* REFER TO COPYRIGHT INSTRUCTIONS FORM NUMBER G120-2083 *
* *
* STATUS = VERSION 5 *
* *
* FUNCTION = THIS MODULE DEMONSTRATES DISTRIBUTED DATA ACCESS *
* USING 2 PHASE COMMIT BY TRANSFERRING AN EMPLOYEE *
* FROM ONE LOCATION TO ANOTHER. *
* *
* NOTE: THIS PROGRAM ASSUMES THE EXISTENCE OF THE *
* TABLE SYSADM.EMP AT LOCATIONS STLEC1 AND *
* STLEC2. *
* *
* MODULE TYPE = COBOL PROGRAM *
* PROCESSOR = DB2 PRECOMPILER, VS COBOL II *
* MODULE SIZE = SEE LINK EDIT *
* ATTRIBUTES = NOT REENTRANT OR REUSABLE *
* *
* ENTRY POINT = *
* PURPOSE = TO ILLUSTRATE 2 PHASE COMMIT *
* LINKAGE = INVOKE FROM DSN RUN *
* INPUT = NONE *
* OUTPUT = *
* SYMBOLIC LABEL/NAME = SYSPRINT *
* DESCRIPTION = PRINT OUT THE DESCRIPTION OF EACH *
* STEP AND THE RESULTANT SQLCA *
* *
* EXIT NORMAL = RETURN CODE 0 FROM NORMAL COMPLETION *
* *
* EXIT ERROR = NONE *
* *
* EXTERNAL REFERENCES = *
* ROUTINE SERVICES = NONE *
* DATA-AREAS = NONE *
* CONTROL-BLOCKS = *
* SQLCA - SQL COMMUNICATION AREA *
* *
* TABLES = NONE *
* *
* CHANGE-ACTIVITY = NONE *
* *
* *
Figure 242. Sample COBOL two-phase commit application for DB2 private protocol access
(Part 1 of 7)
Figure 242. Sample COBOL two-phase commit application for DB2 private protocol access
(Part 2 of 7)
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT PRINTER, ASSIGN TO S-OUT1.
DATA DIVISION.
FILE SECTION.
FD PRINTER
RECORD CONTAINS 120 CHARACTERS
DATA RECORD IS PRT-TC-RESULTS
LABEL RECORD IS OMITTED.
01 PRT-TC-RESULTS.
03 PRT-BLANK PIC X(120).
WORKING-STORAGE SECTION.
*****************************************************************
* Variable declarations *
*****************************************************************
01 H-EMPTBL.
05 H-EMPNO PIC X(6).
05 H-NAME.
49 H-NAME-LN PIC S9(4) COMP-4.
49 H-NAME-DA PIC X(32).
05 H-ADDRESS.
49 H-ADDRESS-LN PIC S9(4) COMP-4.
49 H-ADDRESS-DA PIC X(36).
05 H-CITY.
49 H-CITY-LN PIC S9(4) COMP-4.
49 H-CITY-DA PIC X(36).
05 H-EMPLOC PIC X(4).
05 H-SSNO PIC X(11).
05 H-BORN PIC X(10).
05 H-SEX PIC X(1).
05 H-HIRED PIC X(10).
05 H-DEPTNO PIC X(3).
05 H-JOBCODE PIC S9(3)V COMP-3.
05 H-SRATE PIC S9(5) COMP.
05 H-EDUC PIC S9(5) COMP.
05 H-SAL PIC S9(6)V9(2) COMP-3.
05 H-VALIDCHK PIC S9(6)V COMP-3.
Figure 242. Sample COBOL two-phase commit application for DB2 private protocol access
(Part 3 of 7)
*****************************************************************
* Includes for the variables used in the COBOL standard *
* language procedures and the SQLCA. *
*****************************************************************
*****************************************************************
* Declaration for the table that contains employee information *
*****************************************************************
*****************************************************************
* Constants *
*****************************************************************
*****************************************************************
* Declaration of the cursor that will be used to retrieve *
* information about a transferring employee *
*****************************************************************
Figure 242. Sample COBOL two-phase commit application for DB2 private protocol access
(Part 4 of 7)
*****************************************************************
* An employee is transferring from location STLEC1 to STLEC2. *
* Retrieve information about the employee from STLEC1, delete *
* the employee from STLEC1 and insert the employee at STLEC2 *
* using the information obtained from STLEC1. *
*****************************************************************
MAINLINE.
PERFORM PROCESS-CURSOR-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM UPDATE-ADDRESS
PERFORM PROCESS-SITE-2.
PERFORM COMMIT-WORK.
PROG-END.
CLOSE PRINTER.
GOBACK.
*****************************************************************
* Open the cursor that will be used to retrieve information *
* about the transferring employee. *
*****************************************************************
PROCESS-CURSOR-SITE-1.
*****************************************************************
* Retrieve information about the transferring employee. *
* Provided that the employee exists, perform DELETE-SITE-1 to *
* delete the employee from STLEC1. *
*****************************************************************
FETCH-DELETE-SITE-1.
Figure 242. Sample COBOL two-phase commit application for DB2 private protocol access
(Part 5 of 7)
*****************************************************************
* Delete the employee from STLEC1. *
*****************************************************************
DELETE-SITE-1.
*****************************************************************
* Close the cursor used to retrieve information about the *
* transferring employee. *
*****************************************************************
CLOSE-CURSOR-SITE-1.
*****************************************************************
* Update certain employee information in order to make it *
* current. *
*****************************************************************
UPDATE-ADDRESS.
MOVE TEMP-ADDRESS-LN TO H-ADDRESS-LN.
MOVE '1500 NEW STREET' TO H-ADDRESS-DA.
MOVE TEMP-CITY-LN TO H-CITY-LN.
MOVE 'NEW CITY, CA 97804' TO H-CITY-DA.
MOVE 'SJCA' TO H-EMPLOC.
Figure 242. Sample COBOL two-phase commit application for DB2 private protocol access
(Part 6 of 7)
PROCESS-SITE-2.
*****************************************************************
* COMMIT any changes that were made at STLEC1 and STLEC2. *
*****************************************************************
COMMIT-WORK.
*****************************************************************
* Include COBOL standard language procedures *
*****************************************************************
INCLUDE-SUBS.
EXEC SQL INCLUDE COBSSUB END-EXEC.
Figure 242. Sample COBOL two-phase commit application for DB2 private protocol access
(Part 7 of 7)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
main()
{
/************************************************************/
/* Include the SQLCA and SQLDA */
/************************************************************/
EXEC SQL INCLUDE SQLCA;
EXEC SQL INCLUDE SQLDA;
/************************************************************/
/* Declare variables that are not SQL-related. */
/************************************************************/
short int i; /* Loop counter */
/************************************************************/
/* Declare the following: */
/* - Parameters used to call stored procedure GETPRML */
/* - An SQLDA for DESCRIBE PROCEDURE */
/* - An SQLDA for DESCRIBE CURSOR */
/* - Result set variable locators for up to three result */
/* sets */
/************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char procnm[19]; /* INPUT parm -- PROCEDURE name */
char schema[9]; /* INPUT parm -- User's schema */
long int out_code; /* OUTPUT -- SQLCODE from the */
/* SELECT operation. */
struct {
short int parmlen;
char parmtxt[254];
} parmlst; /* OUTPUT -- RUNOPTS values */
/* for the matching row in */
/* catalog table SYSROUTINES */
struct indicators {
short int procnm_ind;
short int schema_ind;
short int out_code_ind;
short int parmlst_ind;
} parmind;
/* Indicator variable structure */
/************************************************************/
/* Call the GETPRML stored procedure to retrieve the */
/* RUNOPTS values for the stored procedure. In this */
/* example, we request the PARMLIST definition for the */
/* stored procedure named DSN8EP2. */
/* */
/* The call should complete with SQLCODE +466 because */
/* GETPRML returns result sets. */
/************************************************************/
strcpy(procnm,"dsn8ep2 ");
/* Input parameter -- PROCEDURE to be found */
strcpy(schema," ");
/* Input parameter -- Schema name for proc */
parmind.procnm_ind=0;
parmind.schema_ind=0;
parmind.out_code_ind=0;
/* Indicate that none of the input parameters */
/* have null values */
parmind.parmlst_ind=-1;
/* The parmlst parameter is an output parm. */
/* Mark PARMLST parameter as null, so the DB2 */
/* requester doesn't have to send the entire */
/* PARMLST variable to the server. This */
/* helps reduce network I/O time, because */
/* PARMLST is fairly large. */
EXEC SQL
CALL GETPRML(:procnm INDICATOR :parmind.procnm_ind,
:schema INDICATOR :parmind.schema_ind,
:out_code INDICATOR :parmind.out_code_ind,
:parmlst INDICATOR :parmind.parmlst_ind);
if(SQLCODE!=+466) /* If SQL CALL failed, */
{
/* print the SQLCODE and any */
/* message tokens */
printf("SQL CALL failed due to SQLCODE = %d\n",SQLCODE);
printf("sqlca.sqlerrmc = ");
for(i=0;i<sqlca.sqlerrml;i++)
printf("%c",sqlca.sqlerrmc[i]);
printf("\n");
}
/********************************************************/
/* Use the statement DESCRIBE PROCEDURE to */
/* return information about the result sets in the */
/* SQLDA pointed to by proc_da: */
/* - SQLD contains the number of result sets that were */
/* returned by the stored procedure. */
/* - Each SQLVAR entry has the following information */
/* about a result set: */
/* - SQLNAME contains the name of the cursor that */
/* the stored procedure uses to return the result */
/* set. */
/* - SQLIND contains an estimate of the number of */
/* rows in the result set. */
/* - SQLDATA contains the result locator value for */
/* the result set. */
/********************************************************/
EXEC SQL DESCRIBE PROCEDURE INTO :*proc_da;
/********************************************************/
/* Assume that you have examined SQLD and determined */
/* that there is one result set. Use the statement */
/* ASSOCIATE LOCATORS to establish a result set locator */
/* for the result set. */
/********************************************************/
EXEC SQL ASSOCIATE LOCATORS (:loc1) WITH PROCEDURE GETPRML;
/********************************************************/
/* Use the statement ALLOCATE CURSOR to associate a */
/* cursor for the result set. */
/********************************************************/
EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1;
/********************************************************/
/* Use the statement DESCRIBE CURSOR to determine the */
/* columns in the result set. */
/********************************************************/
EXEC SQL DESCRIBE CURSOR C1 INTO :*res_da;
/********************************************************/
/* Fetch the data from the result table. */
/********************************************************/
while(SQLCODE==0)
EXEC SQL FETCH C1 USING DESCRIPTOR :*res_da;
return;
}
ENVIRONMENT DIVISION.
CONFIGURATION SECTION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT REPOUT
ASSIGN TO UT-S-SYSPRINT.
DATA DIVISION.
FILE SECTION.
FD REPOUT
RECORD CONTAINS 127 CHARACTERS
LABEL RECORDS ARE OMITTED
DATA RECORD IS REPREC.
01 REPREC PIC X(127).
WORKING-STORAGE SECTION.
*****************************************************
* MESSAGES FOR SQL CALL *
*****************************************************
01 SQLREC.
02 BADMSG PIC X(34) VALUE
' SQL CALL FAILED DUE TO SQLCODE = '.
02 BADCODE PIC +9(5) USAGE DISPLAY.
02 FILLER PIC X(80) VALUE SPACES.
01 ERRMREC.
02 ERRMMSG PIC X(12) VALUE ' SQLERRMC = '.
02 ERRMCODE PIC X(70).
02 FILLER PIC X(38) VALUE SPACES.
01 CALLREC.
02 CALLMSG PIC X(28) VALUE
' GETPRML FAILED DUE TO RC = '.
02 CALLCODE PIC +9(5) USAGE DISPLAY.
02 FILLER PIC X(42) VALUE SPACES.
01 RSLTREC.
02 RSLTMSG PIC X(15) VALUE
' TABLE NAME IS '.
02 TBLNAME PIC X(18) VALUE SPACES.
02 FILLER PIC X(87) VALUE SPACES.
*****************************************************
* SQL INCLUDE FOR SQLCA *
*****************************************************
EXEC SQL INCLUDE SQLCA END-EXEC.
PROCEDURE DIVISION.
*------------------
PROG-START.
OPEN OUTPUT REPOUT.
* OPEN OUTPUT FILE
MOVE 'DSN8EP2 ' TO PROCNM.
* INPUT PARAMETER -- PROCEDURE TO BE FOUND
MOVE SPACES TO SCHEMA.
* INPUT PARAMETER -- SCHEMA IN SYSROUTINES
MOVE -1 TO PARMIND.
* THE PARMLST PARAMETER IS AN OUTPUT PARM.
* MARK PARMLST PARAMETER AS NULL, SO THE DB2
* REQUESTER DOESN'T HAVE TO SEND THE ENTIRE
* PARMLST VARIABLE TO THE SERVER. THIS
* HELPS REDUCE NETWORK I/O TIME, BECAUSE
* PARMLST IS FAIRLY LARGE.
EXEC SQL
CALL GETPRML(:PROCNM,
:SCHEMA,
:OUT-CODE,
:PARMLST INDICATOR :PARMIND)
END-EXEC.
/************************************************************/
/* Declare the parameters used to call the GETPRML */
/* stored procedure. */
/************************************************************/
DECLARE PROCNM CHAR(18), /* INPUT parm -- PROCEDURE name */
SCHEMA CHAR(8), /* INPUT parm -- User's schema */
OUT_CODE FIXED BIN(31),
/* OUTPUT -- SQLCODE from the */
/* SELECT operation. */
PARMLST CHAR(254) /* OUTPUT -- RUNOPTS for */
VARYING, /* the matching row in the */
/* catalog table SYSROUTINES */
PARMIND FIXED BIN(15);
/* PARMLST indicator variable */
/************************************************************/
/* Include the SQLCA */
/************************************************************/
EXEC SQL INCLUDE SQLCA;
/************************************************************/
/* Call the GETPRML stored procedure to retrieve the */
/* RUNOPTS values for the stored procedure. In this */
/* example, we request the RUNOPTS values for the */
/* stored procedure named DSN8EP2. */
/************************************************************/
PROCNM = ’DSN8EP2’;
/* Input parameter -- PROCEDURE to be found */
SCHEMA = ’ ’;
/* Input parameter -- SCHEMA in SYSROUTINES */
PARMIND = -1; /* The PARMLST parameter is an output parm. */
/* Mark PARMLST parameter as null, so the DB2 */
/* requester doesn’t have to send the entire */
/* PARMLST variable to the server. This */
/* helps reduce network I/O time, because */
/* PARMLST is fairly large. */
EXEC SQL
CALL GETPRML(:PROCNM,
:SCHEMA,
:OUT_CODE,
:PARMLST INDICATOR :PARMIND);
The output parameters from this stored procedure contain the SQLCODE from the
SELECT statement and the value of the RUNOPTS column from SYSROUTINES.
The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE C
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME "GETPRML"
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL
STAY RESIDENT NO
RUN OPTIONS "MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)"
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
/***************************************************************/
/* Declare C variables for SQL operations on the parameters. */
/* These are local variables to the C program, which you must */
/* copy to and from the parameter list provided to the stored */
/* procedure. */
/***************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char PROCNM[19];
char SCHEMA[9];
char PARMLST[255];
EXEC SQL END DECLARE SECTION;
/***************************************************************/
/* Declare cursors for returning result sets to the caller. */
/***************************************************************/
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME
FROM SYSIBM.SYSTABLES
WHERE CREATOR=:SCHEMA;
main(argc,argv)
int argc;
char *argv[];
{
/********************************************************/
/* Copy the input parameters into the area reserved in */
/* the program for SQL processing. */
/********************************************************/
strcpy(PROCNM, argv[1]);
strcpy(SCHEMA, argv[2]);
/********************************************************/
/* Issue the SQL SELECT against the SYSROUTINES */
/* DB2 catalog table. */
/********************************************************/
strcpy(PARMLST, ""); /* Clear PARMLST */
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
/********************************************************/
/* Copy the SQLCODE from the SELECT to the output */
/* parameter list. */
/********************************************************/
*(int *) argv[3] = SQLCODE;
/********************************************************/
/* Copy the PARMLST value returned by the SELECT back to*/
/* the parameter list provided to this stored procedure.*/
/********************************************************/
strcpy(argv[4], PARMLST);
/********************************************************/
/* Open cursor C1 to cause DB2 to return a result set */
/* to the caller. */
/********************************************************/
EXEC SQL OPEN C1;
}
The linkage convention for this stored procedure is GENERAL WITH NULLS.
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE C
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME "GETPRML"
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL WITH NULLS
STAY RESIDENT NO
RUN OPTIONS "MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)"
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
/***************************************************************/
/* Declare C variables used for SQL operations on the */
/* parameters. These are local variables to the C program, */
/* which you must copy to and from the parameter list provided */
/* to the stored procedure. */
/***************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char PROCNM[19];
char SCHEMA[9];
char PARMLST[255];
struct INDICATORS {
short int PROCNM_IND;
short int SCHEMA_IND;
short int OUT_CODE_IND;
short int PARMLST_IND;
} PARM_IND;
EXEC SQL END DECLARE SECTION;
/***************************************************************/
/* Declare cursors for returning result sets to the caller. */
/***************************************************************/
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME
FROM SYSIBM.SYSTABLES
WHERE CREATOR=:SCHEMA;
main(argc,argv)
int argc;
char *argv[];
{
/********************************************************/
/* Copy the input parameters into the area reserved in */
/* the local program for SQL processing. */
/********************************************************/
strcpy(PROCNM, argv[1]);
strcpy(SCHEMA, argv[2]);
/********************************************************/
/* Copy null indicator values for the parameter list. */
/********************************************************/
memcpy(&PARM_IND,(struct INDICATORS *) argv[5],
sizeof(PARM_IND));
Figure 247. A C stored procedure with linkage convention GENERAL WITH NULLS (Part 1 of
2)
/********************************************************/
/* If any input parameter is NULL, set the output */
/* return code and mark PARMLST as NULL. (This branch */
/* mirrors the PL/I version of this example.) */
/********************************************************/
if (PARM_IND.PROCNM_IND < 0 ||
    PARM_IND.SCHEMA_IND < 0) {
    *(int *) argv[3] = 9999;     /* Set output return code. */
    PARM_IND.OUT_CODE_IND = 0;   /* OUT_CODE is not NULL. */
    PARM_IND.PARMLST_IND = -1;   /* Assign NULL value to PARMLST. */
}
else {
/********************************************************/
/* If the input parameters are not NULL, issue the SQL */
/* SELECT against the SYSIBM.SYSROUTINES catalog */
/* table. */
/********************************************************/
strcpy(PARMLST, ""); /* Clear PARMLST */
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
/********************************************************/
/* Copy SQLCODE to the output parameter list. */
/********************************************************/
*(int *) argv[3] = SQLCODE;
PARM_IND.OUT_CODE_IND = 0; /* OUT_CODE is not NULL */
}
/********************************************************/
/* Copy the RUNOPTS value back to the output parameter */
/* area. */
/********************************************************/
strcpy(argv[4], PARMLST);
/********************************************************/
/* Copy the null indicators back to the output parameter*/
/* area. */
/********************************************************/
memcpy((struct INDICATORS *) argv[5],&PARM_IND,
sizeof(PARM_IND));
/********************************************************/
/* Open cursor C1 to cause DB2 to return a result set */
/* to the caller. */
/********************************************************/
EXEC SQL OPEN C1;
}
Figure 247. A C stored procedure with linkage convention GENERAL WITH NULLS (Part 2 of
2)
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE COBOL
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME "GETPRML"
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL
STAY RESIDENT NO
RUN OPTIONS "MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)"
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
DATA DIVISION.
FILE SECTION.
WORKING-STORAGE SECTION.
***************************************************
* DECLARE CURSOR FOR RETURNING RESULT SETS
***************************************************
*
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:INSCHEMA
END-EXEC.
*
LINKAGE SECTION.
***************************************************
* DECLARE THE INPUT PARAMETERS FOR THE PROCEDURE
***************************************************
01 PROCNM PIC X(18).
01 SCHEMA PIC X(8).
*******************************************************
* DECLARE THE OUTPUT PARAMETERS FOR THE PROCEDURE
*******************************************************
01 OUT-CODE PIC S9(9) USAGE BINARY.
01 PARMLST.
49 PARMLST-LEN PIC S9(4) USAGE BINARY.
49 PARMLST-TEXT PIC X(254).
Figure 248. A COBOL stored procedure with linkage convention GENERAL (Part 1 of 2)
*******************************************************
* COPY SQLCODE INTO THE OUTPUT PARAMETER AREA
*******************************************************
MOVE SQLCODE TO OUT-CODE.
*******************************************************
* OPEN CURSOR C1 TO CAUSE DB2 TO RETURN A RESULT SET
* TO THE CALLER.
*******************************************************
EXEC SQL OPEN C1
END-EXEC.
PROG-END.
GOBACK.
Figure 248. A COBOL stored procedure with linkage convention GENERAL (Part 2 of 2)
The linkage convention for this stored procedure is GENERAL WITH NULLS.
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSIBM.SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE COBOL
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME "GETPRML"
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL WITH NULLS
STAY RESIDENT NO
RUN OPTIONS "MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)"
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
DATA DIVISION.
FILE SECTION.
*
WORKING-STORAGE SECTION.
*
EXEC SQL INCLUDE SQLCA END-EXEC.
*
***************************************************
* DECLARE A HOST VARIABLE TO HOLD INPUT SCHEMA
***************************************************
01 INSCHEMA PIC X(8).
***************************************************
* DECLARE CURSOR FOR RETURNING RESULT SETS
***************************************************
*
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:INSCHEMA
END-EXEC.
*
LINKAGE SECTION.
***************************************************
* DECLARE THE INPUT PARAMETERS FOR THE PROCEDURE
***************************************************
01 PROCNM PIC X(18).
01 SCHEMA PIC X(8).
***************************************************
* DECLARE THE OUTPUT PARAMETERS FOR THE PROCEDURE
***************************************************
01 OUT-CODE PIC S9(9) USAGE BINARY.
01 PARMLST.
49 PARMLST-LEN PIC S9(4) USAGE BINARY.
49 PARMLST-TEXT PIC X(254).
***************************************************
* DECLARE THE STRUCTURE CONTAINING THE NULL
* INDICATORS FOR THE INPUT AND OUTPUT PARAMETERS.
***************************************************
01 IND-PARM.
03 PROCNM-IND PIC S9(4) USAGE BINARY.
03 SCHEMA-IND PIC S9(4) USAGE BINARY.
03 OUT-CODE-IND PIC S9(4) USAGE BINARY.
03 PARMLST-IND PIC S9(4) USAGE BINARY.
Figure 249. A COBOL stored procedure with linkage convention GENERAL WITH NULLS
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSIBM.SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE PLI
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME "GETPRML"
COLLID GETPRML
*PROCESS SYSTEM(MVS);
GETPRML:
PROC(PROCNM, SCHEMA, OUT_CODE, PARMLST)
OPTIONS(MAIN NOEXECOPS REENTRANT);
/************************************************************/
/* Execute SELECT from SYSIBM.SYSROUTINES in the catalog. */
/************************************************************/
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
The linkage convention for this stored procedure is GENERAL WITH NULLS.
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSIBM.SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE PLI
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME "GETPRML"
COLLID GETPRML
*PROCESS SYSTEM(MVS);
GETPRML:
PROC(PROCNM, SCHEMA, OUT_CODE, PARMLST, INDICATORS)
OPTIONS(MAIN NOEXECOPS REENTRANT);
IF PROCNM_IND<0 |
SCHEMA_IND<0 THEN
DO; /* If any input parm is NULL, */
OUT_CODE = 9999; /* Set output return code. */
OUT_CODE_IND = 0;
/* Output return code is not NULL.*/
PARMLST_IND = -1; /* Assign NULL value to PARMLST. */
END;
ELSE /* If input parms are not NULL, */
DO; /* */
/************************************************************/
/* Issue the SQL SELECT against the SYSIBM.SYSROUTINES */
/* DB2 catalog table. */
/************************************************************/
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
PARMLST_IND = 0; /* Mark PARMLST as not NULL. */
END GETPRML;
Figure 251. A PL/I stored procedure with linkage convention GENERAL WITH NULLS
| Assume that the PARTLIST table is populated with the values that are in Table 190:
| Table 190. PARTLIST table
| PART SUBPART QUANTITY
| 00 01 5
| 00 05 3
| 01 02 2
| 01 03 3
| 01 04 4
| 01 06 3
| 02 05 7
| 02 06 6
| 03 07 6
| 04 08 10
| 04 09 11
| 05 10 10
| 05 11 10
| 06 12 10
| 06 13 10
| 07 14 8
| 07 12 8
|
| Example 1: Single level explosion: Single level explosion answers the question,
| ″What parts are needed to build the part identified by ’01’?″. The list will include the
| direct subparts, subparts of the subparts and so on. However, if a part is used
| multiple times, its subparts are only listed once.
| WITH RPL (PART, SUBPART, QUANTITY) AS
| (SELECT ROOT.PART, ROOT.SUBPART, ROOT.QUANTITY
| FROM PARTLIST ROOT
| WHERE ROOT.PART = ’01’
| UNION ALL
| SELECT CHILD.PART, CHILD.SUBPART, CHILD.QUANTITY
| FROM RPL PARENT, PARTLIST CHILD
| WHERE PARENT.SUBPART = CHILD.PART)
| SELECT DISTINCT PART, SUBPART, QUANTITY
| FROM RPL
| ORDER BY PART, SUBPART, QUANTITY;
| The preceding query includes a common table expression, identified by the name
| RPL, that expresses the recursive part of this query. It illustrates the basic elements
| of a recursive common table expression.
| The first operand (fullselect) of the UNION, referred to as the initialization fullselect,
| gets the direct subparts of part ’01’. The FROM clause of this fullselect refers to the
| source table and will never refer to itself (RPL in this case). The result of this first
| fullselect goes into the common table expression RPL. As in this example, the
| UNION must always be a UNION ALL.
| The second operand (fullselect) of the UNION uses RPL to compute subparts of
| subparts: its FROM clause refers to the common table expression RPL and to the
| source table PARTLIST, joining a part from the source table (CHILD) to a subpart
| of the current result contained in RPL (PARENT). The result then goes back into
| RPL. The second operand of the UNION is applied repeatedly until no more
| subparts exist.
| The SELECT DISTINCT in the main fullselect of this query ensures the same
| part/subpart is not listed more than once.
| Observe in the result that part ’01’ contains subpart ’02’ which contains subpart ’06’
| and so on. Further, notice that part ’06’ is reached twice, once through part ’01’
| directly and another time through part ’02’. In the output, however, the subparts of
| part ’06’ are listed only once (this is the result of using a SELECT DISTINCT).
| Such an infinite loop results from not coding what is intended. Carefully
| determine what to code so that there is a definite end of the recursion cycle.
| In the preceding query, the select list of the second operand of the UNION in the
| recursive common table expression, identified by the name RPL, shows the
| aggregation of the quantity. To determine how many of each subpart is used, the
| quantity of the parent is multiplied by the quantity per parent of a child. If a part is
| used multiple times in different places, it requires another final aggregation. This is
| done by grouping the parts and subparts in the common table expression RPL
| and using the SUM column function in the select list of the main fullselect.
| Consider the total quantity for subpart ’06’. The value of 15 is derived from a
| quantity of 3 directly for part ’01’ and a quantity of 6 for part ’02’ which is needed
| two times by part ’01’.
| Example 3: Controlling depth: You can control the depth of a recursive query to
| answer the question, ″What are the first two levels of parts that are needed to build
| part ’01’?″ For the sake of clarity in this example, the level of each part is included
| in the result table.
| WITH RPL (LEVEL, PART, SUBPART, QUANTITY) AS
| (
| SELECT 1, ROOT.PART, ROOT.SUBPART, ROOT.QUANTITY
| FROM PARTLIST ROOT
| WHERE ROOT.PART = ’01’
| UNION ALL
| SELECT PARENT.LEVEL+1, CHILD.PART, CHILD.SUBPART, CHILD.QUANTITY
| FROM RPL PARENT, PARTLIST CHILD
| WHERE PARENT.SUBPART = CHILD.PART
| AND PARENT.LEVEL < 2
| )
| SELECT PART, LEVEL, SUBPART, QUANTITY
| FROM RPL;
| This query is similar to the query in example 1. The column LEVEL is introduced to
| count the level each subpart is from the original part. In the initialization fullselect,
| the value for the LEVEL column is initialized to 1. In the subsequent fullselect, the
| level of the parent row is incremented by 1. To control the number of levels in the
| result, the second fullselect includes the condition that the level of the parent must
| be less than 2. This ensures that the second fullselect only processes children to
| the second level.
One situation in which this technique might be useful is when a resource becomes
unavailable during a rebind of many plans or packages. DB2 normally terminates
the rebind and does not rebind the remaining plans or packages. Later, however,
you might want to rebind only the objects that remain to be rebound. You can build
REBIND subcommands for the remaining plans or packages by using DSNTIAUL to
select the plans or packages from the DB2 catalog and to create the REBIND
subcommands. You can then submit the subcommands through the DSN command
processor, as usual.
You might first need to edit the output from DSNTIAUL so that DSN can accept it as
input. The CLIST DSNTEDIT can perform much of that task for you.
For both REBIND PLAN and REBIND PACKAGE subcommands, add the DSN
command that the statement needs as the first line in the sequential data set, and
add END as the last line, using TSO edit commands. When you have edited the
sequential data set, you can run it to rebind the selected plans or packages.
If the SELECT statement returns no qualifying rows, then DSNTIAUL does not
generate REBIND subcommands.
The examples in this section generate REBIND subcommands that work in DB2
UDB for z/OS Version 8. You might need to modify the examples for prior releases
of DB2 that do not allow all of the same syntax.
Example 1:
REBIND all plans without terminating because of unavailable resources.
SELECT SUBSTR(’REBIND PLAN(’CONCAT NAME
CONCAT’) ’,1,45)
FROM SYSIBM.SYSPLAN;
Example 2:
REBIND all versions of all packages without terminating because of
unavailable resources.
SELECT SUBSTR(’REBIND PACKAGE(’CONCAT COLLID CONCAT’.’
CONCAT NAME CONCAT’.(*)) ’,1,55)
FROM SYSIBM.SYSPACKAGE;
Example 3:
REBIND all plans bound before a given date and time.
SELECT SUBSTR(’REBIND PLAN(’CONCAT NAME
CONCAT’) ’,1,45)
FROM SYSIBM.SYSPLAN
WHERE BINDDATE <= ’yyyymmdd’ AND
BINDTIME <= ’hhmmssth’;
where yyyymmdd represents the date portion and hhmmssth represents the
time portion of the timestamp string.
Example 4:
REBIND all versions of all packages bound before a given date and time.
SELECT SUBSTR(’REBIND PACKAGE(’CONCAT COLLID CONCAT’.’
CONCAT NAME CONCAT’.(*)) ’,1,55)
FROM SYSIBM.SYSPACKAGE
WHERE BINDTIME <= ’timestamp’;
where timestamp represents a timestamp string of the form
yyyy-mm-dd-hh.mm.ss.
Example 6:
REBIND all versions of all packages bound since a given date and time.
SELECT SUBSTR(’REBIND PACKAGE(’CONCAT COLLID
CONCAT’.’CONCAT NAME
CONCAT’.(*)) ’,1,55)
FROM SYSIBM.SYSPACKAGE
WHERE BINDTIME >= ’timestamp’;
where timestamp represents a timestamp string of the form
yyyy-mm-dd-hh.mm.ss.
Example 8:
REBIND all versions of all packages bound within a given date and time
range.
SELECT SUBSTR(’REBIND PACKAGE(’CONCAT COLLID CONCAT’.’
CONCAT NAME CONCAT’.(*)) ’,1,55)
FROM SYSIBM.SYSPACKAGE
WHERE BINDTIME >= ’timestamp1’ AND
BINDTIME <= ’timestamp2’;
| You specify the date and time period for which you want packages to be rebound in
| a WHERE clause of the SELECT statement that contains the REBIND command. In
| Figure 252, the WHERE clause looks like the following clause:
| WHERE BINDTIME >= ’YYYY-MM-DD-hh.mm.ss’ AND
| BINDTIME <= ’YYYY-MM-DD-hh.mm.ss’
Figure 252. Example JCL: Rebind all packages that were bound within a specified date and
time period
Figure 253 on page 1008 shows sample JCL that rebinds, with the option
DEGREE(ANY), all plans that were bound without specifying the DEGREE keyword
on BIND.
Figure 253. Example JCL: Rebind selected plans with a different bind option
IBM SQL has additional reserved words that DB2 UDB for z/OS does not enforce.
Therefore, we suggest that you do not use these additional reserved words as
ordinary identifiers.
SAVEPOINT (6)                  Y (9)
SELECT                         Y         Y
SELECT INTO                    Y         Y
SET CONNECTION                 Y    Y    Y (5)
SET host-variable Assignment   Y    Y    Y
If the MVS.VARY.* profile exists, or if the specific profile MVS.VARY.WLM exists, the
task ID that is associated with the WLM environment in which WLM_REFRESH
runs must have CONTROL access to it.
See Part 3 (Volume 1) DB2 Administration Guide for information about authorizing
access to SAF resource profiles. See z/OS MVS Planning: Operations for more
information about permitting access to the extended MCS console.
The following syntax diagram shows the SQL CALL statement for invoking
WLM_REFRESH. The linkage convention for WLM_REFRESH is GENERAL WITH
NULLS.
If you use CICS Transaction Server for OS/390 Version 1 Release 3 or later, you
can register your CICS system as a resource manager with recoverable resource
management services (RRMS). When you do that, changes to DB2 databases that
are made by the program that calls DSNACICS and the CICS server program that
DSNACICS invokes are in the same two-phase commit scope. This means that
when the calling program performs an SQL COMMIT or ROLLBACK, DB2 and RRS
inform CICS about the COMMIT or ROLLBACK.
If the CICS server program that DSNACICS invokes accesses DB2 resources, the
server program runs under a separate unit of work from the original unit of work
that calls the stored procedure. This means that the CICS server program might
deadlock with locks that the client program acquires.
The CICS server program that DSNACICS calls runs under the same user ID as
DSNACICS. That user ID depends on the SECURITY parameter that you specify
when you define DSNACICS. See Part 2 of DB2 Installation Guide.
The DSNACICS caller also needs authorization from an external security system,
such as RACF, to use CICS resources. See Part 2 of DB2 Installation Guide.
When CICS has been set up to be an RRS resource manager, the client
application can control commit processing using SQL COMMIT requests. DB2
UDB for z/OS ensures that CICS is notified to commit any resources that the
CICS server program modifies during two-phase commit processing.
When CICS has not been set up to be an RRS resource manager, CICS forces
syncpoint processing of all CICS resources at completion of the CICS server
program. This commit processing is not coordinated with the commit processing
of the client program.
Table 200 shows the contents of the DSNACICX exit parameter list, XPL. Member
DSNDXPL in data set prefix.SDSNMACS contains an assembler language mapping
macro for XPL. Sample exit DSNASCIO in data set prefix.SDSNSAMP includes a
COBOL mapping macro for XPL.
Table 200. Contents of the XPL exit parameter list

Name              Hex     Data type             Description                          Corresponding DSNACICS
                  offset                                                             parameter
----------------  ------  --------------------  -----------------------------------  ----------------------
XPL_EYEC          0       Character, 4 bytes    Eye-catcher: 'XPL '
XPL_LEN           4       Character, 4 bytes    Length of the exit parameter list
XPL_LEVEL         8       4-byte integer        Level of the parameter list          parm-level
XPL_PGMNAME       C       Character, 8 bytes    Name of the CICS server program      pgm-name
XPL_CICSAPPLID    14      Character, 8 bytes    CICS VTAM applid                     CICS-applid
XPL_CICSLEVEL     1C      4-byte integer        Level of CICS code                   CICS-level
XPL_CONNECTTYPE   20      Character, 8 bytes    Specific or generic connection       connect-type
                                                to CICS
XPL_NETNAME       28      Character, 8 bytes    Name of the specific connection      netname
                                                to CICS
XPL_MIRRORTRAN    30      Character, 8 bytes    Name of the mirror transaction       mirror-trans
                                                that invokes the CICS server
                                                program
XPL_COMMAREAPTR   38      Address, 4 bytes      Address of the COMMAREA (1)
XPL_COMMINLEN     3C      4-byte integer        Length of the COMMAREA that is       (2)
                                                passed to the server program
XPL_COMMTOTLEN    40      4-byte integer        Total length of the COMMAREA         commarea-total-len
                                                that is returned to the caller
XPL_SYNCOPTS      44      4-byte integer        Syncpoint control option             sync-opts
XPL_RETCODE       48      4-byte integer        Return code from the exit            return-code
                                                routine
XPL_MSGLEN        4C      4-byte integer        Length of the output message         return-code
                                                area
XPL_MSGAREA       50      Character, 256 bytes  Output message area                  msg-area (3)
/***********************************************/
/* INDICATOR VARIABLES FOR DSNACICS PARAMETERS */
/***********************************************/
DECLARE 1 IND_VARS,
3 IND_PARM_LEVEL BIN FIXED(15),
3 IND_PGM_NAME BIN FIXED(15),
3 IND_CICS_APPLID BIN FIXED(15),
3 IND_CICS_LEVEL BIN FIXED(15),
3 IND_CONNECT_TYPE BIN FIXED(15),
3 IND_NETNAME BIN FIXED(15),
3 IND_MIRROR_TRANS BIN FIXED(15),
3 IND_COMMAREA BIN FIXED(15),
3 IND_COMMAREA_TOTAL_LEN BIN FIXED(15),
3 IND_SYNC_OPTS BIN FIXED(15),
3 IND_RETCODE BIN FIXED(15),
3 IND_MSG_AREA BIN FIXED(15);
/**************************/
/* LOCAL COPY OF COMMAREA */
/**************************/
DECLARE P1 POINTER;
DECLARE COMMAREA_STG CHAR(130) VARYING;
/* Overlay structure for the local COMMAREA copy; its layout is */
/* assumed from the assignments below (30-byte input area plus */
/* 100-byte output area = 130 bytes). */
DECLARE 1 COMMAREA BASED(P1),
3 COMMAREA_LEN BIN FIXED(15),
3 COMMAREA_INPUT CHAR(30),
3 COMMAREA_OUTPUT CHAR(100);
/**************************************************************/
/* ASSIGN VALUES TO INPUT PARAMETERS PARM_LEVEL, PGM_NAME, */
/* MIRROR_TRANS, COMMAREA, COMMAREA_TOTAL_LEN, AND SYNC_OPTS. */
/* SET THE OTHER INPUT PARAMETERS TO NULL. THE DSNACICX */
/* USER EXIT MUST ASSIGN VALUES FOR THOSE PARAMETERS. */
/**************************************************************/
PGM_NAME = ’CICSPGM1’;
IND_PGM_NAME = 0 ;
MIRROR_TRANS = ’MIRT’;
IND_MIRROR_TRANS = 0;
P1 = ADDR(COMMAREA_STG);
COMMAREA_INPUT = ’THIS IS THE INPUT FOR CICSPGM1’;
COMMAREA_OUTPUT = ’ ’;
COMMAREA_LEN = LENGTH(COMMAREA_INPUT);
IND_COMMAREA = 0;
SYNC_OPTS = 1;
IND_SYNC_OPTS = 0;
IND_CICS_APPLID= -1;
IND_CICS_LEVEL = -1;
IND_CONNECT_TYPE = -1;
IND_NETNAME = -1;
/*****************************************/
/* INITIALIZE OUTPUT PARAMETERS TO NULL. */
/*****************************************/
IND_RETCODE = -1;
IND_MSG_AREA= -1;
/*****************************************/
/* CALL DSNACICS TO INVOKE CICSPGM1. */
/*****************************************/
EXEC SQL
CALL SYSPROC.DSNACICS(:PARM_LEVEL :IND_PARM_LEVEL,
:PGM_NAME :IND_PGM_NAME,
:CICS_APPLID :IND_CICS_APPLID,
:CICS_LEVEL :IND_CICS_LEVEL,
:CONNECT_TYPE :IND_CONNECT_TYPE,
:NETNAME :IND_NETNAME,
:MIRROR_TRANS :IND_MIRROR_TRANS,
:COMMAREA_STG :IND_COMMAREA,
:COMMAREA_TOTAL_LEN :IND_COMMAREA_TOTAL_LEN,
:SYNC_OPTS :IND_SYNC_OPTS,
:RET_CODE :IND_RETCODE,
:MSG_AREA :IND_MSG_AREA);
DSNACICS output
DSNACICS places the return code from DSNACICS execution in the return-code
parameter. If the value of the return code is non-zero, DSNACICS puts its own error
messages and any error messages that are generated by CICS and the DSNACICX
user exit in the msg-area parameter.
The COMMAREA parameter contains the COMMAREA for the CICS server
program that DSNACICS calls. The COMMAREA parameter has a VARCHAR type.
Therefore, if the server program puts data other than character data in the
COMMAREA, that data can become corrupted by code page translation as it is
passed to the caller. To avoid code page translation, you can change the
COMMAREA parameter in the CREATE PROCEDURE statement for DSNACICS to
VARCHAR(32704) FOR BIT DATA. However, if you do so, the client program might
need to do code page translation on any character data in the COMMAREA to
make it readable.
DSNACICS debugging
If you receive errors when you call DSNACICS, ask your system administrator to
add a DSNDUMP DD statement in the startup procedure for the address space in
which DSNACICS runs. The DSNDUMP DD statement causes DB2 to generate an
SVC dump whenever DSNACICS issues an error message.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may be
used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you any
license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106-0032, Japan
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION ″AS IS″ WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply to
you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those
Web sites. The materials at those Web sites are not part of the materials for this
IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes
appropriate without incurring any obligation to you.
The licensed program described in this document and all licensed material available
for it are provided by IBM under terms of the IBM Customer Agreement, IBM
International Program License Agreement, or any equivalent agreement between
us.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Trademarks
The following terms are trademarks of International Business Machines Corporation
in the United States, other countries, or both:
Java and all Java-based trademarks and logos are trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Glossary
The following terms and abbreviations are defined as they are used in the DB2 library.

after trigger. A trigger that is defined with the trigger activation time AFTER.

APPL. A VTAM® network definition statement that is used to define DB2 to VTAM as an application program that uses SNA LU 6.2 protocols.

application. A program or set of programs that performs a task; for example, a payroll application.

application-directed connection. A connection that an application manages using the SQL CONNECT statement.

application plan. The control structure that is produced during the bind process. DB2 uses the application plan to process SQL statements that it encounters during statement execution.

application process. The unit to which resources and locks are allocated. An application process involves the execution of one or more programs.

application programming interface (API). A functional interface that is supplied by the operating system or by a separately orderable licensed program that allows an application program that is written in a high-level language to use specific data or functions of the operating system or licensed program.

application requester. The component on a remote system that generates DRDA requests for data on behalf of an application. An application requester accesses a DB2 database server using the DRDA application-directed protocol.

application server. The target of a request from a remote application. In the DB2 environment, the application server function is provided by the distributed data facility and is used to access DB2 data from remote applications.

archive log. The portion of the DB2 log that contains log records that have been copied from the active log.

ASCII. An encoding scheme that is used to represent strings in many environments, typically on PCs and workstations. Contrast with EBCDIC and Unicode.

ASID. Address space identifier.

attachment facility. An interface between DB2 and TSO, IMS, CICS, or batch address spaces. An attachment facility allows application programs to access DB2.

attribute. A characteristic of an entity. For example, in database design, the phone number of an employee is one of that employee’s attributes.

authorization ID. A string that can be verified for connection to DB2 and to which a set of privileges is allowed. It can represent an individual, an organizational group, or a function, but DB2 does not determine this representation.

authorized program analysis report (APAR). A report of a problem that is caused by a suspected defect in a current release of an IBM-supplied program.

authorized program facility (APF). A facility that permits the identification of programs that are authorized to use restricted functions.

automatic query rewrite. A process that examines an SQL statement that refers to one or more base tables, and, if appropriate, rewrites the query so that it performs better. This process can also determine whether to rewrite a query so that it refers to one or more materialized query tables that are derived from the source tables.

auxiliary index. An index on an auxiliary table in which each index entry refers to a LOB.

auxiliary table. A table that stores columns outside the table in which they are defined. Contrast with base table.

B

backout. The process of undoing uncommitted changes that an application process made. This might be necessary in the event of a failure on the part of an application process, or as a result of a deadlock situation.

backward log recovery. The fourth and final phase of restart processing during which DB2 scans the log in a backward direction to apply UNDO log records for all aborted changes.

base table. (1) A table that is created by the SQL CREATE TABLE statement and that holds persistent data. Contrast with result table and temporary table. (2) A table containing a LOB column definition. The actual LOB column data is not stored with the base table. The base table contains a row identifier for each row and an indicator column for each of its LOB columns. Contrast with auxiliary table.

base table space. A table space that contains base tables.

basic predicate. A predicate that compares two values.

basic sequential access method (BSAM). An access method for storing or retrieving data blocks in a continuous sequence, using either a sequential-access or a direct-access device.

batch message processing program. In IMS, an application program that can perform batch-type processing online and can access the IMS input and output message queues.
before trigger. A trigger that is defined with the trigger activation time BEFORE.

binary integer. A basic data type that can be further classified as small integer or large integer.

binary large object (BLOB). A sequence of bytes where the size of the value ranges from 0 bytes to 2 GB−1. Such a string does not have an associated CCSID.

binary string. A sequence of bytes that is not associated with a CCSID. For example, the BLOB data type is a binary string.

bind. The process by which the output from the SQL precompiler is converted to a usable control structure, often called an access plan, application plan, or package. During this process, access paths to the data are selected and some authorization checking is performed. The types of bind are:
   automatic bind. (More correctly, automatic rebind.) A process by which SQL statements are bound automatically (without a user issuing a BIND command) when an application process begins execution and the bound application plan or package it requires is not valid.
   dynamic bind. A process by which SQL statements are bound as they are entered.
   incremental bind. A process by which SQL statements are bound during the execution of an application process.
   static bind. A process by which SQL statements are bound after they have been precompiled. All static SQL statements are prepared for execution at the same time.

bit data. Data that is character type CHAR or VARCHAR and is not associated with a coded character set.

BLOB. Binary large object.

block fetch. A capability in which DB2 can retrieve, or fetch, a large set of rows together. Using block fetch can significantly reduce the number of messages that are being sent across the network. Block fetch applies only to cursors that do not update data.

BMP. Batch message processing (IMS). See batch message processing program.

bootstrap data set (BSDS). A VSAM data set that contains name and status information for DB2, as well as RBA range specifications, for all active and archive log data sets. It also contains passwords for the DB2 directory and catalog, and lists of conditional restart and checkpoint records.

BSAM. Basic sequential access method.

BSDS. Bootstrap data set.

buffer pool. Main storage that is reserved to satisfy the buffering requirements for one or more table spaces or indexes.

built-in data type. A data type that IBM supplies. Among the built-in data types for DB2 UDB for z/OS are string, numeric, ROWID, and datetime. Contrast with distinct type.

built-in function. A function that DB2 supplies. Contrast with user-defined function.

business dimension. A category of data, such as products or time periods, that an organization might want to analyze.

C

cache structure. A coupling facility structure that stores data that can be available to all members of a Sysplex. A DB2 data sharing group uses cache structures as group buffer pools.

CAF. Call attachment facility.

call attachment facility (CAF). A DB2 attachment facility for application programs that run in TSO or z/OS batch. The CAF is an alternative to the DSN command processor and provides greater control over the execution environment.

call-level interface (CLI). A callable application programming interface (API) for database access, which is an alternative to using embedded SQL. In contrast to embedded SQL, DB2 ODBC (which is based on the CLI architecture) does not require the user to precompile or bind applications, but instead provides a standard set of functions to process SQL statements and related services at run time.

cascade delete. The way in which DB2 enforces referential constraints when it deletes all descendent rows of a deleted parent row.

CASE expression. An expression that is selected based on the evaluation of one or more conditions.

cast function. A function that is used to convert instances of a (source) data type into instances of a different (target) data type. In general, a cast function has the name of the target data type. It has one single argument whose type is the source data type; its return type is the target data type.

castout. The DB2 process of writing changed pages from a group buffer pool to disk.

castout owner. The DB2 member that is responsible for casting out a particular page set or partition.

catalog. In DB2, a collection of tables that contains descriptions of objects such as tables, views, and indexes.
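The CASE expression and cast function entries above describe DB2 SQL features that can be sketched outside DB2 as well. The example below is an illustration only: it uses SQLite through Python's standard sqlite3 module as a stand-in (DB2 syntax is similar but not identical), and the EMP table with its NAME and SALARY columns is hypothetical, not the DB2 sample database.

```python
import sqlite3

# Hypothetical table for illustration; not the DB2 sample database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (NAME TEXT, SALARY INTEGER)")
conn.executemany("INSERT INTO EMP VALUES (?, ?)",
                 [("ANN", 52000), ("BOB", 31000)])

# A CASE expression selects a result based on the evaluation of conditions;
# CAST converts a value from a source data type to a target data type.
rows = conn.execute("""
    SELECT NAME,
           CASE WHEN SALARY >= 40000 THEN 'SENIOR' ELSE 'JUNIOR' END,
           CAST(SALARY AS TEXT)
    FROM EMP ORDER BY NAME
""").fetchall()
print(rows)  # [('ANN', 'SENIOR', '52000'), ('BOB', 'JUNIOR', '31000')]
```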
catalog table. Any table in the DB2 catalog.

CCSID. Coded character set identifier.

CDB. Communications database.

CDRA. Character Data Representation Architecture.

CEC. Central electronic complex. See central processor complex.

central electronic complex (CEC). See central processor complex.

central processor (CP). The part of the computer that contains the sequencing and processing facilities for instruction execution, initial program load, and other machine operations.

central processor complex (CPC). A physical collection of hardware (such as an ES/3090™) that consists of main storage, one or more central processors, timers, and channels.

CFRM. Coupling facility resource management.

CFRM policy. A declaration by a z/OS administrator regarding the allocation rules for a coupling facility structure.

character conversion. The process of changing characters from one encoding scheme to another.

Character Data Representation Architecture (CDRA). An architecture that is used to achieve consistent representation, processing, and interchange of string data.

character large object (CLOB). A sequence of bytes representing single-byte characters or a mixture of single- and double-byte characters where the size of the value can be up to 2 GB−1. In general, character large object values are used whenever a character string might exceed the limits of the VARCHAR type.

character set. A defined set of characters.

character string. A sequence of bytes that represent bit data, single-byte characters, or a mixture of single-byte and multibyte characters.

check constraint. A user-defined constraint that specifies the values that specific columns of a base table can contain.

check integrity. The condition that exists when each row in a table conforms to the check constraints that are defined on that table. Maintaining check integrity requires DB2 to enforce check constraints on operations that add or change data.

check pending. A state of a table space or partition that prevents its use by some utilities and by some SQL statements because of rows that violate referential constraints, check constraints, or both.

checkpoint. A point at which DB2 records internal status information on the DB2 log; the recovery process uses this information if DB2 abnormally terminates.

child lock. For explicit hierarchical locking, a lock that is held on either a table, page, row, or a large object (LOB). Each child lock has a parent lock. See also parent lock.

CI. Control interval.

CICS. Represents (in this publication): CICS Transaction Server for z/OS: Customer Information Control System Transaction Server for z/OS.

CICS attachment facility. A DB2 subcomponent that uses the z/OS subsystem interface (SSI) and cross-storage linkage to process requests from CICS to DB2 and to coordinate resource commitment.

CIDF. Control interval definition field.

claim. A notification to DB2 that an object is being accessed. Claims prevent drains from occurring until the claim is released, which usually occurs at a commit point. Contrast with drain.

claim class. A specific type of object access that can be one of the following isolation levels:
   Cursor stability (CS)
   Repeatable read (RR)
   Write

claim count. A count of the number of agents that are accessing an object.

class of service. A VTAM term for a list of routes through a network, arranged in an order of preference for their use.

class word. A single word that indicates the nature of a data attribute. For example, the class word PROJ indicates that the attribute identifies a project.

clause. In SQL, a distinct part of a statement, such as a SELECT clause or a WHERE clause.

CLI. Call-level interface.

client. See requester.

CLIST. Command list. A language for performing TSO tasks.

CLOB. Character large object.

closed application. An application that requires exclusive use of certain statements on certain DB2 objects, so that the objects are managed solely through the application’s external interface.
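The check constraint and check integrity entries above can be illustrated with a small sketch. This is an assumption-laden example, not DB2: it uses SQLite through Python's sqlite3 module, and the PARTS table with its QTY column is invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A check constraint restricts the values that specific columns of a
# base table can contain (here: quantities must not be negative).
conn.execute("CREATE TABLE PARTS (PARTNO INTEGER, QTY INTEGER CHECK (QTY >= 0))")
conn.execute("INSERT INTO PARTS VALUES (1, 10)")   # conforms to the constraint

# Check integrity holds when every row conforms; an operation that would
# violate the constraint is rejected when the DBMS enforces it.
try:
    conn.execute("INSERT INTO PARTS VALUES (2, -5)")
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(violated)  # True: the violating insert was rejected
```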
CLPA. Create link pack area.

clustering index. An index that determines how rows are physically ordered (clustered) in a table space. If a clustering index on a partitioned table is not a partitioning index, the rows are ordered in cluster sequence within each data partition instead of spanning partitions. Prior to Version 8 of DB2 UDB for z/OS, the partitioning index was required to be the clustering index.

coded character set. A set of unambiguous rules that establish a character set and the one-to-one relationships between the characters of the set and their coded representations.

coded character set identifier (CCSID). A 16-bit number that uniquely identifies a coded representation of graphic characters. It designates an encoding scheme identifier and one or more pairs consisting of a character set identifier and an associated code page identifier.

code page. A set of assignments of characters to code points. In EBCDIC, for example, the character 'A' is assigned code point X'C1', and character 'B' is assigned code point X'C2'. Within a code page, each code point has only one specific meaning.

code point. In CDRA, a unique bit pattern that represents a character in a code page.

coexistence. During migration, the period of time in which two releases exist in the same data sharing group.

cold start. A process by which DB2 restarts without processing any log records. Contrast with warm start.

collection. A group of packages that have the same qualifier.

column. The vertical component of a table. A column has a name and a particular data type (for example, character, decimal, or integer).

column function. An operation that derives its result by using values from one or more rows. Contrast with scalar function.

"come from" checking. An LU 6.2 security option that defines a list of authorization IDs that are allowed to connect to DB2 from a partner LU.

command recognition character (CRC). A character that permits a z/OS console operator or an IMS subsystem user to route DB2 commands to specific DB2 subsystems.

command scope. The scope of command operation in a data sharing group. If a command has member scope, the command displays information only from the one member or affects only non-shared resources that are owned locally by that member. If a command has group scope, the command displays information from all members, affects non-shared resources that are owned locally by all members, displays information on sharable resources, or affects sharable resources.

commit. The operation that ends a unit of work by releasing locks so that the database changes that are made by that unit of work can be perceived by other processes.

commit point. A point in time when data is considered consistent.

committed phase. The second phase of the multisite update process that requests all participants to commit the effects of the logical unit of work.

common service area (CSA). In z/OS, a part of the common area that contains data areas that are addressable by all address spaces.

communications database (CDB). A set of tables in the DB2 catalog that are used to establish conversations with remote database management systems.

comparison operator. A token (such as =, >, or <) that is used to specify a relationship between two values.

composite key. An ordered set of key columns of the same table.

compression dictionary. The dictionary that controls the process of compression and decompression. This dictionary is created from the data in the table space or table space partition.

concurrency. The shared use of resources by more than one application process at the same time.

conditional restart. A DB2 restart that is directed by a user-defined conditional restart control record (CRCR).
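The commit entry above, together with the backout entry earlier in this glossary, describes the two ways a unit of work can end. The following minimal sketch shows both; it assumes SQLite via Python's sqlite3 module as a stand-in for DB2, and the ACCT table is hypothetical.

```python
import sqlite3

# commit ends a unit of work and makes its changes visible to other
# processes; backout (ROLLBACK here) undoes uncommitted changes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ACCT (ID INTEGER, BALANCE INTEGER)")
conn.execute("INSERT INTO ACCT VALUES (1, 100)")
conn.commit()        # commit point: the data is considered consistent

conn.execute("UPDATE ACCT SET BALANCE = 0 WHERE ID = 1")
conn.rollback()      # backout: the uncommitted update is undone

balance = conn.execute("SELECT BALANCE FROM ACCT WHERE ID = 1").fetchone()[0]
print(balance)  # 100: only committed work survives
```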
connection declaration clause. In SQLJ, a statement that declares a connection to a data source.

connection handle. The data object containing information that is associated with a connection that DB2 ODBC manages. This includes general status information, transaction status, and diagnostic information.

connection ID. An identifier that is supplied by the attachment facility and that is associated with a specific address space connection.

consistency token. A timestamp that is used to generate the version identifier for an application. See also version.

constant. A language element that specifies an unchanging value. Constants are classified as string constants or numeric constants. Contrast with variable.

constraint. A rule that limits the values that can be inserted, deleted, or updated in a table. See referential constraint, check constraint, and unique constraint.

context. The application’s logical connection to the data source and associated internal DB2 ODBC connection information that allows the application to direct its operations to a data source. A DB2 ODBC context represents a DB2 thread.

contracting conversion. A process that occurs when the length of a converted string is smaller than that of the source string. For example, this process occurs when an EBCDIC mixed-data string that contains DBCS characters is converted to ASCII mixed data; the converted string is shorter because of the removal of the shift codes.

control interval (CI). A fixed-length area of disk in which VSAM stores records and creates distributed free space. Also, in a key-sequenced data set or file, the set of records that an entry in the sequence-set index record points to. The control interval is the unit of information that VSAM transmits to or from disk. A control interval always includes an integral number of physical records.

control interval definition field (CIDF). In VSAM, a field that is located in the 4 bytes at the end of each control interval; it describes the free space, if any, in the control interval.

conversation. Communication, which is based on LU 6.2 or Advanced Program-to-Program Communication (APPC), between an application and a remote transaction program over an SNA logical unit-to-logical unit (LU-LU) session that allows communication while processing a transaction.

coordinator. The system component that coordinates the commit or rollback of a unit of work that includes work that is done on one or more other systems.

copy pool. A named set of SMS storage groups that contains data that is to be copied collectively. A copy pool is an SMS construct that lets you define which storage groups are to be copied by using FlashCopy® functions. HSM determines which volumes belong to a copy pool.

copy target. A named set of SMS storage groups that are to be used as containers for copy pool volume copies. A copy target is an SMS construct that lets you define which storage groups are to be used as containers for volumes that are copied by using FlashCopy functions.

copy version. A point-in-time FlashCopy copy that is managed by HSM. Each copy pool has a version parameter that specifies how many copy versions are maintained on disk.

correlated columns. A relationship between the value of one column and the value of another column.

correlated subquery. A subquery (part of a WHERE or HAVING clause) that is applied to a row or group of rows of a table or view that is named in an outer subselect statement.

correlation ID. An identifier that is associated with a specific thread. In TSO, it is either an authorization ID or the job name.

correlation name. An identifier that designates a table, a view, or individual rows of a table or view within a single SQL statement. It can be defined in any FROM clause or in the first clause of an UPDATE or DELETE statement.

cost category. A category into which DB2 places cost estimates for SQL statements at the time the statement is bound. A cost estimate can be placed in either of the following cost categories:
   A: Indicates that DB2 had enough information to make a cost estimate without using default values.
   B: Indicates that some condition exists for which DB2 was forced to use default values for its estimate.
The cost category is externalized in the COST_CATEGORY column of the DSN_STATEMNT_TABLE when a statement is explained.

coupling facility. A special PR/SM™ LPAR logical partition that runs the coupling facility control program and provides high-speed caching, list processing, and locking functions in a Parallel Sysplex®.

coupling facility resource management. A component of z/OS that provides the services to manage coupling facility resources in a Parallel Sysplex. This management includes the enforcement of CFRM policies to ensure that the coupling facility and structure requirements are satisfied.
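The correlated subquery and correlation name entries above can be seen together in one query. The sketch below is illustrative only (SQLite via Python's sqlite3, invented EMP table): X is a correlation name for the outer table, and the subquery is correlated because it references X.DEPT from the outer subselect.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (NAME TEXT, DEPT TEXT, SALARY INTEGER)")
conn.executemany("INSERT INTO EMP VALUES (?, ?, ?)",
                 [("ANN", "A", 60), ("BOB", "A", 40), ("CAL", "B", 30)])

# The subquery refers to X.DEPT, a column of the row currently being
# considered by the outer subselect, so the average is recomputed for
# each department: employees earning above their department's average.
rows = conn.execute("""
    SELECT NAME FROM EMP AS X
    WHERE SALARY > (SELECT AVG(SALARY) FROM EMP AS Y WHERE Y.DEPT = X.DEPT)
    ORDER BY NAME
""").fetchall()
print(rows)  # [('ANN',)]: only ANN (60) exceeds department A's average (50)
```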
CP. Central processor.

CPC. Central processor complex.

C++ member. A data object or function in a structure, union, or class.

C++ member function. An operator or function that is declared as a member of a class. A member function has access to the private and protected data members and to the member functions of objects in its class. Member functions are also called methods.

C++ object. (1) A region of storage. An object is created when a variable is defined or a new function is invoked. (2) An instance of a class.

CRC. Command recognition character.

CRCR. Conditional restart control record. See also conditional restart.

create link pack area (CLPA). An option that is used during IPL to initialize the link pack pageable area.

created temporary table. A table that holds temporary data and is defined with the SQL statement CREATE GLOBAL TEMPORARY TABLE. Information about created temporary tables is stored in the DB2 catalog, so this kind of table is persistent and can be shared across application processes. Contrast with declared temporary table. See also temporary table.

cross-memory linkage. A method for invoking a program in a different address space. The invocation is synchronous with respect to the caller.

current status rebuild. The second phase of restart processing during which the status of the subsystem is reconstructed from information on the log.

cursor. A named control structure that an application program uses to point to a single row or multiple rows within some ordered set of rows of a result table. A cursor can be used to retrieve, update, or delete rows from a result table.

cursor sensitivity. The degree to which database updates are visible to the subsequent FETCH statements in a cursor. A cursor can be sensitive to changes that are made with positioned update and delete statements specifying the name of that cursor. A cursor can also be sensitive to changes that are made with searched update or delete statements, or with cursors other than this cursor. These changes can be made by this application process or by another application process.

cursor stability (CS). The isolation level that provides maximum concurrency without the ability to read uncommitted data. With cursor stability, a unit of work holds locks only on its uncommitted changes and on the current row of each of its cursors.

cursor table (CT). The copy of the skeleton cursor table that is used by an executing application process.

cycle. A set of tables that can be ordered so that each table is a descendent of the one before it, and the first table is a descendent of the last table. A self-referencing table is a cycle with a single member.
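The cursor entry above describes a row-at-a-time pointer into a result table. The sketch below shows the analogous FETCH-style model using a sqlite3 cursor; this is an illustration under that assumption, not DB2's DECLARE CURSOR / OPEN / FETCH embedded SQL, and table T is invented.

```python
import sqlite3

# A cursor points to rows within an ordered set of rows of a result table
# and advances one row per fetch, much like the FETCH statement described
# in the glossary entry.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (N INTEGER)")
conn.executemany("INSERT INTO T VALUES (?)", [(1,), (2,), (3,)])

cur = conn.cursor()
cur.execute("SELECT N FROM T ORDER BY N")
first = cur.fetchone()   # advances the cursor to the first row
rest = cur.fetchall()    # retrieves the remaining rows of the result table
print(first, rest)  # (1,) [(2,), (3,)]
```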
…SQL statements are not modified and are sent unchanged to the database server.

database descriptor (DBD). An internal representation of a DB2 database definition, which reflects the data definition that is in the DB2 catalog. The objects that are defined in a database descriptor are table spaces, tables, indexes, index spaces, relationships, check constraints, and triggers. A DBD also contains information about accessing tables in the database.

database exception status. An indication that something is wrong with a database. All members of a data sharing group must know and share the exception status of databases.

database identifier (DBID). An internal identifier of the database.

database management system (DBMS). A software system that controls the creation, organization, and modification of a database and the access to the data that is stored within it.

database request module (DBRM). A data set member that is created by the DB2 precompiler and that contains information about SQL statements. DBRMs are used in the bind process.

database server. The target of a request from a local application or an intermediate database server. In the DB2 environment, the database server function is provided by the distributed data facility to access DB2 data from local applications, or from a remote database server that acts as an intermediate database server.

data currency. The state in which data that is retrieved into a host variable in your program is a copy of data in the base table.

data definition name (ddname). The name of a data definition (DD) statement that corresponds to a data control block containing the same name.

data dictionary. A repository of information about an organization’s application programs, databases, logical data models, users, and authorizations. A data dictionary can be manual or automated.

data-driven business rules. Constraints on particular data values that exist as a result of requirements of the business.

Data Language/I (DL/I). The IMS data manipulation language; a common high-level interface between a user application and IMS.

data mart. A small data warehouse that applies to a single department or team. See also data warehouse.

data mining. The process of collecting critical business information from a data warehouse, correlating it, and uncovering associations, patterns, and trends.

data partition. A VSAM data set that is contained within a partitioned table space.

data-partitioned secondary index (DPSI). A secondary index that is partitioned. The index is partitioned according to the underlying data.

data sharing. The ability of two or more DB2 subsystems to directly access and change a single set of data.

data sharing group. A collection of one or more DB2 subsystems that directly access and change the same data while maintaining data integrity.

data sharing member. A DB2 subsystem that is assigned by XCF services to a data sharing group.

data source. A local or remote relational or non-relational data manager that is capable of supporting data access via an ODBC driver that supports the ODBC APIs. In the case of DB2 UDB for z/OS, the data sources are always relational database managers.

data space. In releases prior to DB2 UDB for z/OS, Version 8, a range of up to 2 GB of contiguous virtual storage addresses that a program can directly manipulate. Unlike an address space, a data space can hold only data; it does not contain common areas, system data, or programs.

data type. An attribute of columns, literals, host variables, special registers, and the results of functions and expressions.

data warehouse. A system that provides critical business information to an organization. The data warehouse system cleanses the data for accuracy and currency, and then presents the data to decision makers so that they can interpret and use it effectively and efficiently.

date. A three-part value that designates a day, month, and year.

date duration. A decimal integer that represents a number of years, months, and days.

datetime value. A value of the data type DATE, TIME, or TIMESTAMP.

DBA. Database administrator.

DBCLOB. Double-byte character large object.

DBCS. Double-byte character set.
DBRM. Database request module. deferred embedded SQL. SQL statements that are
neither fully static nor fully dynamic. Like static
DB2 catalog. Tables that are maintained by DB2 and statements, they are embedded within an application,
contain descriptions of DB2 objects, such as tables, but like dynamic statements, they are prepared during
views, and indexes. the execution of the application.
DB2 command. An instruction to the DB2 subsystem deferred write. The process of asynchronously writing
that a user enters to start or stop DB2, to display changed data pages to disk.
information on current users, to start or stop databases,
to display information on the status of databases, and degree of parallelism. The number of concurrently
so on. executed operations that are initiated to process a
query.
DB2 for VSE & VM. The IBM DB2 relational database
management system for the VSE and VM operating delete-connected. A table that is a dependent of table
systems. P or a dependent of a table to which delete operations
from table P cascade.
DB2I. DB2 Interactive.
delete hole. The location on which a cursor is
DB2 Interactive (DB2I). The DB2 facility that provides for the execution of SQL statements, DB2 (operator) commands, programmer commands, and utility invocation.

DB2I Kanji Feature. The tape that contains the panels and jobs that allow a site to display DB2I panels in Kanji.

DB2 PM. DB2 Performance Monitor.

DB2 thread. The DB2 structure that describes an application’s connection, traces its progress, processes resource functions, and delimits its accessibility to DB2 resources and services.

DCLGEN. Declarations generator.

DDF. Distributed data facility.

ddname. Data definition name.

deadlock. Unresolvable contention for the use of a resource, such as a table or an index.

declarations generator (DCLGEN). A subcomponent of DB2 that generates SQL table declarations and COBOL, C, or PL/I data structure declarations that conform to the table. The declarations are generated from DB2 system catalog information. DCLGEN is also a DSN subcommand.

declared temporary table. A table that holds temporary data and is defined with the SQL statement DECLARE GLOBAL TEMPORARY TABLE. Information about declared temporary tables is not stored in the DB2 catalog, so this kind of table is not persistent and can be used only by the application process that issued the DECLARE statement. Contrast with created temporary table. See also temporary table.

delete hole. The location on which a cursor is positioned when a row in a result table is refetched and the row no longer exists on the base table, because another cursor deleted the row between the time the cursor first included the row in the result table and the time the cursor tried to refetch it.

delete rule. The rule that tells DB2 what to do to a dependent row when a parent row is deleted. For each relationship, the rule might be CASCADE, RESTRICT, SET NULL, or NO ACTION.

delete trigger. A trigger that is defined with the triggering SQL operation DELETE.

delimited identifier. A sequence of characters that are enclosed within double quotation marks ("). The sequence must consist of a letter followed by zero or more characters, each of which is a letter, digit, or the underscore character (_).

delimiter token. A string constant, a delimited identifier, an operator symbol, or any of the special characters that are shown in DB2 syntax diagrams.

denormalization. A key step in the task of building a physical relational database design. Denormalization is the intentional duplication of columns in multiple tables, and the consequence is increased data redundancy. Denormalization is sometimes necessary to minimize performance problems. Contrast with normalization.

dependent. An object (row, table, or table space) that has at least one parent. The object is also said to be a dependent (row, table, or table space) of its parent. See also parent row, parent table, parent table space.

dependent row. A row that contains a foreign key that matches the value of a primary key in the parent row.

dependent table. A table that is a dependent in at least one referential constraint.
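The delete rules described above (CASCADE, SET NULL, and so on) can be made concrete with a short script. This is an illustrative sketch only: it uses SQLite, via Python's sqlite3 module, as a stand-in for DB2, and the dept/emp tables are invented for the example.

```python
import sqlite3

# Illustration of a delete rule (ON DELETE CASCADE) using SQLite as a
# stand-in for DB2; table and column names are invented for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only with this pragma
conn.execute("CREATE TABLE dept (deptno INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE emp (
    empno  INTEGER PRIMARY KEY,
    deptno INTEGER REFERENCES dept(deptno) ON DELETE CASCADE)""")
conn.execute("INSERT INTO dept VALUES (10)")
conn.execute("INSERT INTO emp VALUES (1, 10)")

# Deleting the parent row cascades to its dependent rows.
conn.execute("DELETE FROM dept WHERE deptno = 10")
remaining = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
print(remaining)   # 0 -- the dependent row was deleted as well
```

With RESTRICT or NO ACTION semantics the same DELETE would instead fail while the dependent row exists; with SET NULL, the dependent row's foreign key would be set to null.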
Glossary 1049
directory. The DB2 system database that contains internal objects such as database descriptors and skeleton cursor tables.

distinct type. A user-defined data type that is internally represented as an existing type (its source type), but is considered to be a separate and incompatible type for semantic purposes.

distributed data. Data that resides on a DBMS other than the local system.

distributed data facility (DDF). A set of DB2 components through which DB2 communicates with another relational database management system.

downstream. The set of nodes in the syncpoint tree that is connected to the local DBMS as a participant in the execution of a two-phase commit.

DPSI. Data-partitioned secondary index.

drain. The act of acquiring a locked resource by quiescing access to that object.

drain lock. A lock on a claim class that prevents a claim from occurring.

DRDA. Distributed Relational Database Architecture.

DRDA access. An open method of accessing distributed data that you can use to connect to another database server to execute packages that were previously bound at the server location. You use the SQL CONNECT statement or an SQL statement with a three-part name to identify the server. Contrast with private protocol access.

DSN. (1) The default DB2 subsystem name. (2) The name of the TSO command processor of DB2. (3) The first three characters of DB2 module and macro names.

duration. A number that represents an interval of time. See also date duration, labeled duration, and time duration.

dynamic cursor. A named control structure that an application program uses to change the size of the result table and the order of its rows after the cursor is opened. Contrast with static cursor.

dynamic dump. A dump that is issued during the execution of a program, usually under the control of that program.

dynamic SQL. SQL statements that are prepared and executed within an application program while the program is executing. In dynamic SQL, the SQL source is contained in host language variables rather than being coded into the application program. The SQL statement can change several times during the application program’s execution.

dynamic statement cache pool. A cache, located above the 2-GB storage line, that holds dynamic statements.

E

EB. See exabyte.

EBCDIC. Extended binary coded decimal interchange code. An encoding scheme that is used to represent character data in the z/OS, VM, VSE, and iSeries environments. Contrast with ASCII and Unicode.

e-business. The transformation of key business processes through the use of Internet technologies.

EDM pool. A pool of main storage that is used for database descriptors, application plans, authorization cache, and application packages.

EID. Event identifier.

embedded SQL. SQL statements that are coded within an application program. See static SQL.

enclave. In Language Environment, an independent collection of routines, one of which is designated as the main routine. An enclave is similar to a program or run unit.

encoding scheme. A set of rules to represent character data (ASCII, EBCDIC, or Unicode).

entity. A significant object of interest to an organization.

enumerated list. A set of DB2 objects that are defined with a LISTDEF utility control statement in which pattern-matching characters (*, %, _ or ?) are not used.

environment. A collection of names of logical and physical resources that are used to support the performance of a function.

environment handle. In DB2 ODBC, the data object that contains global information regarding the state of the application. An environment handle must be allocated before a connection handle can be allocated. Only one environment handle can be allocated per application.

EOM. End of memory.

EOT. End of task.

equijoin. A join operation in which the join-condition has the form expression = expression.

error page range. A range of pages that are considered to be physically damaged. DB2 does not allow users to access any pages that fall within this range.

ESMT. External subsystem module table (in IMS).

EUR. IBM European Standards.

exabyte. For processor, real and virtual storage capacities and channel volume: 1 152 921 504 606 846 976 bytes, or 2^60 bytes.

exception table. A table that holds rows that violate referential constraints or check constraints that the CHECK DATA utility finds.

exclusive lock. A lock that prevents concurrently executing application processes from reading or changing data. Contrast with share lock.

executable statement. An SQL statement that can be embedded in an application program, dynamically prepared and executed, or issued interactively.

execution context. In SQLJ, a Java object that can be used to control the execution of SQL statements.
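The dynamic SQL entry above describes SQL source held in host-language variables and prepared at run time. The sketch below illustrates the idea with Python and SQLite standing in for a DB2 host program; the table and columns are invented for the demo.

```python
import sqlite3

# Dynamic SQL: the SQL source lives in an ordinary program variable and is
# prepared and executed at run time, so the statement text can change
# between executions. SQLite is a stand-in for DB2 here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (c1 INTEGER, c2 INTEGER)")
conn.execute("INSERT INTO t VALUES (1, 2)")

results = []
for col in ("c1", "c2"):             # the statement changes on each pass
    stmt = f"SELECT {col} FROM t"    # SQL text built in a program variable
    results.append(conn.execute(stmt).fetchone()[0])
print(results)   # [1, 2]
```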
forget. In a two-phase commit operation, (1) the vote that is sent to the prepare phase when the participant has not modified any data. The forget vote allows a participant to release locks and forget about the logical unit of work. This is also referred to as the read-only vote. (2) The response to the committed request in the second phase of the operation.

forward log recovery. The third phase of restart processing during which DB2 processes the log in a forward direction to apply all REDO log records.

free space. The total amount of unused space in a page; that is, the space that is not used to store records or control information is free space.

full outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves the unmatched rows of both tables. See also join.

fullselect. A subselect, a values-clause, or a number of both that are combined by set operators. Fullselect specifies a result table. If UNION is not used, the result of the fullselect is the result of the specified subselect.

fully escaped mapping. A mapping from an SQL identifier to an XML name when the SQL identifier is a column name.

function. A mapping, which is embodied as a program (the function body) that is invocable by means of zero or more input values (arguments) to a single value (the result). See also column function and scalar function. Functions can be user-defined, built-in, or generated by DB2. (See also built-in function, cast function, external function, sourced function, SQL function, and user-defined function.)

function definer. The authorization ID of the owner of the schema of the function that is specified in the CREATE FUNCTION statement.

function implementer. The authorization ID of the owner of the function program and function package.

function package. A package that results from binding the DBRM for a function program.

function package owner. The authorization ID of the user who binds the function program’s DBRM into a function package.

function resolution. The process, internal to the DBMS, by which a function invocation is bound to a particular function instance. This process uses the function name, the data types of the arguments, and a list of the applicable schema names (called the SQL path) to make the selection. This process is sometimes called function selection.

function selection. See function resolution.

function signature. The logical concatenation of a fully qualified function name with the data types of all of its parameters.

G

GB. Gigabyte (1 073 741 824 bytes).

GBP. Group buffer pool.

GBP-dependent. The status of a page set or page set partition that is dependent on the group buffer pool. Either read/write interest is active among DB2 subsystems for this page set, or the page set has changed pages in the group buffer pool that have not yet been cast out to disk.

generalized trace facility (GTF). A z/OS service program that records significant system events such as I/O interrupts, SVC interrupts, program interrupts, or external interrupts.

generic resource name. A name that VTAM uses to represent several application programs that provide the same function in order to handle session distribution and balancing in a Sysplex environment.

getpage. An operation in which DB2 accesses a data page.

global lock. A lock that provides concurrency control within and among DB2 subsystems. The scope of the lock is across all DB2 subsystems of a data sharing group.

global lock contention. Conflicts on locking requests between different DB2 members of a data sharing group when those members are trying to serialize shared resources.

governor. See resource limit facility.

graphic string. A sequence of DBCS characters.

gross lock. The shared, update, or exclusive mode locks on a table, partition, or table space.

group buffer pool (GBP). A coupling facility cache structure that is used by a data sharing group to cache data and to ensure that the data is consistent for all members.

group buffer pool duplexing. The ability to write data to two instances of a group buffer pool structure: a primary group buffer pool and a secondary group buffer pool. z/OS publications refer to these instances as the "old" (for primary) and "new" (for secondary) structures.

group level. The release level of a data sharing group, which is established when the first member migrates to a new release.
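The fullselect entry above says that subselects can be combined with set operators such as UNION. As a hedged illustration, the following uses SQLite (not DB2) through Python's sqlite3 module; the literal values are invented for the demo.

```python
import sqlite3

# A fullselect that combines subselects with the UNION set operator; the
# result table is the union of the subselect results, with duplicate rows
# removed. SQLite stands in for DB2 here.
conn = sqlite3.connect(":memory:")
stmt = """
    SELECT 1 AS n
    UNION
    SELECT 2
    UNION
    SELECT 1          -- duplicate row, removed by UNION
    ORDER BY n
"""
rows = [r[0] for r in conn.execute(stmt)]
print(rows)   # [1, 2]
```

If UNION ALL were used instead, the duplicate row would be kept.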
group name. The z/OS XCF identifier for a data sharing group.

group restart. A restart of at least one member of a data sharing group after the loss of either locks or the shared communications area.

H

heuristic decision. A decision that forces indoubt resolution at a participant by means other than automatic resynchronization between coordinator and participant.

hole. A row of the result table that cannot be accessed because of a delete or an update that has been performed on the row. See also delete hole and update hole.

home address space. The area of storage that z/OS currently recognizes as dispatched.

host. The set of programs and resources that are available on a given TCP/IP instance.

host expression. A Java variable or expression that is referenced by SQL clauses in an SQLJ application program.

host identifier. A name that is declared in the host program.

host language. A programming language in which you can embed SQL statements.

host program. An application program that is written in a host language and that contains embedded SQL statements.

host structure. In an application program, a structure that is referenced by embedded SQL statements.

host variable. In an application program, an application variable that is referenced by embedded SQL statements.

host variable array. An array of elements, each of which corresponds to a value for a column. The dimension of the array determines the maximum number of rows for which the array can be used.

HSM. Hierarchical storage manager.

I

identify. A request that an attachment service program in an address space that is separate from DB2 issues through the z/OS subsystem interface to inform DB2 of its existence and to initiate the process of becoming connected to DB2.

identity column. A column that provides a way for DB2 to automatically generate a numeric value for each row. The generated values are unique if cycling is not used. Identity columns are defined with the AS IDENTITY clause. Uniqueness of values can be ensured by defining a unique index that contains only the identity column. A table can have no more than one identity column.

IFCID. Instrumentation facility component identifier.

IFI. Instrumentation facility interface.

IFI call. An invocation of the instrumentation facility interface (IFI) by means of one of its defined functions.

IFP. IMS Fast Path.

image copy. An exact reproduction of all or part of a table space. DB2 provides utility programs to make full image copies (to copy the entire table space) or incremental image copies (to copy only those pages that have been modified since the last image copy).

implied forget. In the presumed-abort protocol, an implied response of forget to the second-phase committed request from the coordinator. The response is implied when the participant responds to any subsequent request from the coordinator.

IMS. Information Management System.
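The identity column entry above describes a column whose numeric value DB2 generates for each inserted row. As a rough analogue (an assumption, not DB2 behavior), SQLite's INTEGER PRIMARY KEY also generates a unique value per row; the orders table is invented for the demo.

```python
import sqlite3

# Identity-column analogue: DB2 defines such columns with AS IDENTITY;
# SQLite's INTEGER PRIMARY KEY similarly generates a unique numeric value
# for each inserted row when the column is omitted from the INSERT.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
conn.execute("INSERT INTO orders (item) VALUES ('widget')")
conn.execute("INSERT INTO orders (item) VALUES ('gadget')")

ids = [r[0] for r in conn.execute("SELECT id FROM orders ORDER BY id")]
print(ids)   # [1, 2] -- one generated value per row
```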
IMS attachment facility. A DB2 subcomponent that uses z/OS subsystem interface (SSI) protocols and cross-memory linkage to process requests from IMS to DB2 and to coordinate resource commitment.

IMS DB. Information Management System Database.

IMS TM. Information Management System Transaction Manager.

in-abort. A status of a unit of recovery. If DB2 fails after a unit of recovery begins to be rolled back, but before the process is completed, DB2 continues to back out the changes during restart.

in-commit. A status of a unit of recovery. If DB2 fails after beginning its phase 2 commit processing, it "knows," when restarted, that changes made to data are consistent. Such units of recovery are termed in-commit.

independent. An object (row, table, or table space) that is neither a parent nor a dependent of another object.

index. A set of pointers that are logically ordered by the values of a key. Indexes can provide faster access to data and can enforce uniqueness on the rows in a table.

index-controlled partitioning. A type of partitioning in which partition boundaries for a partitioned table are controlled by values that are specified on the CREATE INDEX statement. Partition limits are saved in the LIMITKEY column of the SYSIBM.SYSINDEXPART catalog table.

index key. The set of columns in a table that is used to determine the order of index entries.

index partition. A VSAM data set that is contained within a partitioning index space.

index space. A page set that is used to store the entries of one index.

indicator column. A 4-byte value that is stored in a base table in place of a LOB column.

indicator variable. A variable that is used to represent the null value in an application program. If the value for the selected column is null, a negative value is placed in the indicator variable.

indoubt. A status of a unit of recovery. If DB2 fails after it has finished its phase 1 commit processing and before it has started phase 2, only the commit coordinator knows if an individual unit of recovery is to be committed or rolled back. At emergency restart, if DB2 lacks the information it needs to make this decision, the status of the unit of recovery is indoubt until DB2 obtains this information from the coordinator. More than one unit of recovery can be indoubt at restart.

indoubt resolution. The process of resolving the status of an indoubt logical unit of work to either the committed or the rollback state.

inflight. A status of a unit of recovery. If DB2 fails before its unit of recovery completes phase 1 of the commit process, it merely backs out the updates of its unit of recovery at restart. These units of recovery are termed inflight.

inheritance. The passing downstream of class resources or attributes from a parent class in the class hierarchy to a child class.

initialization file. For DB2 ODBC applications, a file containing values that can be set to adjust the performance of the database manager.

inline copy. A copy that is produced by the LOAD or REORG utility. The data set that the inline copy produces is logically equivalent to a full image copy that is produced by running the COPY utility with read-only access (SHRLEVEL REFERENCE).

inner join. The result of a join operation that includes only the matched rows of both tables that are being joined. See also join.

inoperative package. A package that cannot be used because one or more user-defined functions or procedures that the package depends on were dropped. Such a package must be explicitly rebound. Contrast with invalid package.

insensitive cursor. A cursor that is not sensitive to inserts, updates, or deletes that are made to the underlying rows of a result table after the result table has been materialized.

insert trigger. A trigger that is defined with the triggering SQL operation INSERT.

install. The process of preparing a DB2 subsystem to operate as a z/OS subsystem.

installation verification scenario. A sequence of operations that exercises the main DB2 functions and tests whether DB2 was correctly installed.

instrumentation facility component identifier (IFCID). A value that names and identifies a trace record of an event that can be traced. As a parameter on the START TRACE and MODIFY TRACE commands, it specifies that the corresponding event is to be traced.

instrumentation facility interface (IFI). A programming interface that enables programs to obtain online trace data about DB2, to submit DB2 commands, and to pass data to DB2.
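The inner join entry above says only matched rows of both tables appear in the result. A small sketch, using SQLite through Python's sqlite3 module as a stand-in for DB2 (tables and values invented for the demo):

```python
import sqlite3

# Inner join: only the matched rows of both tables appear in the result.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, deptno INTEGER)")
conn.execute("CREATE TABLE dept (deptno INTEGER, dname TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)", [(1, 10), (2, 99)])
conn.execute("INSERT INTO dept VALUES (10, 'SALES')")

# deptno 99 has no matching row in dept, so employee 2 is not returned.
rows = conn.execute(
    "SELECT e.empno, d.dname FROM emp e JOIN dept d ON e.deptno = d.deptno"
).fetchall()
print(rows)   # [(1, 'SALES')]
```

A left outer join over the same data would additionally preserve the unmatched emp row, with nulls for the dept columns.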
Interactive System Productivity Facility (ISPF). An IBM licensed program that provides interactive dialog services in a z/OS environment.

inter-DB2 R/W interest. A property of data in a table space, index, or partition that has been opened by more than one member of a data sharing group and that has been opened for writing by at least one of those members.

intermediate database server. The target of a request from a local application or a remote application requester that is forwarded to another database server. In the DB2 environment, the remote request is forwarded transparently to another database server if the object that is referenced by a three-part name does not reference the local location.

internationalization. The support for an encoding scheme that is able to represent the code points of characters from many different geographies and languages. To support all geographies, the Unicode standard requires more than 1 byte to represent a single character. See also Unicode.

internal resource lock manager (IRLM). A z/OS subsystem that DB2 uses to control communication and database locking.

International Organization for Standardization. An international body charged with creating standards to facilitate the exchange of goods and services as well as cooperation in intellectual, scientific, technological, and economic activity.

invalid package. A package that depends on an object (other than a user-defined function) that is dropped. Such a package is implicitly rebound on invocation. Contrast with inoperative package.

invariant character set. (1) A character set, such as the syntactic character set, whose code point assignments do not change from code page to code page. (2) A minimum set of characters that is available as part of all character sets.

ISO. International Organization for Standardization.

isolation level. The degree to which a unit of work is isolated from the updating operations of other units of work. See also cursor stability, read stability, repeatable read, and uncommitted read.

ISPF. Interactive System Productivity Facility.

ISPF/PDF. Interactive System Productivity Facility/Program Development Facility.

iterator. In SQLJ, an object that contains the result set of a query. An iterator is equivalent to a cursor in other host languages.

iterator declaration clause. In SQLJ, a statement that generates an iterator declaration class. An iterator is an object of an iterator declaration class.

J

Japanese Industrial Standard. An encoding scheme that is used to process Japanese characters.

JAR. Java Archive.

Java Archive (JAR). A file format that is used for aggregating many files into a single file.

JCL. Job control language.

JDBC. A Sun Microsystems database application programming interface (API) for Java that allows programs to access database management systems by using callable SQL. JDBC does not require the use of an SQL preprocessor. In addition, JDBC provides an architecture that lets users add modules called database drivers, which link the application to their choice of database management systems at run time.

JES. Job Entry Subsystem.

JIS. Japanese Industrial Standard.

job control language (JCL). A control language that is used to identify a job to an operating system and to describe the job’s requirements.

Job Entry Subsystem (JES). An IBM licensed program that receives jobs into the system and processes all output data that is produced by the jobs.

join. A relational operation that allows retrieval of data from two or more tables based on matching column values. See also equijoin, full outer join, inner join, left outer join, outer join, and right outer join.

K

Kerberos. A network authentication protocol that is designed to provide strong authentication for client/server applications by using secret-key cryptography.

Kerberos ticket. A transparent application mechanism that transmits the identity of an initiating principal to its target. A simple ticket contains the principal’s identity, a session key, a timestamp, and other information, which is sealed using the target’s secret key.
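The iterator entry above notes that an SQLJ iterator is equivalent to a cursor: an object holding a query's result set that yields one row per fetch. Python's DB-API cursor (shown here with SQLite as a stand-in for DB2, with invented data) follows the same pattern:

```python
import sqlite3

# A cursor/iterator over a query's result set: each loop iteration
# fetches the next row, just as an SQLJ iterator's next() call would.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

cur = conn.execute("SELECT n FROM t ORDER BY n")   # opens the cursor
total = 0
for (n,) in cur:          # each iteration fetches the next row
    total += n
print(total)   # 6
```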
key. A column or an ordered collection of columns that is identified in the description of a table, index, or referential constraint. The same column can be part of more than one key.

key-sequenced data set (KSDS). A VSAM file or data set whose records are loaded in key sequence and controlled by an index.

keyword. In SQL, a name that identifies an option that is used in an SQL statement.

KSDS. Key-sequenced data set.

L

labeled duration. A number that represents a duration of years, months, days, hours, minutes, seconds, or microseconds.

latch. A DB2 internal mechanism for controlling concurrent events or the use of system resources.

LCID. Log control interval definition.

LDS. Linear data set.

leaf page. A page that contains pairs of keys and RIDs and that points to actual data. Contrast with nonleaf page.

left outer join. The result of a join operation that includes the matched rows of both tables that are being joined, and that preserves the unmatched rows of the first table. See also join.

limit key. The highest value of the index key for a partition.

linear data set (LDS). A VSAM data set that contains data but no control information. A linear data set can be accessed as a byte-addressable string in virtual storage.

linkage editor. A computer program for creating load modules from one or more object modules or load modules by resolving cross references among the modules and, if necessary, adjusting addresses.

link-edit. The action of creating a loadable computer program using a linkage editor.

list. A type of object, which DB2 utilities can process, that identifies multiple table spaces, multiple index spaces, or both. A list is defined with the LISTDEF utility control statement.

list structure. A coupling facility structure that lets data be shared and manipulated as elements of a queue.

LLE. Load list element.

L-lock. Logical lock.

load list element. A z/OS control block that controls the loading and deleting of a particular load module based on entry point names.

load module. A program unit that is suitable for loading into main storage for execution. The output of a linkage editor.

LOB table space. A table space in an auxiliary table that contains all the data for a particular LOB column in the related base table.

local. A way of referring to any object that the local DB2 subsystem maintains. A local table, for example, is a table that is maintained by the local DB2 subsystem. Contrast with remote.

locale. The definition of a subset of a user’s environment that combines a CCSID and characters that are defined for a specific language and country.

local lock. A lock that provides intra-DB2 concurrency control, but not inter-DB2 concurrency control; that is, its scope is a single DB2.

local subsystem. The unique relational DBMS to which the user or application program is directly connected (in the case of DB2, by one of the DB2 attachment facilities).

location. The unique name of a database server. An application uses the location name to access a DB2 database server. A database alias can be used to override the location name when accessing a remote server.

location alias. Another name by which a database server identifies itself in the network. Applications can use this name to access a DB2 database server.
lock. A means of controlling concurrent events or access to data. DB2 locking is performed by the IRLM.

lock duration. The interval over which a DB2 lock is held.

lock escalation. The promotion of a lock from a row, page, or LOB lock to a table space lock because the number of page locks that are concurrently held on a given resource exceeds a preset limit.

locking. The process by which the integrity of data is ensured. Locking prevents concurrent users from accessing inconsistent data.

lock mode. A representation for the type of access that concurrently running programs can have to a resource that a DB2 lock is holding.

lock object. The resource that is controlled by a DB2 lock.

lock promotion. The process of changing the size or mode of a DB2 lock to a higher, more restrictive level.

lock size. The amount of data that is controlled by a DB2 lock on table data; the value can be a row, a page, a LOB, a partition, a table, or a table space.

lock structure. A coupling facility data structure that is composed of a series of lock entries to support shared and exclusive locking for logical resources.

log. A collection of records that describe the events that occur during DB2 execution and that indicate their sequence. The information thus recorded is used for recovery in the event of a failure during DB2 execution.

log control interval definition. A suffix of the physical log record that tells how record segments are placed in the physical control interval.

logical claim. A claim on a logical partition of a nonpartitioning index.

logical data modeling. The process of documenting the comprehensive business information requirements in an accurate and consistent format. Data modeling is the first task of designing a database.

logical drain. A drain on a logical partition of a nonpartitioning index.

logical index partition. The set of all keys that reference the same data partition.

logical lock (L-lock). The lock type that transactions use to control intra- and inter-DB2 data concurrency between transactions. Contrast with physical lock (P-lock).

logically complete. A state in which the concurrent copy process is finished with the initialization of the target objects that are being copied. The target objects are available for update.

logical page list (LPL). A list of pages that are in error and that cannot be referenced by applications until the pages are recovered. The page is in logical error because the actual media (coupling facility or disk) might not contain any errors. Usually a connection to the media has been lost.

logical partition. A set of key or RID pairs in a nonpartitioning index that are associated with a particular partition.

logical recovery pending (LRECP). The state in which the data and the index keys that reference the data are inconsistent.

logical unit (LU). An access point through which an application program accesses the SNA network in order to communicate with another application program.

logical unit of work (LUW). The processing that a program performs between synchronization points.

logical unit of work identifier (LUWID). A name that uniquely identifies a thread within a network. This name consists of a fully-qualified LU network name, an LUW instance number, and an LUW sequence number.

log initialization. The first phase of restart processing during which DB2 attempts to locate the current end of the log.

log record header (LRH). A prefix, in every logical record, that contains control information.

log record sequence number (LRSN). A unique identifier for a log record that is associated with a data sharing member. DB2 uses the LRSN for recovery in the data sharing environment.

log truncation. A process by which an explicit starting RBA is established. This RBA is the point at which the next byte of log data is to be written.

LPL. Logical page list.

LRECP. Logical recovery pending.

LRH. Log record header.

LRSN. Log record sequence number.

LU. Logical unit.

LU name. Logical unit name, which is the name by which VTAM refers to a node in a network. Contrast with location name.
LUWID. Logical unit of work identifier.

M

mapping table. A table that the REORG utility uses to map the associations of the RIDs of data records in the original copy and in the shadow copy. This table is created by the user.

mass delete. The deletion of all rows of a table.

master terminal. The IMS logical terminal that has complete control of IMS resources during online operations.

master terminal operator (MTO). See master terminal.

materialize. (1) The process of putting rows from a view or nested table expression into a work file for additional processing by a query. (2) The placement of a LOB value into contiguous storage. Because LOB values can be very large, DB2 avoids materializing LOB data until doing so becomes absolutely necessary.

materialized query table. A table that is used to contain information that is derived and can be summarized from one or more source tables.

MB. Megabyte (1 048 576 bytes).

MBCS. Multibyte character set. UTF-8 is an example of an MBCS. Characters in UTF-8 can range from 1 to 4 bytes in DB2.

member name. The z/OS XCF identifier for a particular DB2 subsystem in a data sharing group.

menu. A displayed list of available functions for selection by the operator. A menu is sometimes called a menu panel.

metalanguage. A language that is used to create other specialized languages.

migration. The process of converting a subsystem with a previous release of DB2 to an updated or current release. In this process, you can acquire the functions of the updated or current release without losing the data that you created on the previous release.

mixed data string. A character string that can contain both single-byte and double-byte characters.

MLPA. Modified link pack area.

MODEENT. A VTAM macro instruction that associates a logon mode name with a set of parameters representing session protocols. A set of MODEENT macro instructions defines a logon mode table.

modeling database. A DB2 database that you create on your workstation that you use to model a DB2 UDB for z/OS subsystem, which can then be evaluated by the Index Advisor.

mode name. A VTAM name for the collection of physical and logical characteristics and attributes of a session.

modify locks. An L-lock or P-lock with a MODIFY attribute. A list of these active locks is kept at all times in the coupling facility lock structure. If the requesting DB2 subsystem fails, that DB2 subsystem’s modify locks are converted to retained locks.

MPP. Message processing program (in IMS).

MTO. Master terminal operator.

multibyte character set (MBCS). A character set that represents single characters with more than a single byte. Contrast with single-byte character set and double-byte character set. See also Unicode.

multidimensional analysis. The process of assessing and evaluating an enterprise on more than one level.

Multiple Virtual Storage. An element of the z/OS operating system. This element is also called the Base Control Program (BCP).

multisite update. Distributed relational database processing in which data is updated in more than one location within a single unit of work.

multithreading. Multiple TCBs that are executing one copy of DB2 ODBC code concurrently (sharing a processor) or in parallel (on separate central processors).

must-complete. A state during DB2 processing in which the entire operation must be completed to maintain data integrity.

mutex. Pthread mutual exclusion; a lock. A Pthread mutex variable is used as a locking mechanism to allow serialization of critical sections of code by temporarily blocking the execution of all but one thread.

MVS. See Multiple Virtual Storage.

N

negotiable lock. A lock whose mode can be downgraded, by agreement among contending users, to be compatible to all. A physical lock is an example of a negotiable lock.

nested table expression. A fullselect in a FROM clause (surrounded by parentheses).
Glossary 1059
NRE. Network recovery element.

NUL. The null character ('\0'), which is represented by the value X'00'. In C, this character denotes the end of a string.

null. A special value that indicates the absence of information.

NULLIF. A scalar function that evaluates two passed expressions, returning either NULL if the arguments are equal or the value of the first argument if they are not.

null-terminated host variable. A varying-length host variable in which the end of the data is indicated by a null terminator.

null terminator. In C, the value that indicates the end of a string. For EBCDIC, ASCII, and Unicode UTF-8 strings, the null terminator is a single-byte value (X'00'). For Unicode UCS-2 (wide) strings, the null terminator is a double-byte value (X'0000').

originating task. In a parallel group, the primary agent that receives data from other execution units (referred to as parallel tasks) that are executing portions of the query in parallel.

OS/390. Operating System/390.

OS/390 OpenEdition® Distributed Computing Environment (OS/390 OE DCE). A set of technologies that are provided by the Open Software Foundation to implement distributed computing.

outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves some or all of the unmatched rows of the tables that are being joined. See also join.

overloaded function. A function name for which multiple function instances exist.
| partitioning index. An index in which the leftmost columns are the partitioning columns of the table. The index can be partitioned or nonpartitioned.

| partition pruning. The removal from consideration of inapplicable partitions through setting up predicates in a query on a partitioned table to access only certain partitions to satisfy the query.

partner logical unit. An access point in the SNA network that is connected to the local DB2 subsystem by way of a VTAM conversation.

path. See SQL path.

PCT. Program control table (in CICS).

PDS. Partitioned data set.

piece. A data set of a nonpartitioned page set.

physical claim. A claim on an entire nonpartitioning index.

physical consistency. The state of a page that is not in a partially changed state.

physical drain. A drain on an entire nonpartitioning index.

physical lock (P-lock). A type of lock that DB2 acquires to provide consistency of data that is cached in different DB2 subsystems. Physical locks are used only in data sharing environments. Contrast with logical lock (L-lock).

physical lock contention. Conflicting states of the requesters for a physical lock. See also negotiable lock.

physically complete. The state in which the concurrent copy process is completed and the output data set has been created.

plan. See application plan.

plan allocation. The process of allocating DB2 resources to a plan in preparation for execution.

plan member. The bound copy of a DBRM that is identified in the member clause.

plan name. The name of an application plan.

plan segmentation. The dividing of each plan into sections. When a section is needed, it is independently brought into the EDM pool.

P-lock. Physical lock.

PLT. Program list table (in CICS).

point of consistency. A time when all recoverable data that an application accesses is consistent with other data. The term point of consistency is synonymous with sync point or commit point.

policy. See CFRM policy.

Portable Operating System Interface (POSIX). The IEEE operating system interface standard, which defines the Pthread standard of threading. See also Pthread.

POSIX. Portable Operating System Interface.

postponed abort UR. A unit of recovery that was inflight or in-abort, was interrupted by system failure or cancellation, and did not complete backout during restart.

PPT. (1) Processing program table (in CICS). (2) Program properties table (in z/OS).

precision. In SQL, the total number of digits in a decimal number (called the size in the C language). In the C language, the number of digits to the right of the decimal point (called the scale in SQL). The DB2 library uses the SQL terms.

precompilation. A processing of application programs containing SQL statements that takes place before compilation. SQL statements are replaced with statements that are recognized by the host language compiler. Output from this precompilation includes source code that can be submitted to the compiler and the database request module (DBRM) that is input to the bind process.

predicate. An element of a search condition that expresses or implies a comparison operation.

prefix. A code at the beginning of a message or record.

preformat. The process of preparing a VSAM ESDS for DB2 use, by writing specific data patterns.

prepare. The first phase of a two-phase commit process in which all participants are requested to prepare for commit.

prepared SQL statement. A named object that is the executable form of an SQL statement that has been processed by the PREPARE statement.

presumed-abort. An optimization of the presumed-nothing two-phase commit protocol that reduces the number of recovery log records, the duration of state maintenance, and the number of messages between coordinator and participant. The optimization also modifies the indoubt resolution responsibility.

presumed-nothing. The standard two-phase commit protocol that defines coordinator and participant responsibilities, relative to logical unit of work states, recovery logging, and indoubt resolution.

primary authorization ID. The authorization ID that is used to identify the application process to DB2.
primary group buffer pool. For a duplexed group buffer pool, the structure that is used to maintain the coherency of cached data. This structure is used for page registration and cross-invalidation. The z/OS equivalent is old structure. Compare with secondary group buffer pool.

primary index. An index that enforces the uniqueness of a primary key.

primary key. In a relational database, a unique, nonnull key that is part of the definition of a table. A table cannot be defined as a parent unless it has a unique key or primary key.

principal. An entity that can communicate securely with another entity. In Kerberos, principals are represented as entries in the Kerberos registry database and include users, servers, computers, and others.

principal name. The name by which a principal is known to the DCE security services.

private protocol connection. A DB2 private connection of the application process. See also private connection.

privilege. The capability of performing a specific function, sometimes on a specific object. The types of privileges are:
explicit privileges, which have names and are held as the result of SQL GRANT and REVOKE statements. For example, the SELECT privilege.
implicit privileges, which accompany the ownership of an object, such as the privilege to drop a synonym that one owns, or the holding of an authority, such as the privilege of SYSADM authority to terminate any utility job.

privilege set. For the installation SYSADM ID, the set of all possible privileges. For any other authorization ID, the set of all privileges that are recorded for that ID in the DB2 catalog.

process. In DB2, the unit to which DB2 allocates resources and locks. Sometimes called an application process, a process involves the execution of one or more programs. The execution of an SQL statement is always associated with some process. The means of initiating and terminating a process are dependent on the environment.

program. A single, compilable collection of executable statements in a programming language.

program temporary fix (PTF). A solution or bypass of a problem that is diagnosed as a result of a defect in a current unaltered release of a licensed program. An authorized program analysis report (APAR) fix is corrective service for an existing problem. A PTF is preventive service for problems that might be encountered by other users of the product. A PTF is temporary, because a permanent fix is usually not incorporated into the product until its next release.

protected conversation. A VTAM conversation that supports two-phase commit flows.

PSRCP. Page set recovery pending.

PTF. Program temporary fix.

Pthread. The POSIX threading standard model for splitting an application into subtasks. The Pthread standard includes functions for creating threads, terminating threads, synchronizing threads through locking, and other thread control facilities.

query. A component of certain SQL statements that specifies a result table.

query block. The part of a query that is represented by one of the FROM clauses. Each FROM clause can have multiple query blocks, depending on DB2's internal processing of the query.

query CP parallelism. Parallel execution of a single query, which is accomplished by using multiple tasks. See also Sysplex query parallelism.

query I/O parallelism. Parallel access of data, which is accomplished by triggering multiple I/O requests within a single query.

queued sequential access method (QSAM). An extended version of the basic sequential access method (BSAM). When this method is used, a queue of data blocks is formed. Input data blocks await processing, and output data blocks await transfer to auxiliary storage or to an output device.

quiesce point. A point at which data is consistent as a result of running the DB2 QUIESCE utility.

quiesced member state. A state of a member of a data sharing group. An active member becomes quiesced when a STOP DB2 command takes effect without a failure. If the member's task, address space, or z/OS system fails before the command takes effect, the member state is failed.
RCT. Resource control table (in CICS attachment facility).

RDB. Relational database.

RDBMS. Relational database management system.

RDBNAM. Relational database name.

RDF. Record definition field.

read stability (RS). An isolation level that is similar to repeatable read but does not completely isolate an application process from all other concurrently executing application processes. Under level RS, an application that issues the same query more than once might read additional rows that were inserted and committed by a concurrently executing application process.

rebind. The creation of a new application plan for an application program that has been bound previously. If, for example, you have added an index for a table that your application accesses, you must rebind the application in order to take advantage of that index.

rebuild. The process of reallocating a coupling facility structure. For the shared communications area (SCA) and lock structure, the structure is repopulated; for the group buffer pool, changed pages are usually cast out to disk, and the new structure is populated only with changed pages that were not successfully cast out.

RECFM. Record format.

record. The storage representation of a row or other data.

record identifier (RID). A unique identifier that DB2 uses internally to identify a row of data in a table. Compare with row ID.

| record identifier (RID) pool. An area of main storage that is used for sorting record identifiers during list-prefetch processing.

record length. The sum of the length of all the columns in a table, which is the length of the data as it is physically stored in the database. Records can be fixed length or varying length, depending on how the columns are defined. If all columns are fixed-length columns, the record is a fixed-length record. If one or more columns are varying-length columns, the record is a varying-length record.

recovery log. A collection of records that describes the events that occur during DB2 execution and indicates their sequence. The recorded information is used for recovery in the event of a failure during DB2 execution.

recovery manager. (1) A subcomponent that supplies coordination services that control the interaction of DB2 resource managers during commit, abort, checkpoint, and restart processes. The recovery manager also supports the recovery mechanisms of other subsystems (for example, IMS) by acting as a participant in the other subsystem's process for protecting data that has reached a point of consistency. (2) A coordinator or a participant (or both), in the execution of a two-phase commit, that can access a recovery log that maintains the state of the logical unit of work and names the immediate upstream coordinator and downstream participants.

recovery pending (RECP). A condition that prevents SQL access to a table space that needs to be recovered.

recovery token. An identifier for an element that is used in recovery (for example, NID or URID).

RECP. Recovery pending.

redo. A state of a unit of recovery that indicates that changes are to be reapplied to the disk media to ensure data integrity.

reentrant. Executable code that can reside in storage as one shared copy for all threads. Reentrant code is not self-modifying and provides separate storage areas for each thread. Reentrancy is a compiler and operating system concept, and reentrancy alone is not enough to guarantee logically consistent results when multithreading. See also threadsafe.

referential constraint. The requirement that nonnull values of a designated foreign key are valid only if they equal values of the primary key of a designated table.

referential integrity. The state of a database in which all values of all foreign keys are valid. Maintaining referential integrity requires the enforcement of referential constraints on all operations that change the data in a table on which the referential constraints are defined.
referential structure. A set of tables and relationships that includes at least one table and, for every table in the set, all the relationships in which that table participates and all the tables to which it is related.

| refresh age. The time duration between the current time and the time during which a materialized query table was last refreshed.

registry. See registry database.

registry database. A database of security information about principals, groups, organizations, accounts, and security policies.

relational database (RDB). A database that can be perceived as a set of tables and manipulated in accordance with the relational model of data.

relational database management system (RDBMS). A collection of hardware and software that organizes and provides access to a relational database.

relational database name (RDBNAM). A unique identifier for an RDBMS within a network. In DB2, this must be the value in the LOCATION column of table SYSIBM.LOCATIONS in the CDB. DB2 publications refer to the name of another RDBMS as a LOCATION value or a location name.

relationship. A defined connection between the rows of a table or the rows of two tables. A relationship is the internal representation of a referential constraint.

relative byte address (RBA). The offset of a data record or control interval from the beginning of the storage space that is allocated to the data set or file to which it belongs.

remigration. The process of returning to a current release of DB2 following a fallback to a previous release. This procedure constitutes another migration process.

remote. Any object that is maintained by a remote DB2 subsystem (that is, by a DB2 subsystem other than the local one). A remote view, for example, is a view that is maintained by a remote DB2 subsystem. Contrast with local.

remote attach request. A request by a remote location to attach to the local DB2 subsystem. Specifically, the request that is sent is an SNA Function Management Header 5.

remote subsystem. Any relational DBMS, except the local subsystem, with which the user or application can communicate. The subsystem need not be remote in any physical sense, and might even operate on the same processor under the same z/OS system.

reoptimization. The DB2 process of reconsidering the access path of an SQL statement at run time; during reoptimization, DB2 uses the values of host variables, parameter markers, or special registers.

REORG pending (REORP). A condition that restricts SQL access and most utility access to an object that must be reorganized.

REORP. REORG pending.

repeatable read (RR). The isolation level that provides maximum protection from other executing application programs. When an application program executes with repeatable read protection, rows that the program references cannot be changed by other programs until the program reaches a commit point.

repeating group. A situation in which an entity includes multiple attributes that are inherently the same. The presence of a repeating group violates the requirement of first normal form. In an entity that satisfies the requirement of first normal form, each attribute is independent and unique in its meaning and its name. See also normalization.

replay detection mechanism. A method that allows a principal to detect whether a request is a valid request from a source that can be trusted or whether an untrustworthy entity has captured information from a previous exchange and is replaying the information exchange to gain access to the principal.

request commit. The vote that is submitted to the prepare phase if the participant has modified data and is prepared to commit or roll back.

requester. The source of a request to access data at a remote server. In the DB2 environment, the requester function is provided by the distributed data facility.

resource. The object of a lock or claim, which could be a table space, an index space, a data partition, an index partition, or a logical partition.

resource allocation. The part of plan allocation that deals specifically with the database resources.

resource control table (RCT). A construct of the CICS attachment facility, created by site-provided macro parameters, that defines authorization and access attributes for transactions or transaction groups.

resource definition online. A CICS feature that you use to define CICS resources online without assembling tables.

resource limit facility (RLF). A portion of DB2 code that prevents dynamic manipulative SQL statements from exceeding specified time limits. The resource limit facility is sometimes called the governor.

resource limit specification table (RLST). A site-defined table that specifies the limits to be enforced by the resource limit facility.
resource manager. (1) A function that is responsible for managing a particular resource and that guarantees the consistency of all updates made to recoverable resources within a logical unit of work. The resource that is being managed can be physical (for example, disk or main storage) or logical (for example, a particular type of system service). (2) A participant, in the execution of a two-phase commit, that has recoverable resources that could have been modified. The resource manager has access to a recovery log so that it can commit or roll back the effects of the logical unit of work to the recoverable resources.

restart pending (RESTP). A restrictive state of a page set or partition that indicates that restart (backout) work needs to be performed on the object. All access to the page set or partition is denied except for access by the:
v RECOVER POSTPONED command
v Automatic online backout (which DB2 invokes after restart if the system parameter LBACKOUT=AUTO)

RESTP. Restart pending.

result set. The set of rows that a stored procedure returns to a client application.

result set locator. A 4-byte value that DB2 uses to uniquely identify a query result set that a stored procedure returns.

result table. The set of rows that are specified by a SELECT statement.

retained lock. A MODIFY lock that a DB2 subsystem was holding at the time of a subsystem failure. The lock is retained in the coupling facility lock structure across a DB2 failure.

RID. Record identifier.

row. The horizontal component of a table. A row consists of a sequence of values, one for each column of the table.

ROWID. Row identifier.

row identifier (ROWID). A value that uniquely identifies a row. This value is stored with the row and never changes.

row lock. A lock on a single row of data.

| rowset. A set of rows for which a cursor position is established.

| rowset cursor. A cursor that is defined so that one or more rows can be returned as a rowset for a single FETCH statement, and the cursor is positioned on the set of rows that is fetched.

| rowset-positioned access. The ability to retrieve multiple rows from a single FETCH statement.

| row-positioned access. The ability to retrieve a single row from a single FETCH statement.

row trigger. A trigger that is defined with the trigger granularity FOR EACH ROW.

RRE. Residual recovery entry (in IMS).

RRSAF. Recoverable Resource Manager Services attachment facility.

RS. Read stability.

RTT. Resource translation table.

RURE. Restart URE.
| schema. (1) The organization or structure of a database. (2) A logical grouping for user-defined functions, distinct types, triggers, and stored procedures. When an object of one of these types is created, it is assigned to one schema, which is determined by the name of the object. For example, the following statement creates a distinct type T in schema C:

CREATE DISTINCT TYPE C.T ...

scrollability. The ability to use a cursor to fetch in either a forward or backward direction. The FETCH statement supports multiple fetch orientations to indicate the new position of the cursor. See also fetch orientation.

scrollable cursor. A cursor that can be moved in both a forward and a backward direction.

SDWA. System diagnostic work area.

search condition. A criterion for selecting rows from a table. A search condition consists of one or more predicates.

secondary authorization ID. An authorization ID that has been associated with a primary authorization ID by an authorization exit routine.

secondary group buffer pool. For a duplexed group buffer pool, the structure that is used to back up changed pages that are written to the primary group buffer pool. No page registration or cross-invalidation occurs using the secondary group buffer pool. The z/OS equivalent is new structure.

| secondary index. A nonpartitioning index on a partitioned table.

section. The segment of a plan or package that contains the executable structures for a single SQL statement. For most SQL statements, one section in the plan exists for each SQL statement in the source program. However, for cursor-related statements, the DECLARE, OPEN, FETCH, and CLOSE statements reference the same section because they each refer to the SELECT statement that is named in the DECLARE CURSOR statement. SQL statements such as COMMIT, ROLLBACK, and some SET statements do not use a section.

segment. A group of pages that holds rows of a single table. See also segmented table space.

segmented table space. A table space that is divided into equal-sized groups of pages called segments. Segments are assigned to tables so that rows of different tables are never stored in the same segment.

self-referencing constraint. A referential constraint that defines a relationship in which a table is a dependent of itself.

self-referencing table. A table with a self-referencing constraint.

| sensitive cursor. A cursor that is sensitive to changes that are made to the database after the result table has been materialized.

| sequence. A user-defined object that generates a sequence of numeric values according to user specifications.

sequential data set. A non-DB2 data set whose records are organized on the basis of their successive physical positions, such as on magnetic tape. Several of the DB2 database utilities require sequential data sets.

sequential prefetch. A mechanism that triggers consecutive asynchronous I/O operations. Pages are fetched before they are required, and several pages are read with a single I/O operation.

serial cursor. A cursor that can be moved only in a forward direction.

serialized profile. A Java object that contains SQL statements and descriptions of host variables. The SQLJ translator produces a serialized profile for each connection context.

server. The target of a request from a remote requester. In the DB2 environment, the server function is provided by the distributed data facility, which is used to access DB2 data from remote applications.

server-side programming. A method for adding DB2 data into dynamic Web pages.

service class. An eight-character identifier that is used by the z/OS Workload Manager to associate user performance goals with a particular DDF thread or stored procedure. A service class is also used to classify work on parallelism assistants.

service request block. A unit of work that is scheduled to execute in another address space.

session. A link between two nodes in a VTAM network.

session protocols. The available set of SNA communication requests and responses.

shared communications area (SCA). A coupling facility list structure that a DB2 data sharing group uses for inter-DB2 communication.

share lock. A lock that prevents concurrently executing application processes from changing data, but not from reading data. Contrast with exclusive lock.

shift-in character. A special control character (X'0F') that is used in EBCDIC systems to denote that the subsequent bytes represent SBCS characters. See also shift-out character.
shift-out character. A special control character (X'0E') that is used in EBCDIC systems to denote that the subsequent bytes, up to the next shift-in control character, represent DBCS characters. See also shift-in character.

sign-on. A request that is made on behalf of an individual CICS or IMS application process by an attachment facility to enable DB2 to verify that it is authorized to use DB2 resources.

simple page set. A nonpartitioned page set. A simple page set initially consists of a single data set (page set piece). If and when that data set is extended to 2 GB, another data set is created, and so on, up to a total of 32 data sets. DB2 considers the data sets to be a single contiguous linear address space containing a maximum of 64 GB. Data is stored in the next available location within this address space without regard to any partitioning scheme.

simple table space. A table space that is neither partitioned nor segmented.

single-byte character set (SBCS). A set of characters in which each character is represented by a single byte. Contrast with double-byte character set or multibyte character set.

single-precision floating point number. A 32-bit approximate representation of a real number.

size. In the C language, the total number of digits in a decimal number (called the precision in SQL). The DB2 library uses the SQL term.

SMF. System Management Facilities.

SMP/E. System Modification Program/Extended.

SNA. Systems Network Architecture.

SNA network. The part of a network that conforms to the formats and protocols of Systems Network Architecture (SNA).

socket. A callable TCP/IP programming interface that TCP/IP network applications use to communicate with remote TCP/IP partners.

sourced function. A function that is implemented by another built-in or user-defined function that is already known to the database manager. This function can be a scalar function or a column (aggregating) function; it returns a single value from a set of values (for example, MAX or AVG). Contrast with built-in function, external function, and SQL function.

source program. A set of host language statements and SQL statements that is processed by an SQL precompiler.

| source table. A table that can be a base table, a view, a table expression, or a user-defined table function.

source type. An existing type that DB2 uses to internally represent a distinct type.

space. A sequence of one or more blank characters.

special register. A storage area that DB2 defines for an application process to use for storing information that can be referenced in SQL statements. Examples of special registers are USER and CURRENT DATE.

specific function name. A particular user-defined function that is known to the database manager by its specific name. Many specific user-defined functions can have the same function name. When a user-defined function is defined to the database, every function is assigned a specific name that is unique within its schema. Either the user can provide this name, or a default name is used.

SPUFI. SQL Processor Using File Input.

SQL. Structured Query Language.

SQL authorization ID (SQL ID). The authorization ID that is used for checking dynamic SQL statements in some situations.

SQLCA. SQL communication area.

SQL communication area (SQLCA). A structure that is used to provide an application program with information about the execution of its SQL statements.

SQL connection. An association between an application process and a local or remote application server or database server.

SQL descriptor area (SQLDA). A structure that describes input variables, output variables, or the columns of a result table.

SQL escape character. The symbol that is used to enclose an SQL delimited identifier. This symbol is the double quotation mark ("). See also escape character.

SQL function. A user-defined function in which the CREATE FUNCTION statement contains the source code. The source code is a single SQL expression that evaluates to a single value. The SQL user-defined function can return only one parameter.

SQL ID. SQL authorization ID.

SQLJ. Structured Query Language (SQL) that is embedded in the Java programming language.

SQL path. An ordered list of schema names that are used in the resolution of unqualified references to user-defined functions, distinct types, and stored
procedures. In dynamic SQL, the current path is found in the CURRENT PATH special register. In static SQL, it is defined in the PATH bind option.

SQL procedure. A user-written program that can be invoked with the SQL CALL statement. Contrast with external procedure.

SQL processing conversation. Any conversation that requires access of DB2 data, either through an application or by dynamic query requests.

SQL Processor Using File Input (SPUFI). A facility of the TSO attachment subcomponent that enables the DB2I user to execute SQL statements without embedding them in an application program.

SQL return code. Either SQLCODE or SQLSTATE.

SQL routine. A user-defined function or stored procedure that is based on code that is written in SQL.

SQL statement coprocessor. An alternative to the DB2 precompiler that lets the user process SQL statements at compile time. The user invokes an SQL statement coprocessor by specifying a compiler option.

SQL string delimiter. A symbol that is used to enclose an SQL string constant. The SQL string delimiter is the apostrophe ('), except in COBOL applications, where the user assigns the symbol, which is either an apostrophe or a double quotation mark (").

SRB. Service request block.

SSI. Subsystem interface (in z/OS).

SSM. Subsystem member (in IMS).

stand-alone. An attribute of a program that means that it is capable of executing separately from DB2, without using DB2 services.

star join. A method of joining a dimension column of a

statement trigger. A trigger that is defined with the trigger granularity FOR EACH STATEMENT.

| static cursor. A named control structure that does not change the size of the result table or the order of its rows after an application opens the cursor. Contrast with dynamic cursor.

static SQL. SQL statements, embedded within a program, that are prepared during the program preparation process (before the program is executed). After being prepared, the SQL statement does not change (although values of host variables that are specified by the statement might change).

storage group. A named set of disks on which DB2 data can be stored.

stored procedure. A user-written application program that can be invoked through the use of the SQL CALL statement.

string. See character string or graphic string.

strong typing. A process that guarantees that only user-defined functions and operations that are defined on a distinct type can be applied to that type. For example, you cannot directly compare two currency types, such as Canadian dollars and U.S. dollars. But you can provide a user-defined function to convert one currency to the other and then do the comparison.

structure. (1) A name that refers collectively to different types of DB2 objects, such as tables, databases, views, indexes, and table spaces. (2) A construct that uses z/OS to map and manage storage on a coupling facility. See also cache structure, list structure, or lock structure.

Structured Query Language (SQL). A standardized language for defining and manipulating data in a relational database.

structure owner. In relation to group buffer pools, the
fact table to the key column of the corresponding
DB2 member that is responsible for the following
dimension table. See also join, dimension, and star
activities:
schema.
v Coordinating rebuild, checkpoint, and damage
star schema. The combination of a fact table (which assessment processing
contains most of the data) and a number of dimension v Monitoring the group buffer pool threshold and
tables. See also star join, dimension, and dimension notifying castout owners when the threshold has
table. been reached
statement handle. In DB2 ODBC, the data object that subcomponent. A group of closely related DB2
contains information about an SQL statement that is modules that work together to provide a general
managed by DB2 ODBC. This includes information such function.
as dynamic arguments, bindings for dynamic arguments
and columns, cursor information, result values, and subject table. The table for which a trigger is created.
status information. Each statement handle is associated When the defined triggering event occurs on this table,
with the connection handle. the trigger is activated.
statement string. For a dynamic SQL statement, the subpage. The unit into which a physical index page
character string form of the statement. can be divided.
Glossary 1069
subquery • table space
subquery. A SELECT statement within the WHERE or HAVING clause of another SQL statement; a nested SQL statement.

subselect. That form of a query that does not include an ORDER BY clause, an UPDATE clause, or UNION operators.

substitution character. A unique character that is substituted during character conversion for any characters in the source program that do not have a match in the target coding representation.

subsystem. A distinct instance of a relational database management system (RDBMS).

surrogate pair. A coded representation for a single character that consists of a sequence of two 16-bit code units, in which the first value of the pair is a high-surrogate code unit in the range U+D800 through U+DBFF, and the second value is a low-surrogate code unit in the range U+DC00 through U+DFFF. Surrogate pairs provide an extension mechanism for encoding 917 476 characters without requiring the use of 32-bit characters.

SVC dump. A dump that is issued when a z/OS or a DB2 functional recovery routine detects an error.

sync point. See commit point.

syncpoint tree. The tree of recovery managers and resource managers that are involved in a logical unit of work, starting with the recovery manager, that make the final commit decision.

synonym. In SQL, an alternative name for a table or view. Synonyms can be used to refer only to objects at the subsystem in which the synonym is defined.

syntactic character set. A set of 81 graphic characters that are registered in the IBM registry as character set 00640. This set was originally recommended to the programming language community to be used for syntactic purposes toward maximizing portability and interchangeability across systems and country boundaries. It is contained in most of the primary registered character sets, with a few exceptions. See also invariant character set.

Sysplex. See Parallel Sysplex.

Sysplex query parallelism. Parallel execution of a single query that is accomplished by using multiple tasks on more than one DB2 subsystem. See also query CP parallelism.

system administrator. The person at a computer installation who designs, controls, and manages the use of the computer system.

system agent. A work request that DB2 creates internally such as prefetch processing, deferred writes, and service tasks.

system conversation. The conversation that two DB2 subsystems must establish to process system messages before any distributed processing can begin.

system diagnostic work area (SDWA). The data that is recorded in a SYS1.LOGREC entry that describes a program or hardware error.

system-directed connection. A connection that a relational DBMS manages by processing SQL statements with three-part names.

System Modification Program/Extended (SMP/E). A z/OS tool for making software changes in programming systems (such as DB2) and for controlling those changes.

Systems Network Architecture (SNA). The description of the logical structure, formats, protocols, and operational sequences for transmitting information through and controlling the configuration and operation of networks.

SYS1.DUMPxx data set. A data set that contains a system dump (in z/OS).

SYS1.LOGREC. A service aid that contains important information about program and hardware errors (in z/OS).

T

table. A named data object consisting of a specific number of columns and some number of unordered rows. See also base table or temporary table.

| table-controlled partitioning. A type of partitioning in which partition boundaries for a partitioned table are controlled by values that are defined in the CREATE TABLE statement. Partition limits are saved in the LIMITKEY_INTERNAL column of the SYSIBM.SYSTABLEPART catalog table.

table function. A function that receives a set of arguments and returns a table to the SQL statement that references the function. A table function can be referenced only in the FROM clause of a subselect.

table locator. A mechanism that allows access to trigger transition tables in the FROM clause of SELECT statements, in the subselect of INSERT statements, or from within user-defined functions. A table locator is a fullword integer value that represents a transition table.

table space. A page set that is used to store the records in one or more tables.
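The subquery and subselect entries above can be illustrated with a short example against the DB2 sample employee table:

```sql
-- List employees who earn more than the average salary.
-- The parenthesized SELECT in the WHERE clause is a subquery;
-- it is also a subselect because it contains no ORDER BY clause,
-- UPDATE clause, or UNION operator.
SELECT EMPNO, LASTNAME, SALARY
  FROM DSN8810.EMP
  WHERE SALARY >
        (SELECT AVG(SALARY)
           FROM DSN8810.EMP);
```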
table space set. A set of table spaces and partitions that should be recovered together for one of these reasons:
v Each of them contains a table that is a parent or descendent of a table in one of the others.
v The set contains a base table and associated auxiliary tables.
A table space set can contain both types of relationships.

task control block (TCB). A z/OS control block that is used to communicate information about tasks within an address space that are connected to DB2. See also address space connection.

TB. Terabyte (1 099 511 627 776 bytes).

TCB. Task control block (in z/OS).

TCP/IP. A network communication protocol that computer systems use to exchange information across telecommunication links.

TCP/IP port. A 2-byte value that identifies an end user or a TCP/IP network application within a TCP/IP host.

template. A DB2 utilities output data set descriptor that is used for dynamic allocation. A template is defined by the TEMPLATE utility control statement.

temporary table. A table that holds temporary data. Temporary tables are useful for holding or sorting intermediate results from queries that contain a large number of rows. The two types of temporary table, which are created by different SQL statements, are the created temporary table and the declared temporary table. Contrast with result table. See also created temporary table and declared temporary table.

Terminal Monitor Program (TMP). A program that provides an interface between terminal users and command processors and has access to many system services (in z/OS).

thread. The DB2 structure that describes an application's connection, traces its progress, processes resource functions, and delimits its accessibility to DB2 resources and services. Most DB2 functions execute under a thread structure. See also allied thread and database access thread.

threadsafe. A characteristic of code that allows multithreading both by providing private storage areas for each thread, and by properly serializing shared (global) storage areas.

three-part name. The full name of a table, view, or alias. It consists of a location name, authorization ID, and an object name, separated by a period.

time. A three-part value that designates a time of day in hours, minutes, and seconds.

time duration. A decimal integer that represents a number of hours, minutes, and seconds.

timeout. Abnormal termination of either the DB2 subsystem or of an application because of the unavailability of resources. Installation specifications are set to determine both the amount of time DB2 is to wait for IRLM services after starting, and the amount of time IRLM is to wait if a resource that an application requests is unavailable. If either of these time specifications is exceeded, a timeout is declared.

Time-Sharing Option (TSO). An option in MVS that provides interactive time sharing from remote terminals.

timestamp. A seven-part value that consists of a date and time. The timestamp is expressed in years, months, days, hours, minutes, seconds, and microseconds.

TMP. Terminal Monitor Program.

to-do. A state of a unit of recovery that indicates that the unit of recovery's changes to recoverable DB2 resources are indoubt and must either be applied to the disk media or backed out, as determined by the commit coordinator.

trace. A DB2 facility that provides the ability to monitor and collect DB2 monitoring, auditing, performance, accounting, statistics, and serviceability (global) data.

transaction lock. A lock that is used to control concurrent execution of SQL statements.

transaction program name. In SNA LU 6.2 conversations, the name of the program at the remote logical unit that is to be the other half of the conversation.

| transient XML data type. A data type for XML values that exists only during query processing.

transition table. A temporary table that contains all the affected rows of the subject table in their state before or after the triggering event occurs. Triggered SQL statements in the trigger definition can reference the table of changed rows in the old state or the new state.

transition variable. A variable that contains a column value of the affected row of the subject table in its state before or after the triggering event occurs. Triggered SQL statements in the trigger definition can reference the set of old values or the set of new values.

| tree structure. A data structure that represents entities in nodes, with at most one parent node for each node, and with only one root node.

trigger. A set of SQL statements that are stored in a DB2 database and executed when a certain event occurs in a DB2 table.
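The transition variable and trigger entries above fit together as in this sketch. The trigger name and the 20 percent threshold are invented for illustration; the table is the DB2 sample employee table.

```sql
-- A row trigger on the sample employee table. OLD_ROW and NEW_ROW
-- are transition variables that hold the affected row's state
-- before and after the triggering UPDATE.
CREATE TRIGGER CHECK_RAISE
  AFTER UPDATE OF SALARY ON DSN8810.EMP
  REFERENCING OLD AS OLD_ROW
              NEW AS NEW_ROW
  FOR EACH ROW MODE DB2SQL
  WHEN (NEW_ROW.SALARY > 1.2 * OLD_ROW.SALARY)
    SIGNAL SQLSTATE '75001' ('Raise exceeds 20 percent');
```

Because the trigger granularity is FOR EACH ROW, the WHEN condition is evaluated once for every row that the UPDATE statement modifies; a statement trigger (FOR EACH STATEMENT) would instead fire once per triggering statement.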
trigger activation. The process that occurs when the trigger event that is defined in a trigger definition is executed. Trigger activation consists of the evaluation of the triggered action condition and conditional execution of the triggered SQL statements.

trigger activation time. An indication in the trigger definition of whether the trigger should be activated before or after the triggered event.

trigger body. The set of SQL statements that is executed when a trigger is activated and its triggered action condition evaluates to true. A trigger body is also called triggered SQL statements.

trigger cascading. The process that occurs when the triggered action of a trigger causes the activation of another trigger.

triggered action. The SQL logic that is performed when a trigger is activated. The triggered action consists of an optional triggered action condition and a set of triggered SQL statements that are executed only if the condition evaluates to true.

triggered action condition. An optional part of the triggered action. This Boolean condition appears as a WHEN clause and specifies a condition that DB2 evaluates to determine if the triggered SQL statements should be executed.

triggered SQL statements. The set of SQL statements that is executed when a trigger is activated and its triggered action condition evaluates to true. Triggered SQL statements are also called the trigger body.

trigger granularity. A characteristic of a trigger, which determines whether the trigger is activated:
v Only once for the triggering SQL statement
v Once for each row that the SQL statement modifies

triggering event. The specified operation in a trigger definition that causes the activation of that trigger. The triggering event is comprised of a triggering operation (INSERT, UPDATE, or DELETE) and a subject table on which the operation is performed.

triggering SQL operation. The SQL operation that causes a trigger to be activated when performed on the subject table.

trigger package. A package that is created when a CREATE TRIGGER statement is executed. The package is executed when the trigger is activated.

TSO. Time-Sharing Option.

TSO attachment facility. A DB2 facility consisting of the DSN command processor and DB2I. Applications that are not written for the CICS or IMS environments can run under the TSO attachment facility.

typed parameter marker. A parameter marker that is specified along with its target data type. It has the general form:
CAST(? AS data-type)

type 1 indexes. Indexes that were created by a release of DB2 before DB2 Version 4 or that are specified as type 1 indexes in Version 4. Contrast with type 2 indexes. As of Version 8, type 1 indexes are no longer supported.

type 2 indexes. Indexes that are created on a release of DB2 after Version 7 or that are specified as type 2 indexes in Version 4 or later.

U

UCS-2. Universal Character Set, coded in 2 octets, which means that characters are represented in 16 bits per character.

UDF. User-defined function.

UDT. User-defined data type. In DB2 UDB for z/OS, the term distinct type is used instead of user-defined data type. See distinct type.

uncommitted read (UR). The isolation level that allows an application to read uncommitted data.

underlying view. The view on which another view is directly or indirectly defined.

undo. A state of a unit of recovery that indicates that the changes that the unit of recovery made to recoverable DB2 resources must be backed out.

Unicode. A standard that parallels the ISO-10646 standard. Several implementations of the Unicode standard exist, all of which have the ability to represent a large percentage of the characters that are contained in the many scripts that are used throughout the world.

uniform resource locator (URL). A Web address, which offers a way of naming and locating specific items on the Web.

union. An SQL operation that combines the results of two SELECT statements. Unions are often used to merge lists of values that are obtained from several tables.

unique constraint. An SQL rule that no two values in a primary key, or in the key of a unique index, can be the same.

unique index. An index that ensures that no identical key values are stored in a column or a set of columns in a table.

unit of recovery. A recoverable sequence of operations within a single resource manager, such as an instance of DB2. Contrast with unit of work.
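The typed parameter marker entry on this page can be seen in context in a dynamically prepared statement string. The statement text below is invented for illustration; the table and columns are from the DB2 sample database.

```sql
-- Statement string passed to PREPARE: the CAST(? AS ...) form is a
-- typed parameter marker; the bare ? after EMPNO is an untyped one.
UPDATE DSN8810.EMP
  SET SALARY = SALARY + CAST(? AS DECIMAL(9,2))
  WHERE EMPNO = ?
```

Declaring the target type with CAST lets DB2 resolve the data type of the marker at PREPARE time rather than deferring it to EXECUTE.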
unit of recovery identifier (URID). The LOGRBA of the first log record for a unit of recovery. The URID also appears in all subsequent log records for that unit of recovery.

unit of work. A recoverable sequence of operations within an application process. At any time, an application process is a single unit of work, but the life of an application process can involve many units of work as a result of commit or rollback operations. In a multisite update operation, a single unit of work can include several units of recovery. Contrast with unit of recovery.

Universal Unique Identifier (UUID). An identifier that is immutable and unique across time and space (in z/OS).

unlock. The act of releasing an object or system resource that was previously locked and returning it to general availability within DB2.

untyped parameter marker. A parameter marker that is specified without its target data type. It has the form of a single question mark (?).

updatability. The ability of a cursor to perform positioned updates and deletes. The updatability of a cursor can be influenced by the SELECT statement and the cursor sensitivity option that is specified on the DECLARE CURSOR statement.

update hole. The location on which a cursor is positioned when a row in a result table is fetched again and the new values no longer satisfy the search condition. DB2 marks a row in the result table as an update hole when an update to the corresponding row in the database causes that row to no longer qualify for the result table.

update trigger. A trigger that is defined with the triggering SQL operation UPDATE.

upstream. The node in the syncpoint tree that is responsible, in addition to other recovery or resource managers, for coordinating the execution of a two-phase commit.

UR. Uncommitted read.

URE. Unit of recovery element.

URID. Unit of recovery identifier.

URL. Uniform resource locator.

user-defined data type (UDT). See distinct type.

user-defined function (UDF). A function that is defined to DB2 by using the CREATE FUNCTION statement and that can be referenced thereafter in SQL statements. A user-defined function can be an external function, a sourced function, or an SQL function. Contrast with built-in function.

user view. In logical data modeling, a model or representation of critical information that the business requires.

UTF-8. Unicode Transformation Format, 8-bit encoding form, which is designed for ease of use with existing ASCII-based systems. The CCSID value for data in UTF-8 format is 1208. DB2 UDB for z/OS supports UTF-8 in mixed data fields.

UTF-16. Unicode Transformation Format, 16-bit encoding form, which is designed to provide code values for over a million characters and a superset of UCS-2. The CCSID value for data in UTF-16 format is 1200. DB2 UDB for z/OS supports UTF-16 in graphic data fields.

UUID. Universal Unique Identifier.

V

value. The smallest unit of data that is manipulated in SQL.

variable. A data element that specifies a value that can be changed. A COBOL elementary data item is an example of a variable. Contrast with constant.

variant function. See nondeterministic function.

varying-length string. A character or graphic string whose length varies within set limits. Contrast with fixed-length string.

version. A member of a set of similar programs, DBRMs, packages, or LOBs.
A version of a program is the source code that is produced by precompiling the program. The program version is identified by the program name and a timestamp (consistency token).
A version of a DBRM is the DBRM that is produced by precompiling a program. The DBRM version is identified by the same program name and timestamp as a corresponding program version.
A version of a package is the result of binding a DBRM within a particular database system. The package version is identified by the same program name and consistency token as the DBRM.
A version of a LOB is a copy of a LOB value at a point in time. The version number for a LOB is stored in the auxiliary index entry for the LOB.

view. An alternative representation of data from one or more tables. A view can include all or some of the columns that are contained in tables on which it is defined.

view check option. An option that specifies whether every row that is inserted or updated through a view must conform to the definition of that view. A view check option can be specified with the WITH CASCADED
CHECK OPTION, WITH CHECK OPTION, or WITH LOCAL CHECK OPTION clauses of the CREATE VIEW statement.

Virtual Storage Access Method (VSAM). An access method for direct or sequential processing of fixed- and varying-length records on disk devices. The records in a VSAM data set or file can be organized in logical sequence by a key field (key sequence), in the physical sequence in which they are written on the data set or file (entry-sequence), or by relative-record number (in z/OS).

Virtual Telecommunications Access Method (VTAM). An IBM licensed program that controls communication and the flow of data in an SNA network (in z/OS).

| volatile table. A table for which SQL operations choose index access whenever possible.

VSAM. Virtual Storage Access Method.

VTAM. Virtual Telecommunications Access Method (in z/OS).

W

warm start. The normal DB2 restart process, which involves reading and processing log records so that data that is under the control of DB2 is consistent. Contrast with cold start.

X

XCF. See cross-system coupling facility.

| XML node. The smallest unit of valid, complete structure in a document. For example, a node can represent an element, an attribute, or a text string.

| XML publishing functions. Functions that return XML values from SQL values.

X/Open. An independent, worldwide open systems organization that is supported by most of the world's largest information systems suppliers, user organizations, and software companies. X/Open's goal is to increase the portability of applications by combining existing and emerging standards.

XRF. Extended recovery facility.

Z

| z/OS. An operating system for the eServer™ product line that supports 64-bit real and virtual storage.

z/OS Distributed Computing Environment (z/OS DCE). A set of technologies that are provided by the Open Software Foundation to implement distributed computing.
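Returning to the view check option entry that concludes above, a minimal sketch (the view name is invented; the sample employee table is assumed):

```sql
-- Inserts and updates through this view must keep WORKDEPT = 'D11';
-- rows that would violate the view definition are rejected.
CREATE VIEW VDEPT_D11 AS
  SELECT EMPNO, LASTNAME, WORKDEPT
    FROM DSN8810.EMP
    WHERE WORKDEPT = 'D11'
  WITH CHECK OPTION;
```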
Bibliography 1077
v z/OS DFSMSdss Storage Administration Reference, SC35-0424
v z/OS DFSMShsm Managing Your Own Data, SC35-0420
v z/OS DFSMSdfp: Using DFSMSdfp in the z/OS Environment, SC26-7473
v z/OS DFSMSdfp Diagnosis Reference, GY27-7618
v z/OS DFSMS: Implementing System-Managed Storage, SC27-7407
v z/OS DFSMS: Macro Instructions for Data Sets, SC26-7408
v z/OS DFSMS: Managing Catalogs, SC26-7409
v z/OS DFSMS: Program Management, SA22-7643
v z/OS MVS Program Management: Advanced Facilities, SA22-7644
v z/OS DFSMSdfp Storage Administration Reference, SC26-7402
v z/OS DFSMS: Using Data Sets, SC26-7410
v DFSMS/MVS: Using Advanced Services, SC26-7400
v DFSMS/MVS: Utilities, SC26-7414

eServer zSeries®
v IBM eServer zSeries Processor Resource/System Manager Planning Guide, SB10-7033

Fortran: VS Fortran
v VS Fortran Version 2: Language and Library Reference, SC26-4221
v VS Fortran Version 2: Programming Guide for CMS and MVS, SC26-4222

High Level Assembler
v High Level Assembler for MVS and VM and VSE Language Reference, SC26-4940
v High Level Assembler for MVS and VM and VSE Programmer's Guide, SC26-4941

ICSF
v z/OS ICSF Overview, SA22-7519
v Integrated Cryptographic Service Facility Administrator's Guide, SA22-7521

IMS Version 8
v IBM TCP/IP for MVS: Planning and Migration Guide, SC31-7189

TotalStorage® Enterprise Storage Server
v RAMAC Virtual Array: Implementing Peer-to-Peer Remote Copy, SG24-5680
v Enterprise Storage Server Introduction and Planning, GC26-7444
v IBM RAMAC Virtual Array, SG24-6424

Unicode
v z/OS Support for Unicode: Using Conversion Services, SA22-7649

Information about Unicode, the Unicode consortium, the Unicode standard, and standards conformance requirements is available at www.unicode.org

VTAM
v Planning for NetView, NCP, and VTAM, SC31-8063
v VTAM for MVS/ESA Diagnosis, LY43-0078
v VTAM for MVS/ESA Messages and Codes, GC31-8369
v VTAM for MVS/ESA Network Implementation Guide, SC31-8370
v VTAM for MVS/ESA Operation, SC31-8372
v VTAM for MVS/ESA Programming, SC31-8373
v VTAM for MVS/ESA Programming for LU 6.2, SC31-8374
v VTAM for MVS/ESA Resource Definition Reference, SC31-8377

WebSphere family
v WebSphere MQ Integrator Broker: Administration Guide, SC34-6171
v WebSphere MQ Integrator Broker for z/OS: Customization and Administration Guide, SC34-6175
v WebSphere MQ Integrator Broker: Introduction and Planning, GC34-5599
v WebSphere MQ Integrator Broker: Using the Control Center, SC34-6168

z/Architecture™
v z/Architecture Principles of Operation, SA22-7832

z/OS
v z/OS C/C++ Programming Guide, SC09-4765
v z/OS C/C++ Run-Time Library Reference, SA22-7821
v z/OS C/C++ User's Guide, SC09-4767
v z/OS Communications Server: IP Configuration Guide, SC31-8875
v z/OS DCE Administration Guide, SC24-5904
v z/OS DCE Introduction, GC24-5911
v z/OS DCE Messages and Codes, SC24-5912
v z/OS Information Roadmap, SA22-7500
v z/OS Introduction and Release Guide, GA22-7502
v z/OS JES2 Initialization and Tuning Guide, SA22-7532
v z/OS JES3 Initialization and Tuning Guide, SA22-7549
v z/OS Language Environment Concepts Guide, SA22-7567
v z/OS Language Environment Customization, SA22-7564
v z/OS Language Environment Debugging Guide, GA22-7560
v z/OS Language Environment Programming Guide, SA22-7561
v z/OS Language Environment Programming Reference, SA22-7562
v z/OS Managed System Infrastructure for Setup User's Guide, SC33-7985
v z/OS MVS Diagnosis: Procedures, GA22-7587
v z/OS MVS Diagnosis: Reference, GA22-7588
v z/OS MVS Diagnosis: Tools and Service Aids, GA22-7589
v z/OS MVS Initialization and Tuning Guide, SA22-7591
v z/OS MVS Initialization and Tuning Reference, SA22-7592
v z/OS MVS Installation Exits, SA22-7593
v z/OS MVS JCL Reference, SA22-7597
v z/OS MVS JCL User's Guide, SA22-7598
v z/OS MVS Planning: Global Resource Serialization, SA22-7600
v z/OS MVS Planning: Operations, SA22-7601
v z/OS MVS Planning: Workload Management, SA22-7602
v z/OS MVS Programming: Assembler Services Guide, SA22-7605
v z/OS MVS Programming: Assembler Services Reference, Volumes 1 and 2, SA22-7606 and SA22-7607
v z/OS MVS Programming: Authorized Assembler Services Guide, SA22-7608
v z/OS MVS Programming: Authorized Assembler Services Reference Volumes 1-4, SA22-7609, SA22-7610, SA22-7611, and SA22-7612
v z/OS MVS Programming: Callable Services for High-Level Languages, SA22-7613
v z/OS MVS Programming: Extended Addressability Guide, SA22-7614
v z/OS MVS Programming: Sysplex Services Guide, SA22-7617
v z/OS MVS Programming: Sysplex Services Reference, SA22-7618
1082 Application Programming and SQL Guide
Index
Special characters
_ (underscore)
  assembler host variable 131
: (colon)
  assembler host variable 133
  C host variable 146
  COBOL 176
  Fortran host variable 206
  PL/I host variable 217
' (apostrophe)
  string delimiter precompiler option 462

A
abend
  before commit point 416
  DB2 807, 839
  effect on cursor position 113
  exit routines 822
  for synchronization calls 518
  IMS
    U0102 523
    U0775 419
    U0778 420
  multiple-mode program 416
  program 414
  reason codes 823
  return code posted to CAF CONNECT 809
  return code posted to RRSAF CONNECT 841
  single-mode program 416
  system
    X'04E' 515
ABRT parameter of CAF (call attachment facility) 815, 824
access path
  affects lock attributes 405
  direct row access 253, 739
  index-only access 739
  low cluster ratio
    suggests table space scan 746
    with list prefetch 769
  multiple index access
    description 749
  PLAN_TABLE 737
  selection
    influencing with SQL 713
    problems 671
    queries containing host variables 698
    Visual Explain 713, 727
  table space scan 746
  unique index with matching value 751
ACQUIRE
  option of BIND PLAN subcommand
    locking tables and table spaces 390
activity sample table 897
address space
  initialization
    CAF CONNECT command 811
    CAF OPEN command 813
    sample scenarios 820, 867
    separate tasks 800, 831
  termination
    CAF CLOSE command 816
    CAF DISCONNECT command 817
ALL quantified predicate 51
ALLOCATE CURSOR statement 649
ambiguous cursor 401
AMODE link-edit option 472
ANY quantified predicate 52
APOST precompiler option 462
application plan
  binding 475
  creating 472
  dynamic plan selection for CICS applications 484
  invalidated 371
  listing packages 475
  rebinding 370
  using packages 366
application program
  bill of materials 997
  coding SQL statements
    assembler 129
    coding conventions 70
    data declarations 121
    data entry 30
    description 69
    dynamic SQL 535, 568
    host variables 71
    selecting rows using a cursor 93
  design considerations
    bind 363
    CAF 800
    checkpoint 517
    IMS calls 517
    planning for changes 368
    precompile 363
    programming for DL/I batch 516
    RRSAF 831
    SQL statements 517
    stored procedures 569
    structure 795
    synchronization call abends 518
    using ISPF (interactive system productivity facility) 495
    XRST call 517
  duplicate CALL statements 662
  external stored procedures 581
  object extensions 279
  preparation
    assembling 471
    binding 366, 472
    compiling 471
    DB2 precompiler option defaults 469
Index X-3
calculated values (continued)
  summarizing group values 11
call attachment facility (CAF) 799
CALL DSNALI statement 807, 819
CALL DSNRLI statement 840
CALL statement
  example 621
  multiple 662
  SQL procedure 600
cardinality of user-defined table function
  improving query performance 717
Cartesian join 756
CASE statement (SQL procedure) 600
catalog statistics
  influencing access paths 723
catalog table
  accessing 18
  SYSIBM.LOCATIONS 427
  SYSIBM.SYSCOLUMNS 18
  SYSIBM.SYSTABAUTH 18
CCSID (coded character set identifier)
  controlling in COBOL programs 196
  effect of DECLARE VARIABLE 77
  host variable 77
  precompiler option 463
  SQLDA 560
character host variable
  assembler 134
  C 147
  COBOL 178
  Fortran 207
  PL/I 218
character host variable array
  C 154
  COBOL 185
  PL/I 221
character large object (CLOB) 281
character string
  literals 70
  mixed data 4
  width of column in results 64
check constraint
  check integrity 244
  considerations 243
  CURRENT RULES special register effect 244
  defining 243
  description 243
  determining violations 893
  enforcement 244
  programming considerations 893
CHECK-pending status 244
checkpoint
  calls 415, 417
  frequency 419
CHKP call, IMS 415, 417
CICS
  attachment facility
    controlling from applications 873
    programming considerations 873
  DSNTIAC subroutine
    assembler 143
    C 170
    COBOL 202
    PL/I 232
  environment planning 489
  facilities
    command language translator 470
    control areas 499
    EDF (execution diagnostic facility) 505
  language interface module (DSNCLI)
    use in link-editing an application 472
  logical unit of work 414
  operating
    indoubt data 415
    running a program 499
    system failure 415
  preparing with JCL procedures 493
  programming
    DFHEIENT macro 132
  sample applications 917, 920
  SYNCPOINT command 414
  storage handling
    assembler 143
    C 170
    COBOL 202
    PL/I 232
  sync point 414
  thread reuse 873
  unit of work 414
claim
  effect of cursor WITH HOLD 403
CLOSE
  statement
    description 98
    recommendation 103
    WHENEVER NOT FOUND clause 554, 565
CLOSE (connection function of CAF)
  description 804
  language examples 816
  program example 824
  syntax 815
  usage 815
cluster ratio
  effects
    table space scan 746
    with list prefetch 769
COALESCE function 42
COBOL application program
  assignment rules 197
  character host variable 178
    fixed-length string 178
    varying-length string 178
  character host variable array 185
    fixed-length string array 185
    varying-length string array 185
  CODEPAGE compiler option 197
  coding SQL statements 170
  compiling 471
  controlling CCSID 196
  data type compatibility 198
Index X-5
CONNECT (connection function of CAF) (continued) CREATE TRIGGER (continued)
program example 824 example 261
syntax 809 timestamp 271
CONNECT (Type 1) statement 435 trigger naming 263
CONNECT precompiler option 463 CREATE VIEW statement 26
CONNECT statement, with DRDA access 427 created temporary table
connection instances 22
DB2 table space scan 746
connecting from tasks 795 use of NOT NULL 22
function of CAF working with 21
CLOSE 815, 824 CS (cursor stability)
CONNECT 809, 812, 824 optimistic concurrency control 394
description 802 page and row locking 394
DISCONNECT 816, 824 CURRENDATA option of BIND
OPEN 813, 824 plan and package options differ 402
sample scenarios 820, 821 CURRENT PACKAGESET special register
summary of behavior 819 dynamic plan switching 484
TRANSLATE 818, 826 identify package collection 476
function of RRSAF CURRENT RULES special register
AUTH SIGNON 849 effect on check constraints 244
CREATE THREAD 870 usage 482
description 834 CURRENT SERVER special register
IDENTIFY 841, 870 description 476
sample scenarios 867 saving value in application program 448
SIGNON 846, 870 CURRENT SQLID special register
summary of behavior 865 use in test 499
TERMINATE IDENTIFY 862, 870 value in INSERT statement 20
TERMINATE THREAD 861, 870 cursor
TRANSLATE 864 ambiguous 401
constants, syntax attributes
C 165 using GET DIAGNOSTICS 106
Fortran 210 using SQLCA 106
CONTINUE clause of WHENEVER statement 84 closing 98
CONTINUE handler (SQL procedure) CLOSE statement 103
description 603 deleting a current row 102
example 603 description 93
correlated reference duplicate 662
correlation name 54 dynamic scrollable 105
SQL rules 47 effect of abend on position 113
usage 47 example
using in subquery 54 retrieving backward with scrollable cursor 115
correlated subqueries 705 updating specific row with rowset-positioned
correlation name 54 cursor 117
CREATE GLOBAL TEMPORARY TABLE statement 22 updating with non-scrollable cursor 114
CREATE TABLE statement updating with rowset-positioned cursor 116
DEFAULT clause 20 insensitive scrollable 104
NOT NULL clause 19 maintaining position 112
PRIMARY KEY clause 246 non-scrollable 103
relationship names 248 open state 112
UNIQUE clause 20, 246 OPEN statement 95
usage 19 result table 93
CREATE THREAD, RRSAF row-positioned
description 835 declaring 93
effect of call order 865, 866 deleting a current row 97
implicit connection 835 description 93
language examples 861 end-of-data condition 95
program example 870 retrieving a row of data 96
syntax usage 859 steps in using 93
CREATE TRIGGER updating a current row 97
activation order 271 rowset-positioned
description 261 declaring 98
deadlock (continued) DELETE statement
with RELEASE(DEALLOCATE) 382 correlated subquery 55
X'00C90088' reason code in SQLCA 378 description 37
debugging application programs 502 positioned
DEC15 FOR ROW n OF ROWSET clause 102
precompiler option 464 restrictions 97
rules 16 WHERE CURRENT clause 97, 102
DEC31 subquery 51
avoiding overflow 17 deleting
precompiler option 464 current rows 97
rules 17 data 37
decimal every row from a table 38
15 digit precision 16 rows from a table 37
31 digit precision 17 delimiter, SQL 70
arithmetic 16 department sample table
DECIMAL creating 20
constants 165 description 898
data type, in C 164 DESCRIBE CURSOR statement 649
function, in C 164 DESCRIBE INPUT statement 551
declaration DESCRIBE PROCEDURE statement 648
generator (DCLGEN) 121 DESCRIBE statement
in an application program 122 column labels 562
variables in CAF program examples 829 INTO clauses 556, 558
DECLARE (SQL procedure) 601 DFHEIENT macro 132
DECLARE CURSOR statement DFSLI000 (IMS language interface module) 472
description, row-positioned 93 direct row access 253, 739
description, rowset-positioned 98 DISCONNECT (connection function of CAF)
FOR UPDATE clause 94 description 804
multilevel security 94 language examples 817
prepared statement 553, 556 program example 824
scrollable cursor 104 syntax 816
WITH HOLD clause 112 syntax usage 816
WITH RETURN option 590 displaying
WITH ROWSET POSITIONING clause 98 table columns 18
DECLARE GLOBAL TEMPORARY TABLE table privileges 18
statement 23 DISTINCT
DECLARE TABLE statement clause of SELECT statement 7
advantages of using 71 unique values 7
assembler 131 distinct type
C 145 assigning values 351
COBOL 173 comparing types 350
description 71 description 349
Fortran 205 example
PL/I 215 argument of user-defined function (UDF) 354
table description 121 arguments of infix operator 354
DECLARE VARIABLE statement casting constants 354
changing CCSID 78 casting function arguments 354
coding 77 casting host variables 354
description 77 LOB data type 354
declared temporary table function arguments 353
including column defaults 24 strong typing 350
including identity columns 23 UNION of 352
instances 23 distributed data
ON COMMIT clause 25 choosing an access method 424
qualifier for 23 coordinating updates 433
remote access using a three-part name 429 copying a remote table 447
requirements 23 DBPROTOCOL bind option 423, 429
working with 21 encoding scheme of retrieved data 448
dedicated virtual memory pool 765 example
DEFER(PREPARE) 437 accessing remote temporary table 430
calling stored procedure at remote location 424
DSNHLI entry point to DSNALI dynamic prefetch
implicit calls 804 description 768
program example 828 dynamic SQL
DSNHLI entry point to DSNRLI advantages and disadvantages 536
implicit calls 835 assembler program 555
program example 869 C program 555
DSNHLI2 entry point to DSNALI 826 caching
DSNHPLI procedure 490 effect of RELEASE bind option 391
DSNMTV01 module 521 caching prepared statements 539
DSNRLI (RRSAF language interface module) COBOL application program 173
deleting 869 COBOL program 568
loading 869 description 535
DSNTEDIT CLIST 1003 effect of bind option REOPT(ALWAYS) 567
DSNTEP2 and DSNTEP4 sample program effect of WITH HOLD cursor 548
specifying SQL terminator 928 EXECUTE IMMEDIATE statement 546
DSNTEP2 sample program fixed-list SELECT statements 552, 554
how to run 921 Fortran program 205
parameters 922 host languages 545
program preparation 921 non-SELECT statements 545, 549
DSNTEP4 sample program PL/I 555
how to run 921 PREPARE and EXECUTE 547, 549
parameters 922 programming 535
program preparation 921 requirements 537
DSNTIAC subroutine restrictions 536
assembler 143 sample C program 944
C 170 statement caching 539
COBOL 202 statements allowed 1013
PL/I 232 using DESCRIBE INPUT 551
DSNTIAD sample program varying-list SELECT statements 554, 567
how to run 921 DYNAMICRULES bind option 479
parameters 922
program preparation 921
specifying SQL terminator 926 E
DSNTIAR subroutine ECB (event control block)
assembler 142 address in CALL DSNALI parameter list 807
C 169 CONNECT connection function of CAF 809, 812
COBOL 201 CONNECT, RRSAF 841
description 89 program example 824, 826
Fortran 212 programming with CAF (call attachment facility) 824
PL/I 230 EDIT panel, SPUFI
return codes 90 empty 60
using 90 SQL statements 61
DSNTIAUL sample program embedded semicolon
how to run 921 embedded 926
parameters 922 employee photo and resume sample table 902
program preparation 921 employee sample table 899
DSNTIR subroutine 212 employee-to-project-activity sample table 906
DSNTPSMP stored procedure 612 ENCRYPT_TDES function 250
DSNTRACE data set 822 END-EXEC delimiter 70
DSNXDBRM 457 end-of-data condition 95, 99
DSNXNBRM 457 error
duplicate CALL statements 662 arithmetic expression 84
duration of locks division by zero 84
controlling 390 handling 83
description 386 messages generated by precompiler 509
DYNAM option of COBOL 174 overflow 84
dynamic plan selection return codes 82
restrictions with CURRENT PACKAGESET special run 508
register 484 ESTAE routine in CAF (call attachment facility) 822
using packages with 484 exception condition handling 83
global transaction host variable (continued)
RRSAF support 847 graphic
glossary 1041 assembler 135
GO TO clause of WHENEVER statement 84 C 149
GOTO statement (SQL procedure) 600 COBOL 179
governor (resource limit facility) 543 PL/I 218
GRANT statement 500 impact on access path selection 698
graphic host variable in equal predicate 702
assembler 135 inserting into tables 75
C 149 LOB
COBOL 179 assembler 284
PL/I 218 C 285
graphic host variable array COBOL 285
C 156 Fortran 286
COBOL 187 PL/I 287
PL/I 221 naming a structure
GRAPHIC precompiler option 464 C 158
GROUP BY clause COBOL 189
effect on OPTIMIZE clause 715 PL/I program 223
use with aggregate functions 11 numeric
assembler 133
C 147
H COBOL 176
handler, using in SQL procedure 603 Fortran 207
HAVING clause PL/I 218
selecting groups subject to conditions 12 PL/I 217
subquery 51 PREPARE statement 553
HOST REXX 237
FOLD value for C and CPP 464 selecting single row 73
precompiler option 464 static SQL flexibility 536
host language tuning queries 698
declarations in DB2I (DB2 Interactive) 121 updating values in tables 74
dynamic SQL 545 using 72
host structure using INSERT with VALUES clause 75
C 158 using SELECT INTO 73
COBOL 189 using SELECT INTO with aggregate function 74
description 72, 80 using SELECT INTO with expressions 74
PL/I 223 host variable array
retrieving row of data 80 C 146, 153
using SELECT INTO 80 character
host variable C 154
assembler 133 COBOL 185
C 146, 147 PL/I 221
changing CCSID 77 COBOL 175, 183
character description 72, 78
assembler 134 graphic
C 147 C 156
COBOL 178 COBOL 187
Fortran 207 PL/I 221
PL/I 218 indicator variable array 79
COBOL 175, 176 inserting multiple rows 79
description 71 numeric
example query 698 C 153
FETCH statement 554 COBOL 183
floating-point PL/I 220
assembler 134 PL/I 217, 220
C/C++ 164 retrieving multiple rows 78
COBOL 176 hybrid join
PL/I 227 description 758
Fortran 206, 207
JCL (job control language) (continued) large object (LOB) (continued)
precompilation procedures 489 description 281
precompiler option list format 491 expression 289
preparing a CICS program 493 indicator variable 291
preparing a object-oriented program 495 locator 288
starting a TSO batch application 487 materialization 288
join operation sample applications 284
Cartesian 756 LEAVE statement (SQL procedure) 600
description 752 LEFT OUTER JOIN clause 42
FULL OUTER JOIN 41 level of a lock 384
hybrid LEVEL precompiler option 464
description 758 limited partition scan 743
INNER JOIN 40 LINECOUNT precompiler option 465
join sequence 760 link-editing 471
joining a table to itself 41 list prefetch
joining tables 39 description 768
LEFT OUTER JOIN 42 thresholds 769
merge scan 757 load module structure of CAF (call attachment
more than one join 45 facility) 802
more than one join type 45 load module structure of RRSAF 836
nested loop 755 LOAD MVS macro used by CAF 801
operand LOAD MVS macro used by RRSAF 832
nested table expression 46 LOB
user-defined table function 46 lock
RIGHT OUTER JOIN 43 concurrency with UR readers 399
SQL rules 44 description 408
star join 760 LOB (large object)
star schema 760 lock duration 410
join sequence LOCK TABLE statement 411
definition 677 locking 408
modes of LOB locks 410
modes of table space locks 410
K LOB column, definition 281
KEEPDYNAMIC option LOB variable
BIND PACKAGE subcommand 541 assembler 135
BIND PLAN subcommand 541 C 152
key COBOL 182
composite 246 Fortran 207
foreign 248 PL/I 219
parent 245 LOB variable array
primary C 157
choosing 245 COBOL 188
defining 246 PL/I 222
recommendations for defining 247 lock
using timestamp 245 avoidance 400
unique 887 benefits 376
keywords, reserved 1009 class
transaction 375
compatibility 388
L description 375
label, column 562 duration
language interface modules controlling 390
DSNALI 592 description 386
DSNCLI 472 LOBs 410
DSNRLI 592 effect of cursor WITH HOLD 402
program preparation 363 effects
large object (LOB) deadlock 377
data space 288 suspension 376
declaring host variables 284 timeout 376
declaring LOB locators 284 escalation
defining and moving data into DB2 281 when retrieving large numbers of rows 891
naming convention (continued) OPEN
PL/I 215 statement
REXX 236 opening a cursor 95
tables you create 20 opening a rowset cursor 98
NATIONAL data type 196 performance 772
nested table expression prepared SELECT 553
correlated reference 46 USING DESCRIPTOR clause 566
correlation name 46 without parameter markers 564
join operation 46 OPEN (connection function of CAF)
processing 773 description 804
NEWFUN language examples 814
enabling V8 new object 453 program example 824
precompiler option 465 syntax 813
NODYNAM option of COBOL 174 syntax usage 813
NOFOR precompiler option 465 optimistic concurrency control 394
NOGRAPHIC precompiler option 465 OPTIMIZE FOR n ROWS clause 714
noncorrelated subqueries 706 effect on distributed performance 442
nonsegmented table space interaction with FETCH FIRST clause 714
scan 747 OPTIONS precompiler option 466
nontabular data storage 893 ORDER BY clause
NOOPTIONS precompiler option 465 derived columns 10
NOPADNTSTR precompiler option 466 effect on OPTIMIZE clause 715
NOSOURCE precompiler option 466 SELECT statement 10
NOT FOUND clause of WHENEVER statement 84 with AS clause 10
notices, legal 1037 organization application
NOXREF precompiler option 466 examples 915
NUL character in C 146 originating task 788
NUL-terminated string in C 165 outer join
NULL EXPLAIN report 754
pointer in C 146 FULL OUTER JOIN 41
null value LEFT OUTER JOIN 42
column value of UPDATE statement 36 materialization 754
host structure 81 RIGHT OUTER JOIN 43
indicator variable 75
indicator variable array 79
inserting into columns 76 P
IS DISTINCT FROM predicate 76 package
IS NULL predicate 76 advantages 367
Null, in REXX 236 binding
numeric DBRM to a package 472
data EXPLAIN option for remote 736
width of column in results 64 PLAN_TABLE 729
numeric data remote 473
description 3 to plans 475
numeric host variable deciding how to use 366
assembler 133 identifying at run time 475
C 147 invalidated 371
COBOL 176 dropping objects 369
Fortran 207 listing 475
PL/I 218 location 476
numeric host variable array rebinding examples 370
C 153 rebinding with pattern-matching characters 369
COBOL 183 selecting 475, 476
PL/I 220 trigger 371
version, identifying 479
PADNTSTR precompiler option 466
O page
object of a lock 389 locks
object-oriented program, preparation 495 description 384
ON clause, joining tables 39 PAGE_RANGE column of PLAN_TABLE 743
ONEPASS precompiler option 466
predicate (continued) REBIND PLAN subcommand of DSN (continued)
stage 2 options (continued)
evaluated 677 RELEASE 390
influencing creation 720 remote 473
subquery 676 REBIND TRIGGER PACKAGE subcommand of
predictive governing DSN 371
in a distributed environment 544 rebinding
with DEFER(PREPARE) 544 automatically
writing an application for 544 conditions for 371
PREPARE statement EXPLAIN processing 735
dynamic execution 548 changes that require 368
host variable 553 list of plans and packages 371
INTO clause 556 lists of plans or packages 1003
prepared SQL statement options for 366
caching 541 packages with pattern-matching characters 369
statements allowed 1013 planning for 373
PRIMARY KEY clause plans 370
ALTER TABLE statement 247 plans or packages in use 366
CREATE TABLE statement 246 Recoverable Resource Manager Services attachment
PRIMARY_ACCESSTYPE column of facility (RRSAF)
PLAN_TABLE 739 See RRSAF
problem determination, guidelines 508 recovery
program preparation 453 identifying application requirements 418
program problems checklist IMS application program 415
documenting error situations 502 IMS batch 421
error messages 503 planning for 413
project activity sample table 905 recursive SQL
project application, description 915 controlling depth 1000
project sample table 904 description 15
examples 997
infinite loops 16
Q rules 15
query parallelism 785 single level explosion 997
QUOTE precompiler option 466 summarized explosion 999
QUOTESQL precompiler option 467 referential constraint
defining 245
description 245
R determining violations 893
reason code informational 250
CAF name 248
translation 823, 826 on tables with data encryption 250
X'00C10824' 816, 817 referential integrity
X'00F30050' 822 effect on subqueries 56
X'00F30083' 822 programming considerations 893
X'00C90088' 378 register conventions
X'00C9008E' 377 CAF (call attachment facility) 807
X'00D44057' 515 RRSAF 840
REBIND PACKAGE subcommand of DSN RRSAF 840
generating list of 1003 RELEASE
options option of BIND PLAN subcommand
ISOLATION 394 combining with other options 390
RELEASE 390 release information block (RIB) 807
rebinding with wildcard characters 369 RELEASE LOCKS field of panel DSNTIP4
remote 473 effect on page and row locks 402
REBIND PLAN subcommand of DSN RELEASE SAVEPOINT statement 422
generating list of 1003 RELEASE statement, with DRDA access 428
options reoptimizing access path 699
ACQUIRE 390 REPEAT statement (SQL procedure) 600
ISOLATION 394 REPLACE statement (COBOL) 175
NOPKLIST 370 reserved keywords 1009
PKLIST 370
rowset cursor (continued) sample application (continued)
multiple-row FETCH 99 structure of 911
opening 98 use 917
using 98 user-defined function 916
rowset parameter, DB2 for z/OS support for 447 sample program
RR (repeatable read) DSN8BC3 202
how locks are held (figure) 398 DSN8BD3 170
page and row locking 398 DSN8BE3 170
RRS global transaction DSN8BF3 213
RRSAF support 847 DSN8BP3 231
RRSAF sample table
application program DSN8810.ACT (activity) 897
examples 869 DSN8810.DEMO_UNICODE (Unicode sample) 907
preparation 832 DSN8810.DEPT (department) 898
connecting to DB2 870 DSN8810.EMP (employee) 899
description 831 DSN8810.EMP_PHOTO_RESUME (employee photo
function descriptions 840 and resume) 902
load module structure 836 DSN8810.EMPPROJACT (employee-to-project
programming language 832 activity) 906
register conventions 840 DSN8810.PROJ (project) 904
restrictions 831 PROJACT (project activity) 905
return codes views on 908
AUTH SIGNON 849 savepoint
CONNECT 841 description 421
SIGNON 846 distributed environment 430
TERMINATE IDENTIFY 862 RELEASE SAVEPOINT statement 422
TERMINATE THREAD 861 restrictions on use 422
TRANSLATE 864 ROLLBACK TO SAVEPOINT 422
run environment 833 SAVEPOINT statement 422
RRSAF (Recoverable Resource Manager Services setting multiple times 421
attachment facility) use with DRDA access 421
transactions SAVEPOINT statement 422
using global transactions 383 scope of a lock 384
RS (read stability) scrollable cursor
page and row locking (figure) 397 comparison of types 108
RUN subcommand of DSN DB2 UDB for z/OS down-level requester 447
return code processing 486 distributed environment 430
running a program in TSO foreground 485 dynamic
running application program dynamic model 105
CICS 489 fetching current row 109
errors 508 fetch orientation 107
IMS 488 optimistic concurrency control 394
performance considerations 710
retrieving rows 107
S sensitive dynamic 105
sample application sensitive static 104
call attachment facility 800 sensitivity 109
databases, for 912 static
DB2 private protocol access 969 creating delete hole 109
DRDA access 961 creating update hole 110
dynamic SQL 944 holes in result table 109
environments 917 number of rows 107
languages 917 removing holes 111
LOB 916 static model 105
organization 915 updatable 104
phone 915 scrolling
programs 917 backward through data 887
project 915 backward using identity columns 888
RRSAF 832 backward using ROWIDs 888
static SQL 944 in any direction 889
stored procedure 915 ISPF (Interactive System Productivity Facility) 64
special register (continued) SQL procedure statement
CURRENT RULES 482 CALL statement 600
user-defined functions 324 CASE statement 600
SPUFI compound statement 600
browsing output 63 CONTINUE handler 603
changed column widths 64 EXIT handler 603
created column heading 64 GET DIAGNOSTICS statement 600
default values 60 GOTO statement 600
entering comments 62 handler 603
panels handling errors 603
filling in 59 IF statement 600
format and display output 63 ITERATE statement 600
previous values displayed on panel 59 LEAVE statement 600
selecting on DB2I menu 59 LOOP statement 600
processing SQL statements 59, 62 REPEAT statement 600
retrieving Unicode data 61 RESIGNAL statement 601
setting SQL terminator 62 RETURN statement 601
specifying SQL statement terminator 60 SIGNAL statement 601
SQLCODE returned 63 SQL statement 600
SQL (Structured Query Language) WHILE statement 600
checking execution 82 SQL statement (SQL procedure) 600
coding SQL statement coprocessor
assembler 129 for C 458
basics 69 for C++ 459
C 143 for COBOL 460
C++ 143 for PL/I 461
COBOL 170 processing SQL statements 454
dynamic 568 SQL statement nesting
Fortran 203 restrictions 346
Fortran program 204 stored procedures 346
object extensions 279 user-defined functions 346
PL/I 213 SQL statement terminator
REXX 232 modifying in DSNTEP2 and DSNTEP4 928
cursors 93 modifying in DSNTIAD 926
dynamic modifying in SPUFI 60
coding 535 specifying in SPUFI 60
sample C program 944 SQL statements
statements allowed 1013 ALLOCATE CURSOR 649
host variable arrays 71 ALTER FUNCTION 296
host variables 71 ASSOCIATE LOCATORS 649
keywords, reserved 1009 CLOSE 98, 103, 554
return codes COBOL program sections 172
checking 82 coding REXX 235
handling 89 comments
statement terminator 926 assembler 131
structures 71 C 145
syntax checking 429 COBOL 173
varying-list 554, 567 Fortran 205
SQL communication area (SQLCA) PL/I 214
description 82 REXX 235
using DSNTIAR to format 89 CONNECT (Type 1) 435
SQL precompiler option 467 CONNECT (Type 2) 435
SQL procedure CONNECT, with DRDA access 427
conditions, handling 603 continuation
forcing SQL error 607 assembler 131
preparation using DSNTPSMP procedure 610 C 145
program preparation 609 COBOL 173
referencing SQLCODE and SQLSTATE 604 Fortran 205
SQL variable 601 PL/I 215
statements allowed 1018 REXX 236
CREATE FUNCTION 296
SQLSTATE (continued) stored procedure (continued)
'2D521' 420, 515 using temporary tables in 591
'57015' 519 WLM_REFRESH 1025
referencing in SQL procedure 604 writing 581
values 83 writing in REXX 594
SQLVAR field of SQLDA 559 stormdrain effect 874
SQLWARNING clause of WHENEVER statement 83 string
SSN (subsystem name) data type 3
CALL DSNALI parameter list 807 fixed-length
parameter in CAF CONNECT function 809 assembler 134
parameter in CAF OPEN function 813 COBOL 178
parameter in RRSAF CONNECT function 841 PL/I 229
SQL calls to CAF (call attachment facility) 804 host variables in C 165
SQL calls to RRSAF (recoverable resources services varying-length
attachment facility) 835 assembler 134
star join 760 COBOL 178
dedicated virtual memory pool 765 PL/I 229
star schema subquery
defining indexes for 720 basic predicate 51
state conceptual overview 49
of a lock 386 correlated
statement table DELETE statement 55
column descriptions 780 description 53
static SQL example 53
description 535 tuning 705
host variables 536 UPDATE statement 55
sample C program 944 DELETE statement 55
STDDEV function description 49
when evaluation occurs 745 EXISTS predicate 52
STDSQL precompiler option 468 IN predicate 52
STOP DATABASE command join transformation 707
timeout 377 noncorrelated 706
storage quantified predicate 51
acquiring referential constraints 56
retrieved row 559 restrictions with DELETE 56
SQLDA 557 tuning 705
addresses in SQLDA 560 tuning examples 709
storage group, for sample application data 912 UPDATE statement 55
stored procedure use with UPDATE, DELETE, and INSERT 51
accessing transition tables 328, 654 subsystem name (SSN) 804, 835
binding 593 subsystem parameters
CALL statement 621 MAX_NUM_CUR 662
calling from a REXX procedure 654 MAX_ST_PROC 662
defining parameter lists 627, 628, 629 summarizing group values 11
defining to DB2 575 SYNC call, IMS 415
DSNACICS 1029 SYNC parameter of CAF (call attachment facility) 815,
example 570 824
invoking from a trigger 269 synchronization call abends 518
languages supported 581 SYNCPOINT command of CICS 414, 415
linkage conventions 624 syntax diagram
returning non-relational data 591 how to read xx
returning result set 590 SYSLIB data sets 490
running as authorized program 592 Sysplex query parallelism
running multiple instances 662 splitting large queries across DB2 members 785
statements allowed 1016 SYSPRINT precompiler output
testing 664 options section 510
usage 569 source statements section, example 511
use of special registers 586 summary section, example 512
using COMMIT in 586 symbol cross-reference section 512
using host variables with 573 used to analyze errors 510
using ROLLBACK in 586 SYSTERM output to analyze errors 509
TSO (continued) UPDATE statement (continued)
DSNALI language interface module 801 subquery 51
TEST command 503 updating
tuning during retrieval 890
DB2 large volumes 890
queries containing host variables 698 values from host variables 74
two-phase commit, definition 433 UR (uncommitted read)
TWOPASS precompiler option 468 concurrent access restrictions 399
effect on reading LOBs 409
page and row locking 396
U recommendation 383
Unicode USE AND KEEP EXCLUSIVE LOCKS option of WITH
data, retrieving from DB2 UDB for z/OS 561 clause 403
sample table 907 USE AND KEEP SHARE LOCKS option of WITH
UNION clause clause 403
columns of result table 13 USE AND KEEP UPDATE LOCKS option of WITH
combining SELECT statements 13 clause 403
effect on OPTIMIZE clause 715 USER special register
eliminating duplicates 13 value in INSERT statement 20
keeping duplicates with ALL 13 value in UPDATE statement 36
removing duplicates with sort 772 user-defined function
UNIQUE clause 246 statements allowed 1016
unit of recovery user-defined function (UDF)
indoubt abnormal termination 346
recovering CICS 415 accessing transition tables 328
restarting IMS 416 ALTER FUNCTION statement 296
unit of work authorization ID 334
CICS description 414 call type 311
completion casting arguments 345
commit 414 characteristics 296
open cursors 112 coding guidelines 300
releasing locks 413 concurrent 335
roll back 414 CREATE FUNCTION statement 296
TSO 413 data type promotion 342
description 413 DBINFO structure 313
DL/I batch 419 definer 294
duration 413 defining 296
IMS description 293
batch 419 diagnostic message 310
commit point 415 DSN_FUNCTION_TABLE 343
ending 415 example
starting point 415 external scalar 294, 298
prevention of data access by other users 413 external table 300
TSO function resolution 342
COMMIT statement 413 overloading operator 299
completion 413 sourced 299
ROLLBACK statement 413 SQL 300
updatable cursor 94 function resolution 339
UPDATE host data types
lock mode assembler 305
page 387 C 305
row 387 COBOL 305
table, partition, and table space 387 PL/I 305
UPDATE statement implementer 294
correlated subqueries 55 implementing 300
description 36 indicators
positioned input 309
FOR ROW n OF ROWSET 102 result 310
restrictions 97 invoker 294
WHERE CURRENT clause 97, 101 invoking 338
SET clause 36 invoking from a trigger 269
WITH HOLD cursor
effect on dynamic SQL 548
effect on locks and claims 402
WLM_REFRESH stored procedure
description 1025
option descriptions 1026, 1028
sample JCL 1027, 1028
syntax diagram 1026
write-down privilege 273
X
XREF precompiler option 468
XRST call, IMS 417
Printed in USA
SC18-7415-00
Spine information:
IBM DB2 Universal Database for z/OS Version 8 Application Programming and SQL Guide