DB2 Universal Database for z/OS: Application Programming and SQL Guide
Version 8
SC18-7415-05
Note: Before using this information and the product it supports, be sure to read the general information under "Notices" on page 1183.
Sixth Edition, Softcopy Only (February 2008)

This edition applies to Version 8 of IBM DB2 Universal Database for z/OS (DB2 UDB for z/OS), product number 5625-DB2, and to any subsequent releases until otherwise indicated in new editions. Make sure you are using the correct edition for the level of the product.

This softcopy version is based on the printed edition of the book and includes the changes indicated in the printed version by vertical bars. Additional changes made to this softcopy version of the book since the hardcopy book was published are indicated by the hash (#) symbol in the left-hand margin. Editorial changes that have no technical significance are not noted.

This and other books in the DB2 UDB for z/OS library are periodically updated with technical changes. These updates are made available to licensees of the product on CD-ROM and on the Web (currently at www.ibm.com/software/data/db2/zos/library.html). Check these resources to ensure that you are using the most current information.

© Copyright International Business Machines Corporation 1983, 2008. All rights reserved.

US Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
About this book . . . xxi
  Who should read this book . . . xxi
  Terminology and citations . . . xxi
  How to read the syntax diagrams . . . xxii
  Accessibility . . . xxiii
  How to send your comments . . . xxiv
Declaring host variable arrays
Using host structures
Determining equivalent SQL and COBOL data types
Determining compatibility of SQL and COBOL data types
Using indicator variables and indicator variable arrays
Handling SQL error return codes
Coding considerations for object-oriented extensions in COBOL
Coding SQL statements in a Fortran application
Defining the SQL communication area
Defining SQL descriptor areas
Embedding SQL statements
Using host variables
Declaring host variables
Determining equivalent SQL and Fortran data types
Determining compatibility of SQL and Fortran data types
Using indicator variables
Handling SQL error return codes
Coding SQL statements in a PL/I application
Defining the SQL communication area
Defining SQL descriptor areas
Embedding SQL statements
Using host variables and host variable arrays
Declaring host variables
Declaring host variable arrays
Using host structures
Determining equivalent SQL and PL/I data types
Determining compatibility of SQL and PL/I data types
Using indicator variables and indicator variable arrays
Handling SQL error return codes
Coding SQL statements in a REXX application
Defining the SQL communication area
Defining SQL descriptor areas
Accessing the DB2 REXX Language Support application programming interfaces
Embedding SQL statements in a REXX procedure
Using cursors and statement names
Using REXX host variables and data types
Using indicator variables
Setting the isolation level of SQL statements in a REXX procedure
Comparing distinct types
Assigning distinct types
Using distinct types in UNIONs
Invoking functions with distinct types
Combining distinct types with user-defined functions and LOBs
Bind processes for DRDA and DB2 private protocol access
Precompiler and bind options for DRDA access
Coding methods for distributed data
Using three-part table names to access distributed data
Using explicit CONNECT statements to access distributed data
Coordinating updates to two or more data sources
Working without two-phase commit
Update restrictions on servers that do not support two-phase commit
Forcing update restrictions by using CONNECT (Type 1)
Maximizing performance for distributed data
Coding efficient queries
Maximizing LOB performance in a distributed environment
Using bind options to improve performance for distributed applications
Using block fetch in distributed applications
Limiting the number of DRDA network transmissions
Limiting the number of rows returned to DRDA clients
Working with distributed data
SQL limitations at dissimilar servers
Executing long SQL statements in a distributed environment
Retrieving data from ASCII or Unicode tables
Accessing data with a scrollable cursor when the requester is down-level
Accessing data with a rowset-positioned cursor when the requester is down-level
Maintaining data currency by using cursors
Copying a table from a remote location
Transmitting mixed data
Dynamic execution using EXECUTE IMMEDIATE
Dynamic execution using PREPARE and EXECUTE
Dynamic execution of a multiple-row INSERT statement
Using DESCRIBE INPUT to put parameter information in an SQLDA
Dynamic SQL for fixed-list SELECT statements
Declaring a cursor for the statement name
Preparing the statement
Opening the cursor
Fetching rows from the result table
Closing the cursor
Dynamic SQL for varying-list SELECT statements
What your application program must do
Preparing a varying-list SELECT statement
Executing a varying-list SELECT statement dynamically
Executing arbitrary statements with parameter markers
How bind options REOPT(ALWAYS) and REOPT(ONCE) affect dynamic SQL
Using dynamic SQL in COBOL
Running multiple stored procedures concurrently
Multiple instances of a stored procedure
Accessing non-DB2 resources
Testing a stored procedure
Debugging the stored procedure as a stand-alone program on a workstation
Debugging with the Debug Tool and IBM VisualAge COBOL
Debugging an SQL procedure or C language stored procedure with the Debug Tool and C/C++ Productivity Tools for z/OS
Debugging with Debug Tool for z/OS interactively and in batch mode
Using the MSGFILE run-time option
Using driver applications
Using SQL INSERT statements
Obtaining PLAN_TABLE information from EXPLAIN
Creating PLAN_TABLE
Populating and maintaining a plan table
Reordering rows from a plan table
Asking questions about data access
Is access through an index? (ACCESSTYPE is I, I1, N or MX)
Is access through more than one index? (ACCESSTYPE=M)
How many columns of the index are used in matching? (MATCHCOLS=n)
Is the query satisfied using only the index? (INDEXONLY=Y)
Is direct row access possible? (PRIMARY_ACCESSTYPE = D)
Is a view or nested table expression materialized?
Was a scan limited to certain partitions? (PAGE_RANGE=Y)
What kind of prefetching is expected? (PREFETCH = L, S, D, or blank)
Is data accessed or processed in parallel? (PARALLELISM_MODE is I, C, or X)
Are sorts performed?
Is a subquery transformed into a join?
When are aggregate functions evaluated? (COLUMN_FN_EVAL)
How many index screening columns are used?
Is a complex trigger WHEN clause used? (QBLOCKTYPE=TRIGGR)
Interpreting access to a single table
Table space scans (ACCESSTYPE=R PREFETCH=S)
Index access paths
UPDATE using an index
Interpreting access to two or more tables (join)
Definitions and examples of join operations
Nested loop join (METHOD=1)
Merge scan join (METHOD=2)
Hybrid join (METHOD=4)
Star join (JOIN_TYPE=S)
Interpreting data prefetch
Sequential prefetch (PREFETCH=S)
Dynamic prefetch (PREFETCH=D)
List prefetch (PREFETCH=L)
Sequential detection at execution time
Determining sort activity
Sorts of data
Sorts of RIDs
The effect of sorts on OPEN CURSOR
Processing for views and nested table expressions
Merge
Materialization
Using EXPLAIN to determine when materialization occurs
Using EXPLAIN to determine UNION activity and query rewrite
Performance of merge versus materialization
Estimating a statement's cost
Creating a statement table
Populating and maintaining a statement table
Retrieving rows from a statement table
The implications of cost categories
Chapter 29. Programming for the Interactive System Productivity Facility . . . . . . 857
Using ISPF and the DSN command processor
Invoking a single SQL program through ISPF and DSN
Invoking multiple SQL programs through ISPF and DSN
Invoking multiple SQL programs through ISPF and CAF
Chapter 31. Programming for the Resource Recovery Services attachment facility . . 893
RRSAF capabilities and requirements
RRSAF capabilities
RRSAF requirements
How to use RRSAF
Summary of connection functions
Implicit connections
Accessing the RRSAF language interface
General properties of RRSAF connections
Summary of RRSAF behavior
RRSAF function descriptions
Register conventions
Parameter conventions for function calls
IDENTIFY: Syntax and usage
SWITCH TO: Syntax and usage
SIGNON: Syntax and usage
AUTH SIGNON: Syntax and usage
CONTEXT SIGNON: Syntax and usage
SET_ID: Syntax and usage
SET_CLIENT_ID: Syntax and usage
CREATE THREAD: Syntax and usage
TERMINATE THREAD: Syntax and usage
TERMINATE IDENTIFY: Syntax and usage
TRANSLATE: Syntax and usage
RRSAF connection examples
Example of a single task
Example of multiple tasks
Example of calling SIGNON to reuse a DB2 thread
Example of switching DB2 threads between tasks
RRSAF return codes and reason codes
Program examples for RRSAF
Sample JCL for using RRSAF
Loading and deleting the RRSAF language interface
Using dummy entry point DSNHLI for RRSAF
Connecting to DB2 for RRSAF
| | | # | # # | # # # # # # #
. . . . . . . . . 977
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 977 977 978 979 980
Contents
xv
. . . . . . . . . . . 1019
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020 . 1024 . 1026
xvi
Reserved words . . . 1109
Appendix I. Program preparation options for remote packages . . . 1123
Appendix J. DB2-supplied stored procedures . . . 1127
WLM environment refresh stored procedure (WLM_REFRESH)
Environment for WLM_REFRESH
Authorization required for WLM_REFRESH
WLM_REFRESH syntax diagram
WLM_REFRESH option descriptions
Example of WLM_REFRESH invocation
The CICS transaction invocation stored procedure (DSNACICS)
Environment for DSNACICS
Authorization required for DSNACICS
DSNACICS syntax diagram
DSNACICS option descriptions
DSNACICX user exit routine
Example of DSNACICS invocation
DSNACICS output
DSNACICS restrictions
DSNACICS debugging
IMS transactions stored procedure (DSNAIMS)
Environment for DSNAIMS
Authorization required for DSNAIMS
DSNAIMS syntax diagram
DSNAIMS option descriptions
Examples of DSNAIMS invocation
Connecting to multiple IMS subsystems with DSNAIMS
The DB2 EXPLAIN stored procedure
Environment
Authorization required
DSNAEXP syntax diagram
DSNAEXP option descriptions
Example of DSNAEXP invocation
DSNAEXP output
Deprecated: Store an XML document from an MQ message queue in DB2 tables (DXXMQINSERT)
Environment for DXXMQINSERT
Authorization required for DXXMQINSERT
DXXMQINSERT syntax diagram
DXXMQINSERT option descriptions
Example of DXXMQINSERT invocation
DXXMQINSERT output
Deprecated: Store an XML document from an MQ message queue in DB2 tables (DXXMQSHRED)
Environment for DXXMQSHRED
Authorization required for DXXMQSHRED
DXXMQSHRED syntax diagram
DXXMQSHRED option descriptions
Example of DXXMQSHRED invocation
DXXMQSHRED output
Deprecated: Store a large XML document from an MQ message queue in DB2 tables (DXXMQINSERTCLOB)
Environment for DXXMQINSERTCLOB
Authorization required for DXXMQINSERTCLOB
DXXMQINSERTCLOB syntax diagram
DXXMQINSERTCLOB option descriptions
Example of DXXMQINSERTCLOB invocation
DXXMQINSERTCLOB output
Deprecated: Store a large XML document from an MQ message queue in DB2 tables (DXXMQSHREDCLOB)
Environment for DXXMQSHREDCLOB
Authorization required for DXXMQSHREDCLOB
DXXMQSHREDCLOB syntax diagram
DXXMQSHREDCLOB option descriptions
Example of DXXMQSHREDCLOB invocation
DXXMQSHREDCLOB output
Deprecated: Store XML documents from an MQ message queue in DB2 tables (DXXMQINSERTALL)
Environment for DXXMQINSERTALL
Authorization required for DXXMQINSERTALL
DXXMQINSERTALL syntax diagram
DXXMQINSERTALL option descriptions
Example of DXXMQINSERTALL invocation
DXXMQINSERTALL output
Deprecated: Store XML documents from an MQ message queue in DB2 tables (DXXMQSHREDALL)
Environment for DXXMQSHREDALL
Authorization required for DXXMQSHREDALL
DXXMQSHREDALL syntax diagram
DXXMQSHREDALL option descriptions
Example of DXXMQSHREDALL invocation
DXXMQSHREDALL output
Deprecated: Store large XML documents from an MQ message queue in DB2 tables (DXXMQSHREDALLCLOB)
Environment for DXXMQSHREDALLCLOB
Authorization required for DXXMQSHREDALLCLOB
DXXMQSHREDALLCLOB syntax diagram
DXXMQSHREDALLCLOB option descriptions
Example of DXXMQSHREDALLCLOB invocation
DXXMQSHREDALLCLOB output
Deprecated: Store large XML documents from an MQ message queue in DB2 tables (DXXMQINSERTALLCLOB)
Environment for DXXMQINSERTALLCLOB
Authorization required for DXXMQINSERTALLCLOB
DXXMQINSERTALLCLOB syntax diagram
DXXMQINSERTALLCLOB option descriptions
Example of DXXMQINSERTALLCLOB invocation
DXXMQINSERTALLCLOB output
Deprecated: Send XML documents to an MQ message queue (DXXMQGEN)
Environment for DXXMQGEN
Authorization required for DXXMQGEN
DXXMQGEN syntax diagram
DXXMQGEN option descriptions
Example of DXXMQGEN invocation
DXXMQGEN output
Deprecated: Send XML documents to an MQ message queue (DXXMQRETRIEVE)
Environment for DXXMQRETRIEVE
Authorization required for DXXMQRETRIEVE
DXXMQRETRIEVE syntax diagram
DXXMQRETRIEVE option descriptions
Example of DXXMQRETRIEVE invocation
DXXMQRETRIEVE output
Deprecated: Send large XML documents to an MQ message queue (DXXMQGENCLOB)
Environment for DXXMQGENCLOB
Authorization required for DXXMQGENCLOB
DXXMQGENCLOB syntax diagram
DXXMQGENCLOB option descriptions
Example of DXXMQGENCLOB invocation
DXXMQGENCLOB output
Deprecated: Send XML documents to an MQ message queue (DXXMQRETRIEVECLOB)
Environment for DXXMQRETRIEVECLOB
Authorization required for DXXMQRETRIEVECLOB
DXXMQRETRIEVECLOB syntax diagram
Important: In this version of DB2 UDB for z/OS, the DB2 Utilities Suite is available as an optional product. You must separately order and purchase a license to such utilities, and discussion of those utility functions in this publication is not intended to otherwise imply that you have a license to them. See Part 1 of DB2 Utility Guide and Reference for packaging details.

The DB2 Utilities Suite is designed to work with the DFSORT program, which you are licensed to use in support of the DB2 utilities even if you do not otherwise license DFSORT for general use. If your primary sort product is not DFSORT, consider the following informational APARs mandatory reading:
v II14047/II14213: USE OF DFSORT BY DB2 UTILITIES
v II13495: HOW DFSORT TAKES ADVANTAGE OF 64-BIT REAL ARCHITECTURE
These informational APARs are periodically updated.
OMEGAMON
  Refers to any of the following products:
  v IBM Tivoli OMEGAMON XE for DB2 Performance Expert on z/OS
  v IBM Tivoli OMEGAMON XE for DB2 Performance Monitor on z/OS
  v IBM DB2 Performance Expert for Multiplatforms and Workgroups
  v IBM DB2 Buffer Pool Analyzer for z/OS
C, C++, and C language
  Represent the C or C++ programming language.
CICS
  Represents CICS Transaction Server for z/OS or CICS Transaction Server for OS/390.
IMS
  Represents the IMS Database Manager or IMS Transaction Manager.
MVS
  Represents the MVS element of the z/OS operating system, which is equivalent to the Base Control Program (BCP) component of the z/OS operating system.
RACF
  Represents the functions that are provided by the RACF component of the z/OS Security Server.
If an optional item appears above the main path, that item has no effect on the execution of the statement and is used only for readability.
optional_item required_item
v If you can choose from two or more items, they appear vertically, in a stack. If you must choose one of the items, one item of the stack appears on the main path.
required_item required_choice1 required_choice2
If choosing one of the items is optional, the entire stack appears below the main path.
required_item optional_choice1 optional_choice2
If one of the items is the default, it appears above the main path and the remaining choices are shown below.
default_choice required_item optional_choice optional_choice
v An arrow returning to the left, above the main line, indicates an item that can be repeated.
required_item
repeatable_item
If the repeat arrow contains a comma, you must separate repeated items with a comma.
, required_item repeatable_item
A repeat arrow above a stack indicates that you can repeat the items in the stack.
v Keywords appear in uppercase (for example, FROM). They must be spelled exactly as shown. Variables appear in all lowercase letters (for example, column-name). They represent user-supplied names or values.
v If punctuation marks, parentheses, arithmetic operators, or other such symbols are shown, you must enter them as part of the syntax.
Accessibility
Accessibility features help a user who has a physical disability, such as restricted mobility or limited vision, to use software products. The major accessibility features in z/OS products, including DB2 UDB for z/OS, enable users to:
v Use assistive technologies such as screen reader and screen magnifier software
v Operate specific or equivalent features by using only a keyboard
v Customize display attributes such as color, contrast, and font size

Assistive technology products, such as screen readers, function with the DB2 UDB for z/OS user interfaces. Consult the documentation for the assistive technology products for specific information when you use assistive technology to access these interfaces.

Online documentation for Version 8 of DB2 UDB for z/OS is available in the Information management software for z/OS solutions information center, which is an accessible format when used with assistive technologies such as screen reader
or screen magnifier software. The Information management software for z/OS solutions information center is available at the following Web site: http://publib.boulder.ibm.com/infocenter/dzichelp
INSERT statement. This chapter also includes information about how bind option REOPT(ONCE) affects dynamic SQL statements.

Chapter 25, "Using stored procedures for client/server processing," on page 629 describes how to invoke DSNTPSMP (the SQL Procedure Processor that prepares SQL procedures for execution) with the SQL CALL statement. This chapter also describes new SQL procedure statements and describes how to run multiple instances of the same stored procedure at the same time.

Chapter 31, "Programming for the Resource Recovery Services attachment facility," on page 893 contains information about using implicit connections to DB2 when applications include SQL statements.

Chapter 33, "WebSphere MQ with DB2," on page 941 is a new chapter that describes how to use DB2 WebSphere MQ functions in SQL statements to combine DB2 database access with WebSphere MQ message handling.

Appendix E, "Recursive common table expression examples," on page 1097 is a new appendix that includes examples of using common table expressions to create recursive SQL in a bill of materials application.
Chapter 2. Working with tables and modifying data . . . . . . . . . . . . . . . . . . . . . 19 Working with tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 Creating your own tables: CREATE TABLE . . . . . . . . . . . . . . . . . . . . . . . . 19 Identifying defaults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 Creating work tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 Creating a new department table . . . . . . . . . . . . . . . . . . . . . . . . . . 20 Creating a new employee table . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Working with temporary tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Working with created temporary tables . . . . . . . . . . . . . . . . . . . . . . . . 22 Working with declared temporary tables. . . . . . . . . . . . . . . . . . . . . . . . 23 Dropping tables: DROP TABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Working with views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Defining a view: CREATE VIEW . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Changing data through a view . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 Dropping views: DROP VIEW . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 Modifying DB2 data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 Inserting rows: INSERT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 Inserting a single row . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 Inserting rows into a table from another table . . . . . . . . . . . . . . . . . . . . . . 29 Other ways to insert data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 Inserting data into a ROWID column . . . . . . . . . . . . . . . . . . . . . . . . . 30 Inserting data into an identity column . . . . . . . . . . . . . . . . . . . . . . . . 30 Selecting values as you insert: SELECT FROM INSERT . . . . . . . . . . . . . . . . . . . . 31 Result table of the INSERT operation . . . . . . . . . . . . . . . . . . . . . . . . . 32 Selecting values when you insert a single row . . . . . . . . . . . . . . . . . . . . . . 32 Selecting values when you insert data into a view . . . . . . . . . . . . . . . . . . . . 33 Selecting values when you insert multiple rows . . . . . . . . . . . . . . . . . . . . . 33
Result table of the cursor when you insert multiple rows
What happens if an error occurs
Updating current values: UPDATE
Deleting rows: DELETE
Deleting every row in a table
Chapter 3. Joining data from more than one table
  Inner join
  Full outer join
  Left outer join
  Right outer join
  SQL rules for statements containing join operations
  Using more than one join in an SQL statement
  Using nested table expressions and user-defined table functions in joins
  Using correlated references in table specifications in joins
Chapter 4. Using subqueries
  Conceptual overview
  Correlated and uncorrelated subqueries
  Subqueries and predicates
  The subquery result table
  Tables in subqueries of UPDATE, DELETE, and INSERT statements
  How to code a subquery
  Basic predicate
  Quantified predicate: ALL, ANY, or SOME
  Using the ALL predicate
  Using the ANY or SOME predicate
  IN keyword
  EXISTS keyword
  Using correlated subqueries
  An example of a correlated subquery
  Using correlation names in references
  Using correlated subqueries in an UPDATE statement
  Using correlated subqueries in a DELETE statement
  Using tables with no referential constraints
  Using a single table
  Using tables with referential constraints
Chapter 5. Using SPUFI to execute SQL from your workstation . . . . . . . . . . . . . . . . . 59 Allocating an input data set and using SPUFI . . . . . . . . . . . . . . . . . . . . . . . . 59 Changing SPUFI defaults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 # Changing SPUFI defaults - panel 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Entering SQL statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 Using the ISPF editor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 Retrieving Unicode UTF-16 graphic data . . . . . . . . . . . . . . . . . . . . . . . . 67 # Entering comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 Setting the SQL terminator character . . . . . . . . . . . . . . . . . . . . . . . . . . 67 Controlling toleration of warnings . . . . . . . . . . . . . . . . . . . . . . . . . . 68 # Processing SQL statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 When SQL statements exceed resource limit thresholds . . . . . . . . . . . . . . . . . . . . . 68 Browsing the output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 Format of SELECT statement results . . . . . . . . . . . . . . . . . . . . . . . . . . 70 Content of the messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Result tables
The data retrieved through SQL is always in the form of a table, which is called a result table. Like the tables from which you retrieve the data, a result table has rows and columns. A program fetches this data one row at a time. Example: SELECT statement: The following SELECT statement retrieves the last name, first name, and phone number of employees in department D11 from the sample employee table:
SELECT LASTNAME, FIRSTNME, PHONENO
  FROM DSN8810.EMP
  WHERE WORKDEPT = 'D11'
  ORDER BY LASTNAME;
Data types
When you create a DB2 table, you define each column to have a specific data type. The data type can be a built-in data type or a distinct type. This section discusses built-in data types. For information about distinct types, see Chapter 16, Creating and using distinct types, on page 367. The data type of a column determines what you can and cannot do with the column. When you perform operations on columns, the data must be compatible with the data type of the referenced column. For example, you cannot insert character data, like a last name, into a column whose data type is numeric. Similarly, you cannot compare columns containing incompatible data types. To better understand the concepts that are presented in this chapter, you must understand the data types of the columns to which an example refers. As shown in Figure 1, built-in data types have four general categories: datetime, string, numeric, and row identifier (ROWID).
Figure 1. DB2 built-in data types. The figure groups the built-in data types as follows:
v Datetime: DATE, TIME, and TIMESTAMP
v String: character strings (fixed-length CHAR, varying-length VARCHAR and CLOB), graphic strings (fixed-length GRAPHIC, varying-length VARGRAPHIC and DBCLOB), and binary strings (varying-length BLOB)
v Signed numeric: exact numbers (16-bit SMALLINT and 32-bit INTEGER binary integers, and packed DECIMAL) and approximate floating-point numbers
v Row identifier: ROWID
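For example, a table definition assigns one of these built-in data types to each column. The following sketch is not one of the sample tables; the table and column names are hypothetical and serve only to illustrate several of the types:

CREATE TABLE MYDEPT
  (DEPTNO    CHAR(3)       NOT NULL,
   DEPTNAME  VARCHAR(36)   NOT NULL,
   MGRNO     CHAR(6),
   BUDGET    DECIMAL(9,2),
   HEADCOUNT SMALLINT,
   REORG_DT  DATE);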
For more detailed information about each data type, see Chapter 2 of DB2 SQL Reference. Table 1 on page 5 shows whether operands of any two data types are compatible, Y (Yes), or incompatible, N (No). A number in the table, either as a superscript of Y or N or as a value in a column, indicates a note at the bottom of the table.
Table 1. Compatibility of data types for assignments and comparisons. Y indicates that the data types are compatible. N indicates no compatibility. For any number in a column, read the corresponding note at the bottom of the table.

Operands          Binary   Decimal  Floating  Character  Graphic  Binary  Date  Time  Time-  Row  Distinct
                  integer  number   point     string     string   string              stamp  ID   type
Binary integer    Y        Y        Y         N          N        N       N     N     N      N    2
Decimal number    Y        Y        Y         N          N        N       N     N     N      N    2
Floating point    Y        Y        Y         N          N        N       N     N     N      N    2
Character string  N        N        N         Y          Y 4,5    N 3     1     1     1      N    2
Graphic string    N        N        N         Y 4,5      Y        N       1,4   1,4   1,4    N    2
Binary string     N        N        N         N 3        N        Y       N     N     N      N    2
Date              N        N        N         1          1,4      N       Y     N     N      N    2
Time              N        N        N         1          1,4      N       N     Y     N      N    2
Timestamp         N        N        N         1          1,4      N       N     N     Y      N    2
Row ID            N        N        N         N          N        N       N     N     N      Y    2
Distinct type     2        2        2         2          2        2       2     2     2      2    Y 2

Notes:
1. The compatibility of datetime values is limited to assignment and comparison: v Datetime values can be assigned to string columns and to string variables, as explained in Chapter 2 of DB2 SQL Reference. v A valid string representation of a date can be assigned to a date column or compared to a date. v A valid string representation of a time can be assigned to a time column or compared to a time. v A valid string representation of a timestamp can be assigned to a timestamp column or compared to a timestamp. 2. A value with a distinct type is comparable only to a value that is defined with the same distinct type. In general, DB2 supports assignments between a distinct type value and its source data type. For additional information, see Chapter 2 of DB2 SQL Reference. 3. All character strings, even those with subtype FOR BIT DATA, are not compatible with binary strings. 4. On assignment and comparison from Graphic to Character, the resulting length in bytes is 3 * (LENGTH(graphic string)), depending on the CCSIDs. 5. Character strings with subtype FOR BIT DATA are not compatible with Graphic Data.
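For example, note 1 permits a valid string representation of a date to be compared with a DATE column, as in the following query against the sample employee table (the date value here is illustrative):

SELECT EMPNO, HIREDATE
  FROM DSN8810.EMP
  WHERE HIREDATE >= '1990-01-01';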
DEPTNO  DEPTNAME                       MGRNO   ADMRDEPT  LOCATION
======  =============================  ======  ========  ========
A00     SPIFFY COMPUTER SERVICES DIV.  000010  A00       --------
B01     PLANNING                       000020  A00       --------
C01     INFORMATION CENTER             000030  A00       --------
D01     DEVELOPMENT CENTER             ------  A00       --------
D11     MANUFACTURING SYSTEMS          000060  D01       --------
D21     ADMINISTRATION SYSTEMS         000070  D01       --------
E01     SUPPORT SERVICES               000050  A00       --------
E11     OPERATIONS                     000090  E01       --------
E21     SOFTWARE SUPPORT               000100  E01       --------
F22     BRANCH OFFICE F2               ------  E01       --------
G22     BRANCH OFFICE G2               ------  E01       --------
H22     BRANCH OFFICE H2               ------  E01       --------
I22     BRANCH OFFICE I2               ------  E01       --------
J22     BRANCH OFFICE J2               ------  E01       --------
Because the example does not specify a WHERE clause, the statement retrieves data from all rows. The dashes for MGRNO and LOCATION in the result table indicate null values. SELECT * is recommended mostly for use with dynamic SQL and view definitions. You can use SELECT * in static SQL, but this is not recommended; if you add a column to the table to which SELECT * refers, the program might reference columns for which you have not defined receiving host variables. For more information about host variables, see Accessing data using host variables, variable arrays, and structures on page 79. If you list the column names in a static SELECT statement instead of using an asterisk, you can avoid the problem created by using SELECT *. You can also see the relationship between the receiving host variables and the columns in the result table.
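For example, instead of SELECT *, a static statement can name only the columns that the program actually uses, as in this sketch against the sample department table:

SELECT DEPTNO, DEPTNAME, MGRNO
  FROM DSN8810.DEPT;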
With a single SELECT statement, you can select data from one column or as many as 750 columns.
Derived columns in a result table, such as (SALARY + BONUS + COMM), do not have names. You can use the AS clause to give a name to an unnamed column of the result table. For information about using the AS clause, see Naming result columns: AS. To order the rows in a result table by the values in a derived column, specify a name for the column by using the AS clause, and specify that name in the ORDER BY clause. For information about using the ORDER BY clause, see Putting the rows in order: ORDER BY on page 9.
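For example, the following query (a sketch using the sample employee table) names the derived compensation column with the AS clause and then orders the rows by that name:

SELECT EMPNO, SALARY + BONUS + COMM AS TOTAL_SAL
  FROM DSN8810.EMP
  ORDER BY TOTAL_SAL;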
Example: CREATE VIEW with AS clause: You can specify result column names in the select-clause of a CREATE VIEW statement. You do not need to supply the
column list of CREATE VIEW, because the AS keyword names the derived column. The columns in the view EMP_SAL are EMPNO and TOTAL_SAL.
CREATE VIEW EMP_SAL AS SELECT EMPNO,SALARY+BONUS+COMM AS TOTAL_SAL FROM DSN8810.EMP;
For more information about using the CREATE VIEW statement, see Defining a view: CREATE VIEW on page 25. Example: UNION ALL with AS clause: You can use the AS clause to give the same name to corresponding columns of tables in a union. The third result column from the union of the two tables has the name TOTAL_VALUE, even though it contains data derived from columns with different names:
SELECT 'On hand' AS STATUS, PARTNO, QOH * COST AS TOTAL_VALUE
  FROM PART_ON_HAND
UNION ALL
SELECT 'Ordered' AS STATUS, PARTNO, QORDER * COST AS TOTAL_VALUE
  FROM ORDER_PART
ORDER BY PARTNO, TOTAL_VALUE;
The column STATUS and the derived column TOTAL_VALUE have the same name in the first and second result tables, and are combined in the union of the two result tables, which is similar to the following partial output:
STATUS   PARTNO  TOTAL_VALUE
=======  ======  ===========
On hand  00557        345.60
Ordered  00557        150.50
  .
  .
  .
For information about unions, see Merging lists of values: UNION on page 12. Example: GROUP BY derived column: You can use the AS clause in a FROM clause to assign a name to a derived column that you want to refer to in a GROUP BY clause. This SQL statement names HIREYEAR in the nested table expression, which lets you use the name of that result column in the GROUP BY clause:
SELECT HIREYEAR, AVG(SALARY) FROM (SELECT YEAR(HIREDATE) AS HIREYEAR, SALARY FROM DSN8810.EMP) AS NEWEMP GROUP BY HIREYEAR;
You cannot use GROUP BY with a name that is defined with an AS clause for the derived column YEAR(HIREDATE) in the outer SELECT, because that name does not exist when the GROUP BY runs. However, you can use GROUP BY with a name that is defined with an AS clause in the nested table expression, because the nested table expression runs before the GROUP BY that references the name. For more information about using the GROUP BY clause, see Summarizing group values: GROUP BY on page 11.
If a search condition contains a column of a distinct type, the value to which that column is compared must be of the same distinct type, or you must cast the value to the distinct type. See Chapter 16, "Creating and using distinct types," on page 367 for more information about distinct types. Table 2 lists the types of comparison, the comparison operators, and an example of each type of comparison that you can use in a predicate in a WHERE clause.
Table 2. Comparison operators used in conditions

Type of comparison                  Comparison operator  Example
Equal to                            =                    DEPTNO = 'X01'
Not equal to                        <>                   DEPTNO <> 'X01'
Less than                           <                    AVG(SALARY) < 30000
Less than or equal to               <=                   AGE <= 25
Not less than                       >=                   AGE >= 21
Greater than                        >                    SALARY > 2000
Greater than or equal to            >=                   SALARY >= 5000
Not greater than                    <=                   SALARY <= 5000
Equal to null                       IS NULL              PHONENO IS NULL
Not equal to or one value is null   IS DISTINCT FROM     PHONENO IS DISTINCT FROM :PHONEHV
Similar to another value            LIKE                 NAME LIKE '%SMITH%' or STATUS LIKE 'N_'
At least one of two conditions      OR                   HIREDATE < '1965-01-01' OR SALARY < 16000
Both of two conditions              AND                  HIREDATE < '1965-01-01' AND SALARY < 16000
Between two values                  BETWEEN              SALARY BETWEEN 20000 AND 40000
Equals a value in a set             IN (X, Y, Z)         DEPTNO IN ('B01', 'C01', 'D01')

Note: SALARY BETWEEN 20000 AND 40000 is equivalent to SALARY >= 20000 AND SALARY <= 40000.

For more information about predicates, see Chapter 2 of DB2 SQL Reference.
You can also search for rows that do not satisfy one of the preceding conditions by using the NOT keyword before the specified condition.

You can search for rows that do not satisfy the IS DISTINCT FROM predicate by using either of the following predicates:
v value IS NOT DISTINCT FROM value
v NOT(value IS DISTINCT FROM value)
Both of these forms of the predicate are true when one value is equal to the other value or when both values are null.
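For example, the following predicate (a sketch that assumes a host variable named :PHONEHV, as in Table 2) is true when PHONENO and :PHONEHV are equal or are both null:

SELECT EMPNO, PHONENO
  FROM DSN8810.EMP
  WHERE PHONENO IS NOT DISTINCT FROM :PHONEHV;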
Example: ORDER BY clause with an expression as the sort key: The following subselect retrieves the employee numbers, salaries, commissions, and total compensation (salary plus commission) for employees with a total compensation greater than 40000. Order the results by total compensation:
SELECT EMPNO, SALARY, COMM, SALARY+COMM AS "TOTAL COMP" FROM DSN8810.EMP WHERE SALARY+COMM > 40000 ORDER BY SALARY+COMM;
If a column that you specify in the GROUP BY clause contains null values, DB2 considers those null values to be equal. Thus, all nulls form a single group. When it is used, the GROUP BY clause follows the FROM clause and any WHERE clause, and precedes the ORDER BY clause. You can group the rows by the values of more than one column. Example: GROUP BY clause using more than one column: The following statement finds the average salary for men and women in departments A00 and C01:
SELECT WORKDEPT, SEX, AVG(SALARY) AS AVG_SALARY
  FROM DSN8810.EMP
  WHERE WORKDEPT IN ('A00', 'C01')
  GROUP BY WORKDEPT, SEX;
DB2 groups the rows first by department number and then (within each department) by sex before it derives the average SALARY value for each group.

You can also group the rows by the results of an expression. Example: GROUP BY clause using an expression: The following statement groups departments by their leading characters, and lists the lowest and highest education level for each group:
SELECT SUBSTR(WORKDEPT,1,1), MIN(EDLEVEL), MAX(EDLEVEL) FROM DSN8810.EMP GROUP BY SUBSTR(WORKDEPT,1,1);
Example: HAVING clause: The following SQL statement includes a HAVING clause that specifies a search condition for groups of work departments in the employee table:
SELECT WORKDEPT, AVG(SALARY) AS AVG_SALARY FROM DSN8810.EMP GROUP BY WORKDEPT HAVING COUNT(*) > 1 ORDER BY WORKDEPT;
Compare the preceding example with the second example shown in Summarizing group values: GROUP BY on page 11. The clause, HAVING COUNT(*) > 1, ensures that only departments with more than one member are displayed. In this case, departments B01 and E01 do not display because the HAVING clause tests a property of the group. Example: HAVING clause used with a GROUP BY clause: Use the HAVING clause to retrieve the average salary and minimum education level of women in each department for which all female employees have an education level greater than or equal to 16. Assuming you only want results from departments A00 and D11, the following SQL statement tests the group property, MIN(EDLEVEL):
SELECT WORKDEPT, AVG(SALARY) AS AVG_SALARY, MIN(EDLEVEL) AS MIN_EDLEVEL
  FROM DSN8810.EMP
  WHERE SEX = 'F' AND WORKDEPT IN ('A00', 'D11')
  GROUP BY WORKDEPT
  HAVING MIN(EDLEVEL) >= 16;
When you specify both GROUP BY and HAVING, the HAVING clause must follow the GROUP BY clause. A function in a HAVING clause can include DISTINCT if you have not used DISTINCT anywhere else in the same SELECT statement. You can also connect multiple predicates in a HAVING clause with AND and OR, and you can use NOT for any predicate of a search condition.
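For example, a HAVING clause such as the one in the following sketch (a hypothetical variation of the earlier department example) combines two group conditions with AND:

SELECT WORKDEPT, AVG(SALARY) AS AVG_SALARY
  FROM DSN8810.EMP
  GROUP BY WORKDEPT
  HAVING COUNT(*) > 1 AND MAX(SALARY) < 100000;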
Result columns of a UNION might be unnamed. When you use the UNION statement, the SQLNAME field of the SQLDA contains the unqualified name or label of the column, or a string of length zero if the name or label does not exist.
If you have an ORDER BY clause, it must appear after the last SELECT statement that is part of the union. In this example, the first column of the final result table determines the final order of the rows.
Unlike views and nested table expressions, which derive their result tables for each reference, all references to a common table expression in a given statement share the same result table. A common table expression can be used in the following situations:
v When you want to avoid creating a view (when general use of the view is not required and positioned updates or deletes are not used)
v When the desired result table is based on host variables
v When the same result table needs to be shared in a fullselect
v When the results need to be derived using recursion
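The discussion that follows refers to a common table expression named DTOTAL. A statement along the following lines (a sketch based on that description, assuming that total pay is the sum of salary and bonus) finds the department with the highest total pay:

WITH DTOTAL (deptno, totalpay) AS
  (SELECT deptno, SUM(salary + bonus)
     FROM DSN8810.EMP
     GROUP BY deptno)
SELECT deptno
  FROM DTOTAL
  WHERE totalpay = (SELECT MAX(totalpay)
                      FROM DTOTAL);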
The result table for the common table expression, DTOTAL, contains the department number and total pay for each department in the employee table. The fullselect in the previous example uses the result table for DTOTAL to find the department with the highest total pay. The result table for the entire statement looks similar to the following results:
DEPTNO
======
D11
CREATE VIEW RICH_DEPT (deptno) AS
  WITH DTOTAL (deptno, totalpay) AS
    (SELECT deptno, SUM(salary + bonus)
       FROM DSN8810.EMP
       GROUP BY deptno)
  SELECT deptno
    FROM DTOTAL
    WHERE totalpay > (SELECT AVG(totalpay) FROM DTOTAL);
The fullselect in the previous example uses the result table for DTOTAL to find the departments that have a greater than average total pay. The result table is saved as the RICH_DEPT view and looks similar to the following results:
DEPTNO
======
A00
D11
D21
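The statement that the next paragraph describes does not appear in this excerpt. A sketch of such a statement, assuming a vital_mgr table with an mgrno column, an se_count column name, and a 'SENIOR ENGINEER' job title (all assumptions used only for illustration), might look like the following:

INSERT INTO vital_mgr (mgrno)
  WITH VITALDEPT (deptno, se_count) AS
    (SELECT deptno, COUNT(*)
       FROM DSN8810.EMP
       WHERE job = 'SENIOR ENGINEER'
       GROUP BY deptno)
  SELECT d.mgrno
    FROM DSN8810.DEPT d, VITALDEPT s
    WHERE d.deptno = s.deptno
      AND s.se_count > (SELECT AVG(se_count) FROM VITALDEPT);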
The fullselect in the previous example uses the result table for VITALDEPT to find the manager's number for departments that have a greater-than-average number of senior engineers. The manager's number is then inserted into the vital_mgr table.
v The data types, lengths, and CCSIDs of the columns from the common table expression that are referenced in the iterative fullselect must match.
v The UNION statements must be UNION ALL.
v Outer joins must not be part of any recursion cycle.
v A subquery must not be part of any recursion cycle.
It is possible to introduce an infinite loop when developing a recursive common table expression. A recursive common table expression is expected to include a predicate that prevents an infinite loop. A warning is issued if one of the following is not found in the iterative fullselect of a recursive common table expression:
v An integer column that increments by a constant
v A predicate in the WHERE clause in the form of counter_column < constant or counter_column < :host variable
See Appendix E, Recursive common table expression examples, on page 1097 for examples of bill of materials applications that use recursive common table expressions.
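A minimal sketch of a recursive common table expression that observes these rules (the PARTLIST table and its columns are assumptions used only for illustration); the LVL column is the integer counter that increments by a constant, and the predicate PARENT.LVL < 5 prevents an infinite loop:

WITH RPL (LVL, PART, SUBPART) AS
  (SELECT 1, ROOT.PART, ROOT.SUBPART
     FROM PARTLIST ROOT
     WHERE ROOT.PART = '01'
   UNION ALL
   SELECT PARENT.LVL + 1, CHILD.PART, CHILD.SUBPART
     FROM RPL PARENT, PARTLIST CHILD
     WHERE PARENT.SUBPART = CHILD.PART
       AND PARENT.LVL < 5)
SELECT LVL, PART, SUBPART
  FROM RPL;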
v Use the VALUES INTO statement to return the value of an expression in a host variable.
EXEC SQL VALUES RAND(:hvrand) INTO :hvrandval;
v Select the expression from the DB2-provided EBCDIC table, named SYSIBM.SYSDUMMY1, which consists of one row.
EXEC SQL SELECT RAND(:hvrand) INTO :hvrandval FROM SYSIBM.SYSDUMMY1;
  - The installation option for DECIMAL ARITHMETIC on panel DSNTIP4 is DEC31, D31.s, or 31; the installation option for USE FOR DYNAMICRULES on panel DSNTIP4 is YES; and the value of CURRENT PRECISION has not been set by the application.
  - The SQL statement has bind, define, or invoke behavior; the statement is in an application precompiled with option DEC(31); the installation option for USE FOR DYNAMICRULES on panel DSNTIP4 is NO; and the value of CURRENT PRECISION has not been set by the application. See Using DYNAMICRULES to specify behavior of dynamic SQL statements on page 502 for an explanation of bind, define, and invoke behavior.
v The operation is in an embedded (static) SQL statement that you precompiled with the DEC(31), DEC31, or D31.s option, or with the default for that option when the install option DECIMAL ARITHMETIC is DEC31 or 31. s is a number between one and nine and represents the minimum scale to be used for division operations. See Step 1: Process SQL statements on page 473 for information about precompiling and for a list of all precompiler options.
Recommendation: Choose DEC31 or D31.s to reduce the chance of overflow, or when dealing with a precision greater than 15 digits. s is a number between one and nine and represents the minimum scale to be used for division operations.
Avoiding decimal arithmetic errors: For static SQL statements, the simplest way to avoid a division error is to override DEC31 rules by specifying the precompiler option DEC(15). In some cases you can avoid a division error by specifying D31.s. This specification reduces the probability of errors for statements that are embedded in the program. s is a number between one and nine and represents the minimum scale to be used for division operations.
If the dynamic SQL statements have bind, define, or invoke behavior and the value of the installation option for USE FOR DYNAMICRULES on panel DSNTIP4 is NO, you can use the precompiler option DEC(15), DEC15, or D15.s to override DEC31 rules. For a dynamic statement, or for a single static statement, use the scalar function DECIMAL to specify values of the precision and scale for a result that causes no errors. Before you execute a dynamic statement, set the value of special register CURRENT PRECISION to DEC15 or D15.s.
Even if you use DEC31 rules, multiplication operations can sometimes cause overflow because the precision of the product is greater than 31. To avoid overflow from multiplication of large numbers, use the MULTIPLY_ALT built-in function instead of the multiplication operator.
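As a hedged illustration of these techniques (the precision and scale values shown are examples only, not recommendations from this book), an application might set the special register before executing a dynamic statement, cast operands with the DECIMAL function, or replace the multiplication operator with MULTIPLY_ALT:

SET CURRENT PRECISION = 'D15.5';

SELECT DECIMAL(SALARY, 15, 2) / DECIMAL(12, 15, 2)
  FROM DSN8810.EMP;

SELECT MULTIPLY_ALT(SALARY, BONUS) AS SALARY_TIMES_BONUS
  FROM DSN8810.EMP;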
In this query, the predicate GRANTEETYPE = ' ' (a blank value) selects authorization IDs. If your DB2 subsystem uses an exit routine for access control authorization, you cannot rely on catalog queries to tell you the tables that you can access. When such an exit routine is installed, both RACF and DB2 control table access.
If you display column information about a table that includes LOB or ROWID columns, the LENGTH field for those columns contains the number of bytes that those columns occupy in the base table, rather than the length of the LOB or ROWID data. To determine the maximum length of data for a LOB or ROWID column, include the LENGTH2 column in your query, as in the following example:
SELECT NAME, COLTYPE, LENGTH, LENGTH2
  FROM SYSIBM.SYSCOLUMNS
  WHERE TBNAME = 'EMP_PHOTO_RESUME'
    AND TBCREATOR = 'DSN8810';
The preceding CREATE statement has the following elements:
v CREATE TABLE, which names the table PRODUCT.
v A list of the columns that make up the table. For each column, specify the following information:
  - The column's name (for example, SERIAL).
  - The data type and length attribute (for example, CHAR(8)). For more information about data types, see Data types on page 4.
  - Optionally, a default value. See Identifying defaults.
  - Optionally, a referential constraint or check constraint. See Using referential constraints on page 261 and Using check constraints on page 259.
You must separate each column description from the next with a comma, and enclose the entire list of column descriptions in parentheses.
Identifying defaults
If you want to constrain the input or identify the default of a column, you can use the following values:
v NOT NULL, when the column cannot contain null values.
v UNIQUE, when the value for each row must be unique, and the column cannot contain null values.
v DEFAULT, when the column has one of the following DB2-assigned defaults:
  - For numeric columns, zero is the default value.
  - For fixed-length strings, blank is the default value.
  - For variable-length strings, including LOB strings, the empty string (string of zero-length) is the default value.
  - For datetime columns, the current value of the associated special register is the default value.
v DEFAULT value, when you want to identify one of the following values as the default value:
  - A constant
  - NULL
  - USER, which specifies the value of the USER special register at the time that an INSERT statement assigns a default value to the column in the row that is being inserted
  - CURRENT SQLID, which specifies the value of the CURRENT SQLID special register at the time that an INSERT statement assigns a default value to the column in the row that is being inserted
  - The name of a cast function that casts a default value (of a built-in data type) to the distinct type of a column
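A minimal sketch of a CREATE TABLE statement that uses several of these clauses (the column list and the default value 'M01' are assumptions; the original PRODUCT definition is not reproduced in this excerpt):

CREATE TABLE PRODUCT
  (SERIAL      CHAR(8)      NOT NULL,
   DESCRIPTION VARCHAR(60)  NOT NULL,
   MFGCOST     DECIMAL(8,2) DEFAULT,
   MFGDEPT     CHAR(3)      DEFAULT 'M01',
   CURDATE     DATE         DEFAULT);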
If you want DEPTNO to be a primary key, as in the sample table, explicitly define the key. Use an ALTER TABLE statement, as in the following example:
ALTER TABLE YDEPT PRIMARY KEY(DEPTNO);
You can use an INSERT statement to copy the rows of the result table of a fullselect from one table to another. The following statement copies all of the rows from DSN8810.DEPT to your own YDEPT work table.
INSERT INTO YDEPT SELECT * FROM DSN8810.DEPT;
For information about using the INSERT statement, see Inserting rows: INSERT on page 27.
This statement also creates a referential constraint between the foreign key in YEMP (WORKDEPT) and the primary key in YDEPT (DEPTNO). It also restricts all phone numbers to unique numbers.
If you want to change a table definition after you create it, use the statement ALTER TABLE. If you want to change a table name after you create it, use the statement RENAME TABLE.
You can change a table definition by using the ALTER TABLE statement only in certain ways. For example, you can add and drop constraints on columns in a table. You can also change the data type of a column within character data types, within numeric data types, and within graphic data types. You can add a column to a table. However, you cannot drop a column from a table.
For more information about changing a table definition by using ALTER TABLE, see Part 2 (Volume 1) of DB2 Administration Guide. For other details about the ALTER TABLE and RENAME TABLE statements, see Chapter 5 of DB2 SQL Reference.
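For example, hedged sketches of these two statements (the PHONE_EXT column and the name YEMP2004 are assumptions used only for illustration) might look like the following:

ALTER TABLE YEMP
  ADD PHONE_EXT CHAR(4);

RENAME TABLE YEMP TO YEMP2004;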
Temporary tables are especially useful when you need to sort or query intermediate result tables that contain a large number of rows, but you want to store only a small subset of those rows permanently. Temporary tables can also return result sets from stored procedures. For more information, see Writing a stored procedure to return result sets to a DRDA client on page 650. The following sections provide more details on created temporary tables and declared temporary tables.
Example: You can also create this same definition by copying the definition of a base table using the LIKE clause:
CREATE GLOBAL TEMPORARY TABLE TEMPPROD LIKE PROD;
The SQL statements in the previous examples create identical definitions, even though table PROD contains two columns, DESCRIPTION and CURDATE, that are defined as NOT NULL WITH DEFAULT. Unlike the PROD sample table, the DESCRIPTION and CURDATE columns in the TEMPPROD table are defined as NOT NULL and do not have defaults, because created temporary tables do not support non-null default values. After you run one of the two CREATE statements, the definition of TEMPPROD exists, but no instances of the table exist. To drop the definition of TEMPPROD, you must run the following statement:
DROP TABLE TEMPPROD;
To create an instance of TEMPPROD, you must use TEMPPROD in an application. DB2 creates an instance of the table when TEMPPROD is specified in one of the following SQL statements:
v OPEN
v SELECT
v INSERT
v DELETE
An instance of a created temporary table exists at the current server until one of the following actions occurs:
v The application process ends.
v The remote server connection through which the instance was created terminates.
v The unit of work in which the instance was created completes.
When you run a ROLLBACK statement, DB2 deletes the instance of the created temporary table. When you run a COMMIT statement, DB2 deletes the instance
of the created temporary table unless a cursor for accessing the created temporary table is defined WITH HOLD and is open. Example: Suppose that you create a definition of TEMPPROD and then run an application that contains the following statements:
EXEC SQL DECLARE C1 CURSOR FOR SELECT * FROM TEMPPROD;
EXEC SQL INSERT INTO TEMPPROD SELECT * FROM PROD;
EXEC SQL OPEN C1;
 . . .
EXEC SQL COMMIT;
 . . .
EXEC SQL CLOSE C1;
When you run the INSERT statement, DB2 creates an instance of TEMPPROD and populates that instance with rows from table PROD. When the COMMIT statement is run, DB2 deletes all rows from TEMPPROD. However, assume that you change the declaration of cursor C1 to the following declaration:
EXEC SQL DECLARE C1 CURSOR WITH HOLD FOR SELECT * FROM TEMPPROD;
In this case, DB2 does not delete the contents of TEMPPROD until the application ends because C1, a cursor defined WITH HOLD, is open when the COMMIT statement is run. In either case, DB2 drops the instance of TEMPPROD when the application ends.
You can define a declared temporary table in any of the following ways:
v Specify all the columns in the table. Unlike columns of created temporary tables, columns of declared temporary tables can include the WITH DEFAULT clause.
v Use a LIKE clause to copy the definition of a base table, created temporary table, or view. If the base table or created temporary table that you copy has identity columns, you can specify that the corresponding columns in the declared temporary table are also identity columns. Do that by specifying the INCLUDING IDENTITY COLUMN ATTRIBUTES clause when you define the declared temporary table.
v Use a fullselect to choose specific columns from a base table, created temporary table, or view. If the base table, created temporary table, or view from which you select columns has identity columns, you can specify that the corresponding columns in the declared temporary table are also identity columns. Do that by specifying the INCLUDING IDENTITY COLUMN ATTRIBUTES clause when you define the declared temporary table.
  If you want the declared temporary table columns to inherit the defaults for columns of the table or view that is named in the fullselect, specify the INCLUDING COLUMN DEFAULTS clause. If you want the declared temporary table columns to have default values that correspond to their data types, specify the USING TYPE DEFAULTS clause.
Example: The following statement defines a declared temporary table called TEMPPROD by explicitly specifying the columns.
DECLARE GLOBAL TEMPORARY TABLE TEMPPROD
  (SERIAL      CHAR(8)      NOT NULL WITH DEFAULT '99999999',
   DESCRIPTION VARCHAR(60)  NOT NULL,
   PRODCOUNT   INTEGER      GENERATED ALWAYS AS IDENTITY,
   MFGCOST     DECIMAL(8,2),
   MFGDEPT     CHAR(3),
   MARKUP      SMALLINT,
   SALESDEPT   CHAR(3),
   CURDATE     DATE         NOT NULL);
Example: The following statement defines a declared temporary table called TEMPPROD by copying the definition of a base table. The base table has an identity column that the declared temporary table also uses as an identity column.
DECLARE GLOBAL TEMPORARY TABLE TEMPPROD LIKE BASEPROD INCLUDING IDENTITY COLUMN ATTRIBUTES;
Example: The following statement defines a declared temporary table called TEMPPROD by selecting columns from a view. The view has an identity column that the declared temporary table also uses as an identity column. The declared temporary table inherits its default column values from the default column values of a base table underlying the view.
DECLARE GLOBAL TEMPORARY TABLE TEMPPROD AS (SELECT * FROM PRODVIEW) DEFINITION ONLY INCLUDING IDENTITY COLUMN ATTRIBUTES INCLUDING COLUMN DEFAULTS;
After you run a DECLARE GLOBAL TEMPORARY TABLE statement, the definition of the declared temporary table exists as long as the application process runs. If you need to delete the definition before the application process completes, you can do that with the DROP TABLE statement. For example, to drop the definition of TEMPPROD, run the following statement:
DROP TABLE SESSION.TEMPPROD;
DB2 creates an empty instance of a declared temporary table when it runs the DECLARE GLOBAL TEMPORARY TABLE statement. You can populate the declared temporary table using INSERT statements, modify the table using searched or positioned UPDATE or DELETE statements, and query the table using SELECT statements. You can also create indexes on the declared temporary table. The ON COMMIT clause that you specify in the DECLARE GLOBAL TEMPORARY TABLE statement determines whether DB2 keeps or deletes all the
rows from the table when you run a COMMIT statement in an application with a declared temporary table. ON COMMIT DELETE ROWS, which is the default, causes all rows to be deleted from the table at a commit point, unless there is a held cursor open on the table at the commit point. ON COMMIT PRESERVE ROWS causes the rows to remain past the commit point. Example: Suppose that you run the following statement in an application program:
EXEC SQL DECLARE GLOBAL TEMPORARY TABLE TEMPPROD
  AS (SELECT * FROM BASEPROD)
  DEFINITION ONLY
  INCLUDING IDENTITY COLUMN ATTRIBUTES
  INCLUDING COLUMN DEFAULTS
  ON COMMIT PRESERVE ROWS;
EXEC SQL INSERT INTO SESSION.TEMPPROD SELECT * FROM BASEPROD;
 . . .
EXEC SQL COMMIT;
 . . .
When DB2 runs the preceding DECLARE GLOBAL TEMPORARY TABLE statement, DB2 creates an empty instance of TEMPPROD. The INSERT statement populates that instance with rows from table BASEPROD. The qualifier, SESSION, must be specified in any statement that references TEMPPROD. When DB2 executes the COMMIT statement, DB2 keeps all rows in TEMPPROD because TEMPPROD is defined with ON COMMIT PRESERVE ROWS. When the program ends, DB2 drops TEMPPROD.
Use the DROP TABLE statement with care: Dropping a table is NOT equivalent to deleting all its rows. When you drop a table, you lose more than its data and its definition. You lose all synonyms, views, indexes, and referential and check constraints associated with that table. You also lose all authorities granted on the table. For more information about the DROP statement, see Chapter 5 of DB2 SQL Reference.
CREATE VIEW VDEPTM AS SELECT DEPTNO, MGRNO, LASTNAME, ADMRDEPT FROM DSN8810.DEPT, DSN8810.EMP WHERE DSN8810.EMP.EMPNO = DSN8810.DEPT.MGRNO;
When a program accesses the data defined by a view, DB2 uses the view definition to return a set of rows the program can access with SQL statements. To see the departments administered by department D01 and the managers of those departments, run the following statement, which returns information from the VDEPTM view:
SELECT DEPTNO, LASTNAME
  FROM VDEPTM
  WHERE ADMRDEPT = 'D01';
When you create a view, you can reference the USER and CURRENT SQLID special registers in the CREATE VIEW statement. When referencing the view, DB2 uses the value of the USER or CURRENT SQLID that belongs to the user of the SQL statement (SELECT, UPDATE, INSERT, or DELETE) rather than the creator of the view. In other words, a reference to a special register in a view definition refers to its run-time value.
A column in a view might be based on a column in a base table that is an identity column. The column in the view is also an identity column, except under any of the following circumstances:
v The column appears more than once in the view.
v The view is based on a join of two or more tables.
v The view is based on the union of two or more tables.
v Any column in the view is derived from an expression that refers to an identity column.
You can use views to limit access to certain kinds of data, such as salary information. You can also use views for the following actions:
v Make a subset of a table's data available to an application. For example, a view based on the employee table might contain rows only for a particular department.
v Combine columns from two or more tables and make the combined data available to an application. By using a SELECT statement that matches values in one table with those in another table, you can create a view that presents data from both tables. However, you can only select data from this type of view. You cannot update, delete, or insert data using a view that joins two or more tables.
v Combine rows from two or more tables and make the combined data available to an application. By using two or more subselects that are connected by UNION or UNION ALL operators, you can create a view that presents data from several tables. However, you can only select data from this type of view. You cannot update, delete, or insert data using a view that contains UNION operations.
v Present computed data, and make the resulting data available to an application. You can compute such data using any function or operation that you can use in a SELECT statement.
v You must have the appropriate authorization to insert, update, or delete rows using the view.
v When you use a view to insert a row into a table, the view definition must specify all the columns in the base table that do not have a default value. The row being inserted must contain a value for each of those columns.
v Views that you can use to update data are subject to the same referential constraints and check constraints as the tables that you used to define the views.
v You can use the WITH CHECK OPTION clause of the CREATE VIEW statement to specify the constraint that every row that is inserted or updated through the view must conform to the definition of the view. You can select every row that is inserted or updated through a view that specifies WITH CHECK OPTION.
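A sketch of a view definition that uses this clause (the view name EMPD11 is an assumption) might look like the following:

CREATE VIEW EMPD11 AS
  SELECT EMPNO, FIRSTNME, LASTNAME, WORKDEPT
    FROM YEMP
    WHERE WORKDEPT = 'D11'
  WITH CHECK OPTION;

With this definition, an insert or update through EMPD11 that sets WORKDEPT to a value other than 'D11' fails.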
The values that you can insert into a ROWID column or an identity column depend on whether the column is defined with GENERATED ALWAYS or GENERATED BY DEFAULT. See Inserting data into a ROWID column on page 30 and Inserting data into an identity column on page 30 for more information.
After inserting a new department row into your YDEPT table, you can use a SELECT statement to see what you have loaded into the table. The following SQL statement shows you all the new department rows that you have inserted:
SELECT * FROM YDEPT
  WHERE DEPTNO LIKE 'E%'
  ORDER BY DEPTNO;
Example: The following statement inserts information about a new employee into the YEMP table. Because YEMP has a foreign key, WORKDEPT, referencing the primary key, DEPTNO, in YDEPT, the value inserted for WORKDEPT (E31) must be a value of DEPTNO in YDEPT or null.
INSERT INTO YEMP
  VALUES ('000400', 'RUTHERFORD', 'B', 'HAYES', 'E31', '5678', '1983-01-01',
          'MANAGER', 16, 'M', '1943-07-10', 24000, 500, 1900);
Example: The following statement also inserts a row into the YEMP table. Because the unspecified columns allow nulls, DB2 inserts null values into the columns that you do not specify. Because YEMP has a foreign key, WORKDEPT, referencing the primary key, DEPTNO, in YDEPT, the value inserted for WORKDEPT (D11) must be a value of DEPTNO in YDEPT or null.
INSERT INTO YEMP
  (EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, JOB)
  VALUES ('000410', 'MILLARD', 'K', 'FILLMORE', 'D11', '4888', 'MANAGER');
The following statement copies data from DSN8810.EMP into the newly created table:
INSERT INTO TELE
  SELECT LASTNAME, FIRSTNME, PHONENO
    FROM DSN8810.EMP
    WHERE WORKDEPT = 'D21';
The two previous statements create and fill a table, TELE, that looks similar to the following table:
NAME2             NAME1          PHONE
===============   ============   =====
PULASKI           EVA            7831
JEFFERSON         JAMES          2094
MARINO            SALVATORE      3780
SMITH             DANIEL         0961
JOHNSON           SYBIL          8953
PEREZ             MARIA          9001
MONTEVERDE        ROBERT         3780
The CREATE TABLE statement example creates a table which, at first, is empty. The table has columns for last names, first names, and phone numbers, but does not have any rows. The INSERT statement fills the newly created table with data selected from the DSN8810.EMP table: the names and phone numbers of employees in department D21. Example: The following CREATE statement creates a table that contains an employee's department name as well as phone number. The fullselect within the INSERT statement fills the DLIST table with data from rows selected from two existing tables, DSN8810.DEPT and DSN8810.EMP.
CREATE TABLE DLIST
  (DEPT   CHAR(3)      NOT NULL,
   DNAME  VARCHAR(36),
   LNAME  VARCHAR(15)  NOT NULL,
   FNAME  VARCHAR(12)  NOT NULL,
   INIT   CHAR,
   PHONE  CHAR(4));
INSERT INTO DLIST SELECT DEPTNO, DEPTNAME, LASTNAME, FIRSTNME, MIDINIT, PHONENO FROM DSN8810.DEPT, DSN8810.EMP WHERE DEPTNO = WORKDEPT;
If ROWIDCOL2 is defined as GENERATED ALWAYS, you cannot insert the ROWID column data from T1 into T2, but you can insert the integer column data. To insert only the integer data, use one of the following methods: v Specify only the integer column in your INSERT statement, as in the following statement:
INSERT INTO T2 (INTCOL2) SELECT INTCOL1 FROM T1;
v Specify the OVERRIDING USER VALUE clause in your INSERT statement to tell DB2 to ignore any values that you supply for system-generated columns, as in the following statement:
INSERT INTO T2 (INTCOL2,ROWIDCOL2) OVERRIDING USER VALUE SELECT * FROM T1;
column. For information about using identity columns to uniquely identify rows, see Using identity columns as keys on page 270. Before you insert data into an identity column, you must know how the column is defined. Identity columns are defined with the GENERATED ALWAYS or GENERATED BY DEFAULT clause. GENERATED ALWAYS means that DB2 generates a value for the column, and you cannot insert data into that column. If the column is defined as GENERATED BY DEFAULT, you can insert a value, and DB2 provides a default value if you do not supply one. Example: Suppose that tables T1 and T2 have two columns: a character column and an integer column that is defined as an identity column. For the following statement to run successfully, IDENTCOL2 must be defined as GENERATED BY DEFAULT.
INSERT INTO T2 (CHARCOL2,IDENTCOL2) SELECT * FROM T1;
If IDENTCOL2 is defined as GENERATED ALWAYS, you cannot insert the identity column data from T1 into T2, but you can insert the character column data. To insert only the character data, use one of the following methods: v Specify only the character column in your INSERT statement, as in the following statement:
INSERT INTO T2 (CHARCOL2) SELECT CHARCOL1 FROM T1;
v Specify the OVERRIDING USER VALUE clause in your INSERT statement to tell DB2 to ignore any values that you supply for system-generated columns, as in the following statement:
INSERT INTO T2 (CHARCOL2,IDENTCOL2) OVERRIDING USER VALUE SELECT * FROM T1;
Assume that you need to insert a row for a new employee into the EMPSAMP table. To find out the values for the generated EMPNO, HIRETYPE, and HIREDATE columns, use the following SELECT FROM INSERT statement:
SELECT EMPNO, HIRETYPE, HIREDATE
  FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, DEPTNO, LEVEL)
                    VALUES('Mary Smith', 35000.00, 11, 'Associate'));
The SELECT statement returns the DB2-generated identity value for the EMPNO column, the default value 'New Hire' for the HIRETYPE column, and the value of the CURRENT DATE special register for the HIREDATE column. Recommendation: Use the SELECT FROM INSERT statement to insert a row into a parent table and retrieve the value of a primary key that was generated by DB2 (a ROWID or identity column). In another INSERT statement, specify this generated value as a value for a foreign key in a dependent table. For an example of this method, see Parent keys and foreign keys on page 272.
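A hedged sketch of this technique (the EMPBENEFITS table, its PLAN column, and the value 'STANDARD' are assumptions; EMPSAMP is the table from the preceding example):

EXEC SQL SELECT EMPNO INTO :hv_empno
  FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, DEPTNO, LEVEL)
                    VALUES('Mary Smith', 35000.00, 11, 'Associate'));

EXEC SQL INSERT INTO EMPBENEFITS (EMPNO, PLAN)
  VALUES (:hv_empno, 'STANDARD');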
The INSERT statement in the FROM clause of the following SELECT statement inserts a new employee into the EMPSAMP table:
SELECT NAME, SALARY
  FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, LEVEL)
                    VALUES('Mary Smith', 35000.00, 'Associate'));
The SELECT statement returns a salary of 40000.00 for Mary Smith instead of the initial salary of 35000.00 that was explicitly specified in the INSERT statement.
Example: You can retrieve all the values for a row that is inserted into a structure:
EXEC SQL SELECT * INTO :empstruct
  FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, DEPTNO, LEVEL)
                    VALUES('Mary Smith', 35000.00, 11, 'Associate'));
For this example, :empstruct is a host variable structure that is declared with variables for each of the columns in the EMPSAMP table.
The value 12 satisfies the search condition of the view definition, and the result table consists of the value for C1 in the inserted row. If you use a value that does not satisfy the search condition of the view definition, the insert operation fails, and DB2 returns an error.
Example: Using the FETCH FIRST clause: To see only the first five rows that are inserted into the employee photo and resume table, use the FETCH FIRST clause:
EXEC SQL DECLARE CS2 CURSOR FOR SELECT EMP_ROWID FROM FINAL TABLE (INSERT INTO DSN8810.EMP_PHOTO_RESUME (EMPNO) SELECT EMPNO FROM DSN8810.EMP) FETCH FIRST 5 ROWS ONLY;
Example: Using the INPUT SEQUENCE clause: To retrieve rows in the order in which they are inserted, use the INPUT SEQUENCE clause:
EXEC SQL DECLARE CS3 CURSOR FOR SELECT EMP_ROWID FROM FINAL TABLE (INSERT INTO DSN8810.EMP_PHOTO_RESUME (EMPNO) VALUES(:hva_empno) FOR 5 ROWS) ORDER BY INPUT SEQUENCE;
The INPUT SEQUENCE clause can be specified only if an INSERT statement is in the FROM clause of the SELECT statement. In this example, the rows are inserted from an array of employee numbers. For information about the multiple-row INSERT statement, see Inserting multiple rows of data from host variable arrays on page 87. Example: Inserting rows with multiple encoding CCSIDs: Suppose that you want to populate an ASCII table with values from an EBCDIC table and then see selected values from the ASCII table. You can use the following cursor to select the EBCDIC columns, populate the ASCII table, and then retrieve the ASCII values:
EXEC SQL DECLARE CS4 CURSOR FOR SELECT C1, C2 FROM FINAL TABLE (INSERT INTO ASCII_TABLE SELECT * FROM EBCDIC_TABLE);
The fetches that occur after the update processing return the rows that were generated during OPEN cursor processing. However, if you use a simple SELECT (with no INSERT statement in the FROM clause), the fetches might return the updated values, depending on the access path that DB2 uses. Effect of WITH HOLD: When you declare a cursor with the WITH HOLD option, and open the cursor, all of the rows are inserted into the target table. The WITH HOLD option has no effect on the SELECT FROM INSERT statement of the cursor definition. After your application performs a commit, you can continue to retrieve all of the inserted rows. For information about held cursors, see Held and non-held cursors on page 122. Example: Assume that the employee table in the DB2 sample application has five rows. Your application declares a WITH HOLD cursor, opens the cursor, fetches two rows, performs a commit, and then fetches the third row successfully:
EXEC SQL DECLARE CS2 CURSOR WITH HOLD FOR
  SELECT EMP_ROWID
    FROM FINAL TABLE (INSERT INTO DSN8810.EMP_PHOTO_RESUME (EMPNO)
                      SELECT EMPNO FROM DSN8810.EMP);
EXEC SQL OPEN CS2;                    /* Inserts 5 rows              */
EXEC SQL FETCH CS2 INTO :hv_rowid;    /* Retrieves ROWID for 1st row */
EXEC SQL FETCH CS2 INTO :hv_rowid;    /* Retrieves ROWID for 2nd row */
EXEC SQL COMMIT;                      /* Commits 5 rows              */
EXEC SQL FETCH CS2 INTO :hv_rowid;    /* Retrieves ROWID for 3rd row */
Effect of SAVEPOINT and ROLLBACK: When you set a savepoint prior to opening the cursor and then roll back to that savepoint, all of the insertions are undone. For information about savepoints and ROLLBACK processing, see Using savepoints to undo selected changes within a unit of work on page 439. Example: Assume that your application declares a cursor, sets a savepoint, opens the cursor, sets another savepoint, rolls back to the second savepoint, and then rolls back to the first savepoint:
EXEC SQL DECLARE CS3 CURSOR FOR
  SELECT EMP_ROWID
    FROM FINAL TABLE (INSERT INTO DSN8810.EMP_PHOTO_RESUME (EMPNO)
                      SELECT EMPNO FROM DSN8810.EMP);
EXEC SQL SAVEPOINT A ON ROLLBACK RETAIN CURSORS;  /* Sets 1st savepoint                     */
EXEC SQL OPEN CS3;
EXEC SQL SAVEPOINT B ON ROLLBACK RETAIN CURSORS;  /* Sets 2nd savepoint                     */
...
EXEC SQL ROLLBACK TO SAVEPOINT B;                 /* Rows still in DSN8810.EMP_PHOTO_RESUME */
...
EXEC SQL ROLLBACK TO SAVEPOINT A;                 /* All inserted rows are undone           */
EXEC SQL SELECT EMPNO INTO :hv_empno
  FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY)
                    SELECT FIRSTNME || MIDINIT || LASTNAME, SALARY + 10000.00
                      FROM DSN8810.EMP)
The addition of 10000.00 causes a decimal overflow to occur, and no rows are inserted into the EMPSAMP table. During OPEN cursor processing: If the insertion of any row fails during the OPEN cursor processing, all previously successful insertions are undone. The result table of the INSERT is empty. During FETCH processing: If the FETCH statement fails while retrieving rows from the result table of the insert operation, a negative SQLCODE is returned to the application, but the result table still contains the original number of rows that was determined during the OPEN cursor processing. At this point, you can undo all of the inserts. Example: Assume that the result table contains 100 rows and the 90th row that is being fetched from the cursor returns a negative SQLCODE:
EXEC SQL DECLARE CS1 CURSOR FOR
  SELECT EMPNO
    FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY)
                      SELECT FIRSTNME || MIDINIT || LASTNAME, SALARY + 10000.00
                        FROM DSN8810.EMP);
EXEC SQL OPEN CS1;                      /* Inserts 100 rows                     */
while (SQLCODE == 0)
  EXEC SQL FETCH CS1 INTO :hv_empno;
if (SQLCODE == -904)                    /* If SQLCODE is -904, undo all inserts */
  EXEC SQL ROLLBACK;
else                                    /* Else, commit inserts                 */
  EXEC SQL COMMIT;
You cannot update rows in a created temporary table, but you can update rows in a declared temporary table.
The SET clause names the columns that you want to update and provides the values you want to assign to those columns. You can replace a column value in the SET clause with any of the following items:
v A null value
  The column to which you assign the null value must not be defined as NOT NULL.
v An expression
  An expression can be any of the following items:
  - A column
  - A constant
  - A fullselect that returns a scalar
  - A host variable
  - A special register
In addition, you can replace one or more column values in the SET clause with the column values in a row that is returned by a fullselect.
Next, identify the rows to update:
v To update a single row, use a WHERE clause that locates one, and only one, row.
v To update several rows, use a WHERE clause that locates only the rows you want to update.
If you omit the WHERE clause, DB2 updates every row in the table or view with the values you supply.
If DB2 finds an error while executing your UPDATE statement (for example, an update value that is too large for the column), it stops updating and returns an error. No rows in the table change. Rows already changed, if any, are restored to their previous values. If the UPDATE statement is successful, SQLERRD(3) is set to the number of rows that are updated.
Example: The following statement supplies a missing middle initial and changes the job for employee 000200.
UPDATE YEMP
  SET MIDINIT = 'H', JOB = 'FIELDREP'
  WHERE EMPNO = '000200';
The following statement gives everyone in department D11 a raise of 400.00. The statement can update several rows.
UPDATE YEMP
  SET SALARY = SALARY + 400.00
  WHERE WORKDEPT = 'D11';
The following statement sets the salary and bonus for employee 000190 to the average salary and minimum bonus for all employees.
UPDATE YEMP
  SET (SALARY, BONUS) =
      (SELECT AVG(SALARY), MIN(BONUS)
         FROM EMP)
  WHERE EMPNO = '000190';
This DELETE statement deletes each row in the YEMP table that has an employee number 000060.
DELETE FROM YEMP
  WHERE EMPNO = '000060';
When this statement executes, DB2 deletes any row from the YEMP table that meets the search condition. If DB2 finds an error while executing your DELETE statement, it stops deleting data and returns error codes in the SQLCODE and SQLSTATE host variables or related fields in the SQLCA. The data in the table does not change. If the DELETE is successful, SQLERRD(3) in the SQLCA contains the number of deleted rows. This number includes only the number of deleted rows in the table that is specified in the DELETE statement. Rows that are deleted (in other tables) according to the CASCADE rule are not included in SQLERRD(3).
If the statement executes, the table continues to exist (that is, you can insert rows into it), but it is empty. All existing views and authorizations on the table remain intact when using DELETE. By comparison, using DROP TABLE drops all views and authorizations, which can invalidate plans and packages. For information about the DROP statement, see Dropping tables: DROP TABLE on page 25.
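For example, assuming the YDEPT work table from the earlier examples, the following two statements behave very differently:

DELETE FROM YDEPT;

DROP TABLE YDEPT;

The DELETE statement leaves the (now empty) YDEPT table, its views, and its authorizations in place; the DROP TABLE statement removes the table together with its synonyms, views, indexes, constraints, and granted authorities.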
Figure 2 illustrates how these two tables can be combined using the three outer join functions.
PARTS                            PRODUCTS
PART      PROD#                  PROD#   PRICE
WIRE      10                     505     3.70   (unmatched row)
MAGNETS   10                     10      45.75
BLADES    205                    205     18.90
PLASTIC   30                     30      7.55
OIL       160   (unmatched row)

LEFT OUTER JOIN                  FULL OUTER JOIN                  RIGHT OUTER JOIN
PART      PROD#   PRICE          PART      PROD#   PRICE          PART      PROD#   PRICE
WIRE      10      45.75          WIRE      10      45.75          WIRE      10      45.75
MAGNETS   10      45.75          MAGNETS   10      45.75          MAGNETS   10      45.75
BLADES    205     18.90          BLADES    205     18.90          BLADES    205     18.90
PLASTIC   30      7.55           PLASTIC   30      7.55           PLASTIC   30      7.55
OIL       160     (null)         OIL       160     (null)         (null)    505     3.70
                                 (null)    505     3.70

Figure 2. Three outer joins from the PARTS and PRODUCTS tables
The result table contains data joined from all of the tables, for rows that satisfy the search conditions.
The result columns of a join have names if the outermost SELECT list refers to base columns. But, if you use a function (such as COALESCE or VALUE) to build a column of the result, that column does not have a name unless you use the AS clause in the SELECT list.
Inner join
To request an inner join, execute a SELECT statement in which you specify the tables that you want to join in the FROM clause, and specify a WHERE clause or an ON clause to indicate the join condition. The join condition can be any simple or compound search condition that does not contain a subquery reference. See Chapter 4 of DB2 SQL Reference for the complete syntax of a join condition. In the simplest type of inner join, the join condition is column1=column2. Example: You can join the PARTS and PRODUCTS tables on the PROD# column to get a table of parts with their suppliers and the products that use the parts. To do this, you can use either one of the following SELECT statements:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT FROM PARTS, PRODUCTS WHERE PARTS.PROD# = PRODUCTS.PROD#; SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT FROM PARTS INNER JOIN PRODUCTS ON PARTS.PROD# = PRODUCTS.PROD#;
Notice three things about this example:
v A part in the parts table (OIL) has a product (#160) that is not in the products table. A product (SCREWDRIVER, #505) has no parts listed in the parts table. Neither OIL nor SCREWDRIVER appears in the result of the join. An outer join, however, includes rows where the values in the joined columns do not match.
v You can explicitly specify that this join is an inner join (not an outer join). Use INNER JOIN in the FROM clause instead of the comma, and use ON to specify the join condition (rather than WHERE) when you explicitly join tables in the FROM clause.
v If you do not specify a WHERE clause in the first form of the query, the result table contains all possible combinations of rows for the tables identified in the FROM clause. You can obtain the same result by specifying a join condition that is always true in the second form of the query, as in the following statement:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT FROM PARTS INNER JOIN PRODUCTS ON 1=1;
In either case, the number of rows in the result table is the product of the number of rows in each table.
You can specify more complicated join conditions to obtain different sets of results. For example, to eliminate the suppliers that begin with the letter A from the table of parts, suppliers, product numbers and products, write a query like the following query:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
  FROM PARTS INNER JOIN PRODUCTS
    ON PARTS.PROD# = PRODUCTS.PROD#
    AND SUPPLIER NOT LIKE 'A%';
The result of the query is all rows that do not have a supplier that begins with A. The result table looks like the following output:
PART      SUPPLIER       PROD#   PRODUCT
=======   ============   =====   ==========
MAGNETS   BATEMAN        10      GENERATOR
PLASTIC   PLASTIK_CORP   30      RELAY
Example of joining a table to itself by using an inner join: In the following example, A indicates the first instance of table DSN8810.PROJ and B indicates the second instance of this table. The join condition is such that the value in column PROJNO in table DSN8810.PROJ A must be equal to a value in column MAJPROJ in table DSN8810.PROJ B. The following SQL statement joins table DSN8810.PROJ to itself and returns the number and name of each major project followed by the number and name of the project that is part of it:
SELECT A.PROJNO, A.PROJNAME, B.PROJNO, B.PROJNAME FROM DSN8810.PROJ A, DSN8810.PROJ B WHERE A.PROJNO = B.MAJPROJ;
In this example, the comma in the FROM clause implicitly specifies an inner join, and it acts the same as if the INNER JOIN keywords had been used. When you use the comma for an inner join, you must specify the join condition on the WHERE clause. When you use the INNER JOIN keywords, you must specify the join condition on the ON clause.
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT FROM PARTS FULL OUTER JOIN PRODUCTS ON PARTS.PROD# = PRODUCTS.PROD#;
The result table from the query looks similar to the following output:
PART      SUPPLIER       PROD#   PRODUCT
=======   ============   =====   ===========
WIRE      ACWF           10      GENERATOR
MAGNETS   BATEMAN        10      GENERATOR
PLASTIC   PLASTIK_CORP   30      RELAY
BLADES    ACE_STEEL      205     SAW
OIL       WESTERN_CHEM   160     -----------
-------   ------------   ---     SCREWDRIVER
Example of Using COALESCE or VALUE: COALESCE is the keyword specified by the SQL standard as a synonym for the VALUE function. This function, by either name, can be particularly useful in full outer join operations, because it returns the first non-null value from the pair of join columns. The product number in the result of the example for Full outer join on page 41 is null for SCREWDRIVER, even though the PRODUCTS table contains a product number for SCREWDRIVER. If you select PRODUCTS.PROD# instead, PROD# is null for OIL. If you select both PRODUCTS.PROD# and PARTS.PROD#, the result contains two columns, both of which contain some null values. You can merge data from both columns into a single column, eliminating the null values, by using the COALESCE function. With the same PARTS and PRODUCTS tables, the following example merges the non-null data from the PROD# columns:
SELECT PART, SUPPLIER, COALESCE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM, PRODUCT FROM PARTS FULL OUTER JOIN PRODUCTS ON PARTS.PROD# = PRODUCTS.PROD#;
The AS clause (AS PRODNUM) provides a name for the result of the COALESCE function.
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT, PRICE FROM PARTS LEFT OUTER JOIN PRODUCTS ON PARTS.PROD#=PRODUCTS.PROD# AND PRODUCTS.PRICE>10.00;
A row from the PRODUCTS table is in the result table only if its product number matches the product number of a row in the PARTS table and the price is greater than $10.00 for that row. Rows in which the PRICE value does not exceed $10.00 are included in the result of the join, but the PRICE value is set to null. In this result table, the row for PROD# 30 has null values on the right two columns because the price of PROD# 30 is less than $10.00. PROD# 160 has null values on the right two columns because PROD# 160 does not match another product number.
A row from the PARTS table is in the result table only if its product number matches the product number of a row in the PRODUCTS table and the price is greater than 10.00 for that row. Because the PRODUCTS table can have rows with nonmatching product numbers in the result table, and the PRICE column is in the PRODUCTS table, rows in which PRICE is less than or equal to 10.00 are included in the result. The PARTS columns contain null values for these rows in the result table.
DB2 performs the join operation first. The result of the join operation includes rows from one table that do not have corresponding rows from the other table. However, the WHERE clause then excludes the rows from both tables that have null values for the PROD# column. The following statement is a correct SELECT statement to produce the list:
SELECT PART, SUPPLIER, VALUE(X.PROD#, Y.PROD#) AS PRODNUM, PRODUCT FROM (SELECT PART, SUPPLIER, PROD# FROM PARTS WHERE PROD# <> 10) X FULL OUTER JOIN (SELECT PROD#, PRODUCT FROM PRODUCTS WHERE PROD# <> 10) Y ON X.PROD# = Y.PROD#;
For this statement, DB2 applies the WHERE clause to each table separately. DB2 then performs the full outer join operation, which includes rows in one table that do not have a corresponding row in the other table. The final result includes rows with the null value for the PROD# column and looks similar to the following output:
PART      SUPPLIER       PRODNUM   PRODUCT
=======   ============   =======   ===========
OIL       WESTERN_CHEM   160       -----------
BLADES    ACE_STEEL      205       SAW
PLASTIC   PLASTIK_CORP   30        RELAY
-------   ------------   505       SCREWDRIVER
DB2 determines the intermediate and final results of the previous query by performing the following logical steps:
1. Join the employee and project tables on the employee number, dropping the rows with no matching employee number in the project table.
2. Join the intermediate result table with the department table on matching department numbers.
3. Process the select list in the final result table, leaving only four columns.
Using more than one join type: You can use more than one join type in the FROM clause. Suppose that you want a result table that shows employees whose last name begins with S or a letter after S, their department names, and the projects that they are responsible for, if any. You can use the following SELECT statement:
SELECT EMPNO, LASTNAME, DEPTNAME, PROJNO
  FROM DSN8810.EMP INNER JOIN DSN8810.DEPT
    ON WORKDEPT = DSN8810.DEPT.DEPTNO
  LEFT OUTER JOIN DSN8810.PROJ
    ON EMPNO = RESPEMP
  WHERE LASTNAME > 'S';
EMPNO    LASTNAME    DEPTNAME                 PROJNO
======   =========   ======================   ======
000180   SCOUTTEN    MANUFACTURING SYSTEMS    ------
000190   WALKER      MANUFACTURING SYSTEMS    ------
000250   SMITH       ADMINISTRATION SYSTEMS   AD3112
000280   SCHNEIDER   OPERATIONS               ------
000300   SMITH       OPERATIONS               ------
000310   SETRIGHT    OPERATIONS               ------
200170   YAMAMOTO    MANUFACTURING SYSTEMS    ------
200280   SCHWARTZ    OPERATIONS               ------
200310   SPRINGER    OPERATIONS               ------
200330   WONG        SOFTWARE SUPPORT         ------
DB2 determines the intermediate and final results of the previous query by performing the following logical steps:
1. Join the employee and department tables on matching department numbers, dropping the rows where the last name begins with a letter before S.
2. Join the intermediate result table with the project table on the employee number, keeping the rows with no matching employee number in the project table.
3. Process the select list in the final result table, leaving only four columns.
Example of using correlated references: In the following example, the correlation name that is used for the nested table expression is CHEAP_PARTS. The correlated references are CHEAP_PARTS.PROD# and CHEAP_PARTS.PRODUCT.
SELECT CHEAP_PARTS.PROD#, CHEAP_PARTS.PRODUCT FROM (SELECT PROD#, PRODUCT FROM PRODUCTS WHERE PRICE < 10) AS CHEAP_PARTS;
The correlated references are valid because they do not occur in the table expression where CHEAP_PARTS is defined. The correlated references are from a table specification at a higher level in the hierarchy of subqueries. Example of using a nested table expression as the left operand of a join: The following query contains a fullselect as the left operand of a left outer join with the PRODUCTS table. The correlation name is PARTX.
SELECT PART, SUPPLIER, PRODNUM, PRODUCT FROM (SELECT PART, PROD# AS PRODNUM, SUPPLIER FROM PARTS WHERE PROD# < 200) AS PARTX LEFT OUTER JOIN PRODUCTS ON PRODNUM = PROD#;
Because PROD# is a character field, DB2 does a character comparison to determine the set of rows in the result. Therefore, because 30 is greater than 200, the row in which PROD# is equal to 30 does not appear in the result. Example: Using a table function as an operand of a join: You can join the results of a user-defined table function with a table, just as you can join two tables. For example, suppose CVTPRICE is a table function that converts the prices in the PRODUCTS table to the currency you specify and returns the PRODUCTS table with the prices in those units. You can obtain a table of parts, suppliers, and product prices with the prices in your choice of currency by executing a query similar to the following query:
SELECT PART, SUPPLIER, PARTS.PROD#, Z.PRODUCT, Z.PRICE FROM PARTS, TABLE(CVTPRICE(:CURRENCY)) AS Z WHERE PARTS.PROD# = Z.PROD#;
A table function or a table expression that contains correlated references to other tables in the same FROM clause cannot participate in a full outer join or a right outer join. The following examples illustrate valid uses of correlated references in table specifications. Example: In this example, the correlated reference T.C2 is valid because the table specification, to which it refers, T, is to its left.
SELECT T.C1, Z.C5 FROM T, TABLE(TF3(T.C2)) AS Z WHERE T.C3 = Z.C4;
If you specify the join in the opposite order, with T following TABLE(TF3(T.C2)), then T.C2 is invalid. Example: In this example, the correlated reference D.DEPTNO is valid because the nested table expression within which it appears is preceded by TABLE and the table specification D appears to the left of the nested table expression in the FROM clause.
SELECT D.DEPTNO, D.DEPTNAME, EMPINFO.AVGSAL, EMPINFO.EMPCOUNT FROM DEPT D, TABLE(SELECT AVG(E.SALARY) AS AVGSAL, COUNT(*) AS EMPCOUNT FROM EMP E WHERE E.WORKDEPT=D.DEPTNO) AS EMPINFO;
Conceptual overview
Suppose that you want a list of the employee numbers, names, and commissions of all employees working on a particular project, whose project number is MA2111. The first part of the SELECT statement is easy to write:
SELECT EMPNO, LASTNAME, COMM FROM DSN8810.EMP WHERE EMPNO . . .
But you cannot proceed because the DSN8810.EMP table does not include project number data. You do not know which employees are working on project MA2111 without issuing another SELECT statement against the DSN8810.EMPPROJACT table. You can use a subquery to solve this problem. A subquery is a subselect or a fullselect in a WHERE clause. The SELECT statement surrounding the subquery is called the outer SELECT.
SELECT EMPNO, LASTNAME, COMM
  FROM DSN8810.EMP
  WHERE EMPNO IN
    (SELECT EMPNO
       FROM DSN8810.EMPPROJACT
       WHERE PROJNO = 'MA2111');
To better understand the results of this SQL statement, imagine that DB2 goes through the following process: 1. DB2 evaluates the subquery to obtain a list of EMPNO values:
(SELECT EMPNO
   FROM DSN8810.EMPPROJACT
   WHERE PROJNO = 'MA2111');
The result is in an interim result table, similar to the one shown in the following output:
EMPNO
======
000200
000200
000220
2. The interim result table then serves as a list in the search condition of the outer SELECT. Effectively, DB2 executes this statement:
SELECT EMPNO, LASTNAME, COMM
  FROM DSN8810.EMP
  WHERE EMPNO IN ('000200', '000220');
The predicate can be part of a WHERE or HAVING clause. A WHERE or HAVING clause can include predicates that contain subqueries. A predicate containing a subquery, like any other search predicate, can be enclosed in parentheses, can be preceded by the keyword NOT, and can be linked to other predicates through the keywords AND and OR. For example, the WHERE clause of a query can look something like the following clause:
WHERE X IN (subquery1) AND (Y > SOME (subquery2) OR Z IS NULL)
Subqueries can also appear in the predicates of other subqueries. Such subqueries are nested subqueries at some level of nesting. For example, a subquery within a subquery within an outer SELECT has a nesting level of 2. DB2 allows nesting down to a level of 15, but few queries require a nesting level greater than 1. The relationship of a subquery to its outer SELECT is the same as the relationship of a nested subquery to a subquery, and the same rules apply, except where otherwise noted.
SELECT EMPNO, LASTNAME FROM DSN8810.EMP WHERE (SALARY, BONUS) IN (SELECT AVG(SALARY), AVG(BONUS) FROM DSN8810.EMP);
Except for a subquery of a basic predicate, the result table can contain more than one row. For more information, see Basic predicate .
Basic predicate
You can use a subquery immediately after any of the comparison operators. If you do, the subquery can return at most one value. DB2 compares that value with the value to the left of the comparison operator. Example: The following SQL statement returns the employee numbers, names, and salaries for employees whose education level is higher than the average company-wide education level.
SELECT EMPNO, LASTNAME, SALARY FROM DSN8810.EMP WHERE EDLEVEL > (SELECT AVG(EDLEVEL) FROM DSN8810.EMP);
If a subquery that returns one or more null values gives you unexpected results, see the description of quantified predicates in Chapter 2 of DB2 SQL Reference.
To satisfy this WHERE clause, the column value must be greater than all of the values that the subquery returns. A subquery that returns an empty result table satisfies the predicate. Now suppose that you use the <> operator with ALL in a WHERE clause like this:
WHERE (column1, column2, ... columnn) <> ALL (subquery)
To satisfy this WHERE clause, each column value must be unequal to all of the values in the corresponding column of the result table that the subquery returns. A subquery that returns an empty result table satisfies the predicate.
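For example, a sketch of a > ALL comparison (the department number 'E21' is an assumption) might look like the following:

SELECT EMPNO, LASTNAME, SALARY
  FROM DSN8810.EMP
  WHERE SALARY > ALL
        (SELECT SALARY
           FROM DSN8810.EMP
           WHERE WORKDEPT = 'E21');

This query returns only employees whose salary exceeds every salary in department E21; if no employee works in that department, the subquery returns an empty result table and every employee satisfies the predicate.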
To satisfy this WHERE clause, the value in the expression must be greater than at least one of the values (that is, greater than the lowest value) that the subquery returns. A subquery that returns an empty result table does not satisfy the predicate. Now suppose that you use the = operator with SOME in a WHERE clause like this:
WHERE (column1, column2, ... columnn) = SOME (subquery)
To satisfy this WHERE clause, each column value must be equal to at least one of the values in the corresponding column of the result table that the subquery returns. A subquery that returns an empty result table does not satisfy the predicate.
IN keyword
You can use IN to say that the value or values on the left side of the IN operator must be among the values that are returned by the subquery. Using IN is equivalent to using = ANY or = SOME. Example: The following query returns the names of department managers:
SELECT EMPNO,LASTNAME FROM DSN8810.EMP WHERE EMPNO IN (SELECT DISTINCT MGRNO FROM DSN8810.DEPT);
EXISTS keyword
In the subqueries presented thus far, DB2 evaluates the subquery and uses the result as part of the WHERE clause of the outer SELECT. In contrast, when you use the keyword EXISTS, DB2 simply checks whether the subquery returns one or more rows. Returning one or more rows satisfies the condition; returning no rows does not satisfy the condition. Example: The search condition in the following query is satisfied if any project that is represented in the project table has an estimated start date that is later than 1 January 2005:
SELECT EMPNO, LASTNAME
  FROM DSN8810.EMP
  WHERE EXISTS
    (SELECT *
       FROM DSN8810.PROJ
       WHERE PRSTDATE > '2005-01-01');
The result of the subquery is always the same for every row that is examined for the outer SELECT. Therefore, either every row appears in the result of the outer SELECT or none appears. A correlated subquery is more powerful than the uncorrelated subquery that is used in this example because the result of a correlated subquery is evaluated for each row of the outer SELECT. As shown in the example, you do not need to specify column names in the subquery of an EXISTS clause. Instead, you can code SELECT *. You can also use the EXISTS keyword with the NOT keyword in order to select rows when the data or condition you specify does not exist; that is, you can code the following clause:
WHERE NOT EXISTS (SELECT ...);
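For example, reversing the preceding query with NOT EXISTS, the following sketch returns employee rows only if no project in the project table has an estimated start date later than 1 January 2005:

SELECT EMPNO, LASTNAME
  FROM DSN8810.EMP
  WHERE NOT EXISTS
    (SELECT *
       FROM DSN8810.PROJ
       WHERE PRSTDATE > '2005-01-01');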
education level to the average of the entire company, which requires looking at the entire table. A correlated subquery evaluates only the department that corresponds to the particular employee. In the subquery, you tell DB2 to compute the average education level for the department number in the current row. A query that does this follows:
SELECT EMPNO, LASTNAME, WORKDEPT, EDLEVEL FROM DSN8810.EMP X WHERE EDLEVEL > (SELECT AVG(EDLEVEL) FROM DSN8810.EMP WHERE WORKDEPT = X.WORKDEPT);
A correlated subquery looks like an uncorrelated one, except for the presence of one or more correlated references. In the example, the single correlated reference is the occurrence of X.WORKDEPT in the WHERE clause of the subselect. In this clause, the qualifier X is the correlation name that is defined in the FROM clause of the outer SELECT statement. X designates rows of the first instance of DSN8810.EMP. At any time during the execution of the query, X designates the row of DSN8810.EMP to which the WHERE clause is being applied. Consider what happens when the subquery executes for a given row of DSN8810.EMP. Before it executes, X.WORKDEPT receives the value of the WORKDEPT column for that row. Suppose, for example, that the row is for Christine Haas. Her work department is A00, which is the value of WORKDEPT for that row. Therefore, the following is the subquery that is executed for that row:
(SELECT AVG(EDLEVEL)
   FROM DSN8810.EMP
   WHERE WORKDEPT = 'A00');
The subquery produces the average education level of Christine's department. The outer SELECT then compares this average to Christine's own education level. For some other row for which WORKDEPT has a different value, that value appears in the subquery in place of A00. For example, in the row for Michael L Thompson, this value is B01, and the subquery for his row delivers the average education level for department B01. The result table produced by the query is similar to the following output:
EMPNO   LASTNAME    WORKDEPT  EDLEVEL
======  =========   ========  =======
000010  HAAS        A00       18
000030  KWAN        C01       20
000070  PULASKI     D21       16
000090  HENDERSON   E11       16
When you use a correlated reference in a subquery, the correlation name can be defined in the outer SELECT or in any of the subqueries that contain the reference. Suppose, for example, that a query contains subqueries A, B, and C, and that A contains B and B contains C. The subquery C can use a correlation reference that is defined in B, A, or the outer SELECT. You can define a correlation name for each table name in a FROM clause. Specify the correlation name after its table name. Leave one or more blanks between a table name and its correlation name. You can include the word AS between the table name and the correlation name to increase the readability of the SQL statement. The following example demonstrates the use of a correlated reference in the search condition of a subquery:
SELECT EMPNO, LASTNAME, WORKDEPT, EDLEVEL
  FROM DSN8810.EMP AS X
  WHERE EDLEVEL >
        (SELECT AVG(EDLEVEL)
           FROM DSN8810.EMP
           WHERE WORKDEPT = X.WORKDEPT);
The following example demonstrates the use of a correlated reference in the select list of a subquery:
UPDATE BP1TBL T1
  SET (KEY1, CHAR1, VCHAR1) =
      (SELECT VALUE(T2.KEY1,T1.KEY1), VALUE(T2.CHAR1,T1.CHAR1),
              VALUE(T2.VCHAR1,T1.VCHAR1)
         FROM BP2TBL T2
         WHERE (T2.KEY1 = T1.KEY1))
  WHERE KEY1 IN
        (SELECT KEY1
           FROM BP2TBL T3
           WHERE KEY2 > 0);
As DB2 examines each row in the DSN8810.PROJ table, it determines the maximum activity end date (the ACENDATE column) for all activities of the project (from the DSN8810.PROJACT table). If the end date of each activity associated with the project is before September 2004, the current row in the DSN8810.PROJ table qualifies and DB2 updates it.
To process this statement, DB2 determines for each project (represented by a row in the DSN8810.PROJ table) whether or not the combined staffing for that project is less than 0.5. If it is, DB2 deletes that row from the DSN8810.PROJ table. To continue this example, suppose DB2 deletes a row in the DSN8810.PROJ table. You must also delete rows related to the deleted project in the DSN8810.PROJACT table. To do this, use:
DELETE FROM DSN8810.PROJACT X
  WHERE NOT EXISTS
        (SELECT *
           FROM DSN8810.PROJ
           WHERE PROJNO = X.PROJNO);
DB2 determines, for each row in the DSN8810.PROJACT table, whether a row with the same project number exists in the DSN8810.PROJ table. If not, DB2 deletes the row in DSN8810.PROJACT.
This example uses a copy of the employee table for the subquery. The following statement, without a correlated subquery, yields equivalent results:
DELETE FROM YEMP
  WHERE (SALARY, WORKDEPT) IN
        (SELECT MAX(SALARY), WORKDEPT
           FROM YEMP
           GROUP BY WORKDEPT);
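For comparison, a correlated form of this deletion (a sketch only; it assumes, as the text above states, that YEMP is a copy of the employee table, and it deletes each employee who earns the maximum salary in his or her department) might be written as follows:
DELETE FROM YEMP X
  WHERE SALARY =
        (SELECT MAX(SALARY)
           FROM YEMP Y
           WHERE Y.WORKDEPT = X.WORKDEPT);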
the deletion occurs. However, if the result of the subquery is materialized before the deletion, the delete rule can also be CASCADE or SET NULL. Example: Without referential constraints, the following statement deletes departments from the department table whose managers are not listed correctly in the employee table:
DELETE FROM DSN8810.DEPT THIS
  WHERE NOT DEPTNO =
        (SELECT WORKDEPT
           FROM DSN8810.EMP
           WHERE EMPNO = THIS.MGRNO);
With the referential constraints that are defined for the sample tables, this statement causes an error because the result table for the subquery is not materialized before the deletion occurs. The deletion involves the table that is referred to in the subquery (DSN8810.EMP is a dependent table of DSN8810.DEPT) and the last delete rule in the path to EMP is SET NULL, not RESTRICT or NO ACTION. If the statement could execute, its results would depend on the order in which DB2 accesses the rows. Therefore, DB2 prohibits the deletion. See Materialization on page 837 for more information about materialization.
You can execute most of the interactive SQL examples shown in Part 1, Using SQL queries, on page 1 by following the instructions provided in this chapter and using the sample tables shown in Appendix A, DB2 sample tables, on page 993. The instructions assume that ISPF is available to you.

You can use the TSO PROFILE command to control whether message IDs are displayed. To view message IDs, type TSO PROFILE MSGID on the ISPF command line. To suppress message IDs, type TSO PROFILE NOMSGID.
DSNESP01                        SPUFI                          SSID: DSN
===>
Enter the input data set name:       (Can be sequential or partitioned)
 1  DATA SET NAME..... ===> EXAMPLES(XMP1)
 2  VOLUME SERIAL..... ===>           (Enter if not cataloged)
 3  DATA SET PASSWORD. ===>           (Enter if password protected)

Enter the output data set name:      (Must be a sequential data set)
 4  DATA SET NAME..... ===> RESULT

Specify processing options:
 5  CHANGE DEFAULTS... ===> Y         (Display SPUFI defaults panel?)
 6  EDIT INPUT........ ===> Y         (Enter SQL statements?)
 7  EXECUTE........... ===> Y         (Execute SQL statements?)
 8  AUTOCOMMIT........ ===> Y         (Commit after successful run?)
 9  BROWSE OUTPUT..... ===> Y         (Browse output data set?)

For remote SQL processing:
10  CONNECT LOCATION  ===>

PRESS: ENTER to process   END to exit   HELP for more information
Fill out the SPUFI panel. You can access descriptions for each of the fields in the panel in the DB2I help system. See DB2I help on page 518 for more information about the DB2I help system. The following descriptions explain the information that you need to provide on the SPUFI panel.

1,2,3 INPUT DATA SET NAME
   Identify the input data set in fields 1 through 3. This data set contains one or more SQL statements that you want to execute. Allocate this data set before you use SPUFI, if one does not already exist. Consider the following rules:
   v The name of the data set must conform to standard TSO naming conventions.
   v The data set can be empty before you begin the session. You can then add the SQL statements by editing the data set from SPUFI.
   v The data set can be either sequential or partitioned, but it must have the following DCB characteristics:
     - A record format (RECFM) of either F or FB.
     - A logical record length (LRECL) of either 79 or 80. Use 80 for any data set that the EXPORT command of DB2 QMF did not create.
   v Data in the data set can begin in column 1. It can extend to column 71 if the logical record length is 79, and to column 72 if the logical record length is 80. SPUFI assumes that the last 8 bytes of each record are for sequence numbers.
   If you use this panel a second time, the name of the data set you previously used displays in the field DATA SET NAME. To create a new member of an existing partitioned data set, change only the member name.

4 OUTPUT DATA SET NAME
   Enter the name of a data set to receive the output of the SQL statement. You do not need to allocate the data set before you do this. If the data set exists, the new output replaces its content. If the data set does not exist, DB2 allocates a data set on the device type specified on the CURRENT SPUFI DEFAULTS panel and then catalogs the new data set. The device must be a direct-access storage device, and you must be authorized to allocate space on that device.
Attributes required for the output data set are:
v Organization: sequential
v Record format: F, FB, FBA, V, VB, or VBA
v Record length: 80 to 32768 bytes, not less than the input data set

Figure 3 on page 60 shows the simplest choice, entering RESULT. SPUFI allocates a data set named userid.RESULT and sends all output to that data set. If a data set named userid.RESULT already exists, SPUFI sends DB2 output to it, replacing all existing data.

5 CHANGE DEFAULTS
   Allows you to change control values and characteristics of the output data set and format of your SPUFI session. If you specify Y(YES) you can look at the SPUFI defaults panel. See Changing SPUFI defaults on page 62 for more information about the values you can specify and how they affect SPUFI processing and output characteristics. You do not need to change the SPUFI defaults for this example.

6 EDIT INPUT
   To edit the input data set, leave Y(YES) on line 6. You can use the ISPF editor to create a new member of the input data set and enter SQL statements in it. (To process a data set that already contains a set of SQL statements you want to execute immediately, enter N (NO). Specifying N bypasses the step described in Entering SQL statements on page 66.)

7 EXECUTE
   To execute SQL statements contained in the input data set, leave Y(YES) on line 7. SPUFI handles the SQL statements that can be dynamically prepared. For a list of those SQL statements, see Appendix H, Characteristics of SQL statements in DB2 UDB for z/OS, on page 1113.

8 AUTOCOMMIT
   To make changes to the DB2 data permanent, leave Y(YES) on line 8. Specifying Y makes SPUFI issue COMMIT if all statements execute successfully. If all statements do not execute successfully, SPUFI issues a ROLLBACK statement, which deletes changes already made to the file (back to the last commit point). For information about the COMMIT and ROLLBACK functions, see Unit of work in TSO batch and online on page 432 or Chapter 5 of DB2 SQL Reference.
   If you specify N, DB2 displays the SPUFI COMMIT OR ROLLBACK panel after it executes the SQL in your input data set. That panel prompts you to COMMIT, ROLLBACK, or DEFER any updates made by the SQL. If you enter DEFER, you neither commit nor roll back your changes.

9 BROWSE OUTPUT
   To look at the results of your query, leave Y(YES) on line 9. SPUFI saves the results in the output data set. You can look at them at any time, until you delete or write over the data set. For more information, see Format of SELECT statement results on page 70.

10 CONNECT LOCATION
   Specify the name of the database server, if applicable, to which you want to submit SQL statements. SPUFI then issues a type 2 CONNECT statement to this server.
SPUFI is a locally bound package. SQL statements in the input data set can process only if the CONNECT statement is successful. If the connect request fails, the output data set contains the resulting SQL return codes and error messages.

Important: Ensure that the TSO terminal CCSID matches the DB2 CCSID. If these CCSIDs do not match, data corruption can occur. If SPUFI issues the warning message DSNE345I, terminate your SPUFI session and notify the system administrator.
If you want to change the current default values, specify new values in the fields of the panel. All fields must contain a value. The DB2I help system contains detailed descriptions of each of the fields of the CURRENT SPUFI DEFAULTS panel. The following descriptions explain the information you need to provide on the CURRENT SPUFI DEFAULTS panel.

1 SQL TERMINATOR
   Allows you to specify the character that you use to end each SQL statement. You can specify any character except the characters listed in Table 3. A semicolon (;) is the default SQL terminator.
Table 3. Invalid special characters for the SQL terminator

Name                Character   Hexadecimal representation
blank                           X'40'
comma               ,           X'6B'
double quote        "           X'7F'
left parenthesis    (           X'4D'
right parenthesis   )           X'5D'
single quote        '           X'7D'
underscore          _           X'6D'
Use a character other than a semicolon if you plan to execute a statement that contains embedded semicolons. For example, suppose you choose the character # as the statement terminator. Then a CREATE TRIGGER statement with embedded semicolons looks like this:
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END#
A CREATE PROCEDURE statement with embedded semicolons looks like the following statement:
CREATE PROCEDURE PROC1 (IN PARM1 INT, OUT SCODE INT)
  LANGUAGE SQL
  BEGIN
    DECLARE SQLCODE INT;
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
      SET SCODE = SQLCODE;
    UPDATE TBL1 SET COL1 = PARM1;
  END #
Be careful to choose a character for the SQL terminator that is not used within the statement. You can also set or change the SQL terminator within a SPUFI input data set by using the --#SET TERMINATOR statement. See Entering SQL statements on page 66 for details.

2 ISOLATION LEVEL
   Allows you to specify the isolation level for your SQL statements. See The ISOLATION option on page 412 for more information.

3 MAX SELECT LINES
   The maximum number of output lines that a SELECT statement can return. To limit the number of rows retrieved, enter another maximum number greater than 1.

4 ALLOW SQL WARNINGS
   Enter NO (the default) or YES to indicate whether SPUFI will continue to process an SQL statement after receiving SQL warnings:
   NO    If a warning occurs when SPUFI executes an OPEN or FETCH for a SELECT statement, SPUFI stops processing the SELECT statement. If SQLCODE +802 occurs when SPUFI executes a FETCH for a SELECT statement, SPUFI continues to process the SELECT statement.
   YES   If a warning occurs when SPUFI executes an OPEN or FETCH for a SELECT statement, SPUFI continues to process the SELECT statement.
5 CHANGE PLAN NAMES
   If you enter YES in this field, you can change plan names on a subsequent SPUFI defaults panel, DSNESP07. Enter YES in this field only if you are certain that you want to change the plan names that are used by SPUFI. Consult with your DB2 system administrator if you are uncertain whether you want to change the plan names. Using an invalid or incorrect plan name might cause SPUFI to experience operational errors or it might cause data contamination.

6 RECORD LENGTH
   The record length must be at least 80 bytes. The maximum record length depends on the device type you use. The default value allows a 4092-byte record. Each record can hold a single line of output. If a line is longer than a record, the output is truncated, and SPUFI discards fields that extend beyond the record length.

7 BLOCKSIZE
   Follow the normal rules for selecting the block size. For record format F, the block size is equal to the record length. For FB and FBA, choose a block size that is an even multiple of LRECL. For VB and VBA only, the block size must be 4 bytes larger than the block size for FB or FBA.

8 RECORD FORMAT
   Specify F, FB, FBA, V, VB, or VBA. FBA and VBA formats insert a printer control character after the number of lines specified in the LINES/PAGE OF LISTING field on the DB2I Defaults panel. The record format default is VB (variable-length blocked).

9 DEVICE TYPE
   Allows you to specify a standard z/OS name for direct-access storage device types. The default is SYSDA. SYSDA specifies that z/OS is to select an appropriate direct access storage device.

10 MAX NUMERIC FIELD
   The maximum width of a numeric value column in your output. Choose a value greater than 0. The default is 33. For more information, see Format of SELECT statement results on page 70.

11 MAX CHAR FIELD
   The maximum width of a character value column in your output. DATETIME and GRAPHIC data strings are externally represented as characters, and SPUFI includes their defaults with the default values for character fields. Choose a value greater than 0. The IBM-supplied default is 80. For more information, see Format of SELECT statement results on page 70.

12 COLUMN HEADING
   You can specify NAMES, LABELS, ANY, or BOTH for column headings.
   v NAMES (default) uses column names only.
   v LABELS uses column labels. Leave the title blank if no label exists.
   v ANY uses existing column labels or column names.
   v BOTH creates two title lines, one with names and one with labels.
   Column names are the column identifiers that you can use in SQL statements. If an SQL statement has an AS clause for a column, SPUFI
displays the contents of the AS clause in the heading, rather than the column name. You define column labels with LABEL statements.

13 FOR BIT DATA
   Specify how SPUFI is to display the data from FOR BIT DATA columns.
   ASIS   SPUFI displays the data from FOR BIT DATA columns as it is stored. The default is ASIS. If you specify ASIS when a graphic character set is in effect, SPUFI replaces all occurrences of the shift-out (X'0E') and shift-in (X'0F') characters in the output of the FOR BIT DATA columns with a substitution character of '.' (X'4B'). To avoid this substitution, specify HEX.
   HEX    SPUFI displays the data from FOR BIT DATA columns in hexadecimal format.
When you have entered your SPUFI options, press the ENTER key to continue. SPUFI then processes the next processing option for which you specified YES. If all other processing options are NO, SPUFI displays the SPUFI panel. If you press the END key, you return to the SPUFI panel, but you lose all the changes you made on the SPUFI Defaults panel. If you press ENTER, SPUFI saves your changes.
Specify values for the following options on the CURRENT SPUFI DEFAULTS PANEL 2 panel. All fields must contain a value. Using an invalid or incorrect plan name might cause SPUFI to experience operational errors or it might cause data contamination.
1 CS ISOLATION PLAN
   Specify the name of the plan that SPUFI uses when you specify an isolation level of cursor stability (CS). By default, this name is DSNESPCS.

2 RR ISOLATION PLAN
   Specify the name of the plan that SPUFI uses when you specify an isolation level of repeatable read (RR). By default, this name is DSNESPRR.

3 BLANK CCSID ALERT
   Indicate whether to receive message DSNE345I when the terminal CCSID setting is blank. A blank terminal CCSID setting occurs when the terminal code page and character set cannot be queried or if they are not supported by ISPF.
   Recommendation: To avoid possible data contamination, use the default setting of YES, unless you are specifically directed by your DB2 system administrator to use NO.
EDIT --------userid.EXAMPLES(XMP1) --------------------- COLUMNS 001 072
COMMAND INPUT ===> SAVE                                  SCROLL ===> PAGE
********************************** TOP OF DATA ***********************
000100  SELECT LASTNAME, FIRSTNME, PHONENO
000200    FROM DSN8810.EMP
000300    WHERE WORKDEPT= 'D11'
000400    ORDER BY LASTNAME;
********************************* BOTTOM OF DATA *********************
Pressing the END PF key saves the data set. You can save the data set and continue editing it by entering the SAVE command. Saving the data set after every 10 minutes or so of editing is recommended. Figure 6 shows what the panel looks like if you enter the sample SQL statement, followed by a SAVE command. You can bypass the editing step by resetting the EDIT INPUT processing option:
EDIT INPUT ... ===> NO
Entering comments
You can put comments about SQL statements either on separate lines or on the same line. In either case, use two hyphens (--) to begin a comment. Specify any text other than #SET TERMINATOR or #SET TOLWARN after the comment. DB2 ignores everything to the right of the two hyphens.
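For example, an input data set that changes the SQL terminator and includes a comment might contain lines like the following sketch:
--#SET TERMINATOR #
-- Give department D11 a 10% raise (DB2 ignores this comment text)
UPDATE DSN8810.EMP
  SET SALARY = 1.10 * SALARY
  WHERE WORKDEPT = 'D11'#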
Be careful to choose a character for the SQL terminator that is not used within the statement.
Your SQL statement might take a long time to execute, depending on how large a table DB2 must search, or on how many rows DB2 must process. To interrupt DB2's processing, press the PA1 key and respond to the prompting message that asks you whether you really want to stop processing. This action cancels the executing SQL statement and returns you to the ISPF-PDF menu.

What happens to the output data set depends on how much of the input data set DB2 was able to process before you interrupted it. DB2 might not have opened the output data set yet, or the output data set might contain all or part of the results that were produced so far.
DSNESP04                                                        SSID: DSN
===>
The following SQL statement has encountered an SQLCODE of -905 or -495:

   Statement text

Your SQL statement has exceeded the resource utilization threshold set
by your site administrator.
You must ROLLBACK or COMMIT all the changes made since the last COMMIT.
SPUFI processing for the current input file will terminate immediately
after the COMMIT or ROLLBACK is executed.

1  NEXT ACTION ===>             (Enter COMMIT or ROLLBACK)

PRESS: ENTER to process   HELP for more information
If you execute an SQL statement through SPUFI that runs longer than the warning time limit for predictive governing, SPUFI displays a panel that lets you tell DB2 to continue executing that statement, or stop processing that statement and continue to the next statement in the SPUFI input data set. That panel is shown in Figure 8.
DSNESP05           SQL STATEMENT RESOURCE LIMIT EXCEEDED        SSID: DSN
===>
The following SQL statement has encountered an SQLCODE of 495:

   Statement text

You can now either CONTINUE executing this statement or BYPASS the
execution of this statement.
SPUFI processing for the current input file will continue after the
CONTINUE or BYPASS processing is completed.

1  NEXT ACTION ===>             (Enter CONTINUE or BYPASS)

PRESS: ENTER to process   HELP for more information
For information on the DB2 governor and how to set error and warning time limits, see Part 5 (Volume 2) of DB2 Administration Guide.
At the end of the data set are summary statistics that describe the processing of the input data set as a whole. For SELECT statements executed with SPUFI, the message SQLCODE IS 100 indicates an error-free result. If the message SQLCODE IS 100 is the only result, DB2 is unable to find any rows that satisfy the condition specified in the statement. For all other types of SQL statements executed with SPUFI, the message SQLCODE IS 0 indicates an error-free result.
BROWSE-- userid.RESULT ----------------------------------- COLUMNS 001 072
COMMAND INPUT ===>                                          SCROLL ===> PAGE
--------+---------+---------+---------+---------+---------+---------+---------+
SELECT LASTNAME, FIRSTNME, PHONENO                                   00010000
  FROM DSN8810.EMP                                                   00020000
  WHERE WORKDEPT = 'D11'                                             00030000
  ORDER BY LASTNAME;                                                 00040000
---------+---------+---------+---------+---------+---------+---------+---------+
LASTNAME          FIRSTNME      PHONENO
ADAMSON           BRUCE         4510
BROWN             DAVID         4501
JOHN              REBA          0672
JONES             WILLIAM       0942
LUTZ              JENNIFER      0672
PIANKA            ELIZABETH     3782
SCOUTTEN          MARILYN       1682
STERN             IRVING        6423
WALKER            JAMES         2986
YAMAMOTO          KIYOSHI       2890
YOSHIMURA         MASATOSHI     2890
DSNE610I NUMBER OF ROWS DISPLAYED IS 11
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 100
---------+---------+---------+---------+---------+---------+---------+---------+
DSNE617I COMMIT PERFORMED, SQLCODE IS 0
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 0
---------+---------+---------+---------+---------+---------+---------+---------+
DSNE601I SQL STATEMENTS ASSUMED TO BE BETWEEN COLUMNS 1 AND 72
DSNE620I NUMBER OF SQL STATEMENTS PROCESSED IS 1
DSNE621I NUMBER OF INPUT RECORDS READ IS 4
DSNE622I NUMBER OF OUTPUT RECORDS WRITTEN IS 30
v A DBCLOB column value displays in the same way as a VARGRAPHIC column value.
v A heading identifies each selected column, and repeats at the top of each output page. The contents of the heading depend on the value you specified in field COLUMN HEADING of the CURRENT SPUFI DEFAULTS panel.
Part 2. Coding SQL in your host application program
In addition to these basic requirements, you should also consider the following special topics:
v Cursors
  Chapter 7, Using a cursor to retrieve a set of rows, on page 103 discusses how to use a cursor in your application program to select a set of rows and then process the set either one row at a time or one rowset at a time.
v DCLGEN
  Chapter 8, Generating declarations for your tables using DCLGEN, on page 131 discusses how to use DB2's declarations generator, DCLGEN, to obtain accurate SQL DECLARE statements for tables and views.

This section includes information about using SQL in application programs written in assembler, C, C++, COBOL, Fortran, PL/I, and REXX.
For REXX, precede the statement with EXECSQL. If the statement is in a literal string, enclose it in single or double quotation marks.
Example: Use EXEC SQL and END-EXEC. to delimit an SQL statement in a COBOL program:
EXEC SQL
  an SQL statement
END-EXEC.
As an alternative to coding the DECLARE statement yourself, you can use DCLGEN, the declarations generator that is supplied with DB2. For more information about using DCLGEN, see Chapter 8, Generating declarations for your tables using DCLGEN, on page 131. When you declare a table or view that contains a column with a distinct type, declare that column with the source type of the distinct type, rather than with the distinct type itself. When you declare the column with the source type, DB2 can check embedded SQL statements that reference that column at precompile time.
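For example, suppose that a hypothetical table MYTABLE has a column TOTAL that was created with a distinct type named MONEY, whose source type is DECIMAL(9,2). A sketch of the corresponding table declaration uses the source type for that column:
DECLARE MYTABLE TABLE
  (ITEMNO  CHAR(6)       NOT NULL,
   TOTAL   DECIMAL(9,2)          )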
v Retrieve data into the host variable for your application program's use
v Place data into the host variable to insert into a table or to change the contents of a row
v Use the data in the host variable when evaluating a WHERE or HAVING clause
v Assign the value that is in the host variable to a special register, such as CURRENT SQLID and CURRENT DEGREE
v Insert null values in columns using a host indicator variable that contains a negative value
v Use the data in the host variable in statements that process dynamic SQL, such as EXECUTE, PREPARE, and OPEN

A host variable array is a data array that is declared in the host language for use within an SQL statement. Using host variable arrays, you can:
v Retrieve data into host variable arrays for your application program's use
v Place data into host variable arrays to insert rows into a table

A host structure is a group of host variables that is referred to by a single name. You can use host structures in all host languages except REXX. Host structures are defined by statements of the host language. You can refer to a host structure in any context where you would refer to the list of host variables in the structure. A host structure reference is equivalent to a reference to each of the host variables within the structure in the order in which they are defined in the structure declaration.

This section describes:
v Using host variables
v Using host variable arrays on page 86
v Using host structures on page 90
v Fortran: Declaring host variables on page 223
v PL/I: Declaring host variables on page 234
v REXX: Using REXX host variables and data types on page 254

This section describes the following ways to use host variables:
v Retrieving a single row of data into host variables
v Updating data using values in host variables on page 82
v Inserting data from column values that use host variables on page 83
v Using indicator variables with host variables on page 83
v Assignments and comparisons using different data types on page 85
v Changing the coded character set ID of host variables on page 85
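Example: Retrieving a single row of data into host variables: A sketch of such a statement follows (only the SQL portion is shown; in a COBOL program, a MOVE statement would first assign the employee number to the host variable CBLEMPNO):
SELECT LASTNAME, WORKDEPT
  INTO :CBLNAME, :CBLDEPT
  FROM DSN8810.EMP
  WHERE EMPNO = :CBLEMPNO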
Note that the host variable CBLEMPNO is preceded by a colon (:) in the SQL statement, but it is not preceded by a colon in the COBOL MOVE statement. In the DATA DIVISION section of a COBOL program, you must declare the host variables CBLEMPNO, CBLNAME, and CBLDEPT to be compatible with the data types in the columns EMPNO, LASTNAME, and WORKDEPT of the DSN8810.EMP table. You can use a host variable to specify a value in a search condition. For this example, you have defined a host variable CBLEMPNO for the employee number, so that you can retrieve the name and the work department of the employee whose number is the same as the value of the host variable, CBLEMPNO; in this case, 000110. If the SELECT ... INTO statement returns more than one row, an error occurs, and any data that is returned is undefined and unpredictable.
To prevent undefined and unpredictable data from being returned, you can use the FETCH FIRST 1 ROW ONLY clause to ensure that only one row is returned. For example:
EXEC SQL
  SELECT LASTNAME, WORKDEPT
    INTO :CBLNAME, :CBLDEPT
    FROM DSN8810.EMP
    FETCH FIRST 1 ROW ONLY
END-EXEC.
You can include an ORDER BY clause in the preceding example. This gives your application some control over which row is returned when you use a FETCH FIRST 1 ROW ONLY clause in a SELECT INTO statement.
EXEC SQL
  SELECT LASTNAME, WORKDEPT
    INTO :CBLNAME, :CBLDEPT
    FROM DSN8810.EMP
    ORDER BY LASTNAME
    FETCH FIRST 1 ROW ONLY
END-EXEC.
When you specify both the ORDER BY clause and the FETCH FIRST clause, ordering is performed on the entire result set before the first row is returned; the ORDER BY clause therefore determines which row is returned.

Example: Specifying expressions in the SELECT clause: When you specify a list of items in the SELECT clause, you can use more than the column names of tables and views. You can request a set of column values mixed with host variable values and constants. For example:
MOVE 4476 TO RAISE.
MOVE '000220' TO PERSON.
EXEC SQL
  SELECT EMPNO, LASTNAME, SALARY, :RAISE, SALARY + :RAISE
    INTO :EMP-NUM, :PERSON-NAME, :EMP-SAL, :EMP-RAISE, :EMP-TTL
    FROM DSN8810.EMP
    WHERE EMPNO = :PERSON
END-EXEC.
The following results have column headings that represent the names of the host variables:
EMP-NUM  PERSON-NAME  EMP-SAL  EMP-RAISE  EMP-TTL
=======  ===========  =======  =========  =======
000220   LUTZ         29840    4476       34316
Example: Specifying summary values in the SELECT clause: You can request summary values to be returned from aggregate functions. For example:
MOVE 'D11' TO DEPTID.
EXEC SQL
  SELECT WORKDEPT, AVG(SALARY)
    INTO :WORK-DEPT, :AVG-SALARY
    FROM DSN8810.EMP
    WHERE WORKDEPT = :DEPTID
END-EXEC.
Example: Updating a single row: The following example changes an employee's phone number:
MOVE '4246' TO NEWPHONE.
MOVE '000110' TO EMPID.
EXEC SQL
  UPDATE DSN8810.EMP
    SET PHONENO = :NEWPHONE
    WHERE EMPNO = :EMPID
END-EXEC.
Example: Updating multiple rows: The following example gives the employees in a particular department a salary increase of 10%:
MOVE 'D11' TO DEPTID.
EXEC SQL
  UPDATE DSN8810.EMP
    SET SALARY = 1.10 * SALARY
    WHERE WORKDEPT = :DEPTID
END-EXEC.
v If the indicator variable contains a positive integer, the retrieved value is truncated, and the integer is the original length of the string.
v If the value of the indicator variable is zero, the column value is nonnull. If the column value is a character string, the retrieved value is not truncated.

An error occurs if you do not use an indicator variable and DB2 retrieves a null value.

You can specify an indicator variable, preceded by a colon, immediately after the host variable. Optionally, you can use the word INDICATOR between the host variable and its indicator variable. Thus, the following two examples are equivalent:
EXEC SQL
  SELECT PHONENO
    INTO :CBLPHONE:INDNULL
    FROM DSN8810.EMP
    WHERE EMPNO = :EMPID
END-EXEC.

EXEC SQL
  SELECT PHONENO
    INTO :CBLPHONE INDICATOR :INDNULL
    FROM DSN8810.EMP
    WHERE EMPNO = :EMPID
END-EXEC.
You can then test INDNULL for a negative value. If it is negative, the corresponding value of PHONENO is null, and you can disregard the contents of CBLPHONE. When you use a cursor to fetch a column value, you can use the same technique to determine whether the column value is null. Inserting null values into columns by using host variable indicators: You can use an indicator variable to insert a null value from a host variable into a column. When DB2 processes INSERT and UPDATE statements, it checks the indicator variable (if one exists). If the indicator variable is negative, the column value is null. If the indicator variable is greater than -1, the associated host variable contains a value for the column. For example, suppose your program reads an employee ID and a new phone number, and must update the employee table with the new number. The new number could be missing if the old number is incorrect, but a new number is not yet available. If the new value for column PHONENO might be null, you can use an indicator variable in the UPDATE statement. For example:
EXEC SQL
  UPDATE DSN8810.EMP
    SET PHONENO = :NEWPHONE:PHONEIND
    WHERE EMPNO = :EMPID
END-EXEC.
When NEWPHONE contains a non-null value, set PHONEIND to zero by preceding the UPDATE statement with the following line:
MOVE 0 TO PHONEIND.
When NEWPHONE contains a null value, set PHONEIND to a negative value by preceding the UPDATE statement with the following line:
MOVE -1 TO PHONEIND.
Testing for a null column value: You cannot determine whether a column value is null by comparing it to a host variable with an indicator variable that is set to -1.
To test whether a column has a null value, use the IS NULL predicate or the IS DISTINCT FROM predicate. For example, the following code does not select the employees who have no phone number:
MOVE -1 TO PHONE-IND.
EXEC SQL
  SELECT LASTNAME
    INTO :PGM-LASTNAME
    FROM DSN8810.EMP
    WHERE PHONENO = :PHONE-HV:PHONE-IND
END-EXEC.
You can use the IS NULL predicate to select employees who have no phone number, as in the following statement:
EXEC SQL
  SELECT LASTNAME
    INTO :PGM-LASTNAME
    FROM DSN8810.EMP
    WHERE PHONENO IS NULL
END-EXEC.
To select employees whose phone numbers are equal to the value of :PHONE-HV and employees who have no phone number (as in the second example), you would need to code two predicates, one to handle the non-null values and another to handle the null values, as in the following statement:
EXEC SQL
  SELECT LASTNAME
    INTO :PGM-LASTNAME
    FROM DSN8810.EMP
    WHERE (PHONENO = :PHONE-HV AND PHONENO IS NOT NULL AND :PHONE-HV IS NOT NULL)
       OR (PHONENO IS NULL AND :PHONE-HV:PHONE-IND IS NULL)
END-EXEC.
You can simplify the preceding example by coding the statement using the NOT form of the IS DISTINCT FROM predicate, as in the following statement:
EXEC SQL
  SELECT LASTNAME
    INTO :PGM-LASTNAME
    FROM DSN8810.EMP
    WHERE PHONENO IS NOT DISTINCT FROM :PHONE-HV:PHONE-IND
END-EXEC.
v When you retrieve data from a local or remote table into the host variable, the retrieved data is converted to the CCSID and encoding scheme that are assigned by the DECLARE VARIABLE statement. You can use the DECLARE VARIABLE statement in static or dynamic SQL applications. However, you cannot use the DECLARE VARIABLE statement to control the CCSID and encoding scheme of data that you retrieve or update using an SQLDA. See Changing the CCSID for retrieved data on page 619 for information on changing the CCSID in an SQLDA. When you use a DECLARE VARIABLE statement in a program, put the DECLARE VARIABLE statement after the corresponding host variable declaration and before your first reference to that host variable. Example: Using a DECLARE VARIABLE statement to change the encoding scheme of retrieved data: Suppose that you are writing a C program that runs on a DB2 UDB for z/OS subsystem. The subsystem has an EBCDIC application encoding scheme. The C program retrieves data from the following columns of a local table that is defined with CCSID UNICODE.
PARTNUM  CHAR(10)
JPNNAME  GRAPHIC(10)
ENGNAME  VARCHAR(30)
Because the application encoding scheme for the subsystem is EBCDIC, the retrieved data is EBCDIC. To make the retrieved data Unicode, use DECLARE VARIABLE statements to specify that the data that is retrieved from these columns is encoded in the default Unicode CCSIDs for the subsystem. Suppose that you want to retrieve the character data in Unicode CCSID 1208 and the graphic data in Unicode CCSID 1200. Use DECLARE VARIABLE statements like these:
EXEC SQL BEGIN DECLARE SECTION;
  char hvpartnum[11];
  EXEC SQL DECLARE :hvpartnum VARIABLE CCSID 1208;
  sqldbchar hvjpnname[11];
  EXEC SQL DECLARE :hvjpnname VARIABLE CCSID 1200;
  struct {
    short len;
    char d[30];
  } hvengname;
  EXEC SQL DECLARE :hvengname VARIABLE CCSID 1208;
EXEC SQL END DECLARE SECTION;
The BEGIN DECLARE SECTION and END DECLARE SECTION statements mark the beginning and end of a host variable declare section.
Assembler support for the multiple-row FETCH and INSERT statements is limited to the FETCH statement with the USING DESCRIPTOR clause and the dynamic INSERT statement using EXECUTE USING DESCRIPTOR. The DB2 precompiler does not recognize declarations of host variable arrays for assembler; it recognizes these declarations only in C, COBOL, and PL/I.

This section describes the following ways to use host variable arrays:
v Retrieving multiple rows of data into host variable arrays
v Inserting multiple rows of data from host variable arrays
v Using indicator variable arrays with host variable arrays
Assume that the host variable arrays HVA1, HVA2, and HVA3 have been declared and populated with the values that are to be inserted into the ACTNO, ACTKWD, and ACTDESC columns. The NUM-ROWS host variable specifies the number of rows that are to be inserted, which must be less than or equal to the dimension of each host variable array.
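A sketch of the multiple-row INSERT statement that this description refers to follows (only the SQL portion is shown):
INSERT INTO DSN8810.ACT
    (ACTNO, ACTKWD, ACTDESC)
  VALUES (:HVA1, :HVA2, :HVA3)
  FOR :NUM-ROWS ROWS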
-3   DB2 returns a null value because a hole was detected for the corresponding row during a multiple-row FETCH operation.
For information about the multiple-row FETCH operation, see Step 4: Execute SQL statements with a rowset cursor on page 109. For information about holes in the result table of a cursor, see Holes in the result table of a scrollable cursor on page 119. Specifying an indicator array: You can specify an indicator variable array, preceded by a colon, immediately after the host variable array. Optionally, you can use the word INDICATOR between the host variable array and its indicator variable array. Example: Suppose that you declare a scrollable rowset cursor by using the following statement:
EXEC SQL
  DECLARE CURS1 SCROLL CURSOR WITH ROWSET POSITIONING FOR
    SELECT PHONENO
      FROM DSN8810.EMP
END-EXEC.
For information about using rowset cursors, see Accessing data by using a rowset-positioned cursor on page 108. The following two specifications of indicator arrays in the multiple-row FETCH statement are equivalent:
EXEC SQL
  FETCH NEXT ROWSET CURS1 FOR 10 ROWS
    INTO :CBLPHONE :INDNULL
END-EXEC.

EXEC SQL
  FETCH NEXT ROWSET CURS1 FOR 10 ROWS
    INTO :CBLPHONE INDICATOR :INDNULL
END-EXEC.
After the multiple-row FETCH statement, you can test each element of the INDNULL array for a negative value. If an element is negative, you can disregard the contents of the corresponding element in the CBLPHONE host variable array. Inserting null values by using indicator arrays: You can use a negative value in an indicator array to insert a null value into a column. Example: Assume that host variable arrays hva1 and hva2 have been populated with values that are to be inserted into the ACTNO and ACTKWD columns. Assume the ACTDESC column allows nulls. To set the ACTDESC column to null, assign -1 to the elements in its indicator array:
/* Initialize each indicator array */
for (i=0; i<10; i++) {
  ind1[i] = 0;
  ind2[i] = 0;
  ind3[i] = -1;
}
EXEC SQL
  INSERT INTO DSN8810.ACT
    (ACTNO, ACTKWD, ACTDESC)
    VALUES (:hva1:ind1, :hva2:ind2, :hva3:ind3)
    FOR 10 ROWS;
DB2 ignores the values in the hva3 array and assigns null to the ACTDESC column for the 10 rows that are inserted.

Identifying errors during output host variable processing: Output host variable processing is the process of moving data that is retrieved from DB2 (such as from a FETCH) to an application. Errors that occur while processing output host variables do not affect the position of the cursor, and are usually caused by a problem in converting from one data type to another.

Example: Suppose that an integer value of 32768 is fetched into a smallint host variable. The conversion might cause an error if you provide insufficient conversion information to DB2.

If an indicator variable is provided during output host variable processing or if data type conversion is not required, a positive SQLCODE is returned for the row in most cases. In other cases where data conversion problems occur, a negative SQLCODE is returned for that row. Regardless of the SQLCODE for the row, no new values are assigned to the host variable or to subsequent variables for that row. Any values that are already assigned to variables remain assigned. Even when a negative SQLCODE is returned for a row, statement processing continues and a positive SQLCODE is returned for the statement (SQLSTATE 01668, SQLCODE +354). To determine which rows cause errors when SQLCODE = +354, you can use GET DIAGNOSTICS.

Example: Suppose that no indicator variables are provided for values that are returned by the following statement:
FETCH FIRST ROWSET FROM C1 FOR 10 ROWS INTO :hva_col1, :hva_col2;
For each row with an error, a negative SQLCODE is recorded and processing continues until the 10 rows are fetched. When SQLCODE = +354 is returned for the statement, you can use GET DIAGNOSTICS to determine which errors occur for which rows. The following statement returns num_rows = 10 and num_cond = 3:
GET DIAGNOSTICS :num_rows = ROW_COUNT, :num_cond = NUMBER;
To determine the information that is associated with each condition, you can execute a GET DIAGNOSTICS CONDITION statement for each condition.

Statement A

GET DIAGNOSTICS CONDITION 1 :sqlstate = RETURNED_SQLSTATE, :sqlcode = DB2_RETURNED_SQLCODE, :row_num = DB2_ROW_NUMBER;

Output A
sqlstate = 22003 sqlcode = -304 row_num = 5
Statement B
GET DIAGNOSTICS CONDITION 2 :sqlstate = RETURNED_SQLSTATE, :sqlcode = DB2_RETURNED_SQLCODE, :row_num = DB2_ROW_NUMBER;
Output B
sqlstate = 22003 sqlcode = -802 row_num = 7
Statement C
GET DIAGNOSTICS CONDITION 3 :sqlstate = RETURNED_SQLSTATE, :sqlcode = DB2_RETURNED_SQLCODE, :row_num = DB2_ROW_NUMBER;
Output C
sqlstate = 01668 sqlcode = +354 row_num = 0
The fifth row has a data mapping error (-304) for column 1, and the seventh row has a data mapping error (-802) for column 2. These rows do not contain valid data, and they should not be used.
If you want to avoid listing host variables, you can substitute the name of a structure, say :PEMP, that contains :EMPNO, :FIRSTNME, :MIDINIT, :LASTNAME, and :WORKDEPT. The example then reads:
EXEC SQL SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT INTO :PEMP FROM DSN8810.VEMP WHERE EMPNO = :EMPID END-EXEC.
You can declare a host structure yourself, or you can use DCLGEN to generate a COBOL record description, PL/I structure declaration, or C structure declaration that corresponds to the columns of a table. For more detailed information about coding a host structure in your program, see Chapter 9, Embedding SQL statements in host languages, on page 143. For more information about using DCLGEN and the restrictions that apply to the C language, see Chapter 8, Generating declarations for your tables using DCLGEN, on page 131.
01 INDICATOR-TABLE.
   02 EMP-IND PIC S9(4) COMP OCCURS 6 TIMES.
   . . .
   MOVE '000230' TO EMPNO.
   . . .
   EXEC SQL
     SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, BIRTHDATE
       INTO :PEMP-ROW:EMP-IND
       FROM DSN8810.EMP
       WHERE EMPNO = :EMPNO
   END-EXEC.
In this example, EMP-IND is an array containing six values, which you can test for negative values. If, for example, EMP-IND(6) contains a negative value, the corresponding host variable in the host structure (EMP-BIRTHDATE) contains a null value. Because this example selects rows from the table DSN8810.EMP, some of the values in EMP-IND are always zero. The first four columns of each row are defined NOT NULL. In the preceding example, DB2 selects the values for a row of data into a host structure. You must use a corresponding structure for the indicator variables to determine which (if any) selected column values are null. For information on using the IS NULL keyword phrase in WHERE clauses, see Selecting rows using search conditions: WHERE on page 8.
v When DB2 processes a FETCH statement, and the FETCH is successful, the contents of SQLERRD(3) in the SQLCA is set to the number of returned rows.
v When DB2 processes a multiple-row FETCH statement, the contents of SQLCODE is set to +100 if the last row in the table has been returned with the set of rows. For details, see Accessing data by using a rowset-positioned cursor on page 108.
v If SQLWARN0 contains W, DB2 has set at least one of the SQL warning flags (SQLWARN1 through SQLWARNA):
  - SQLWARN1 contains N for non-scrollable cursors and S for scrollable cursors after an OPEN CURSOR or ALLOCATE CURSOR statement.
  - SQLWARN4 contains I for insensitive scrollable cursors, S for sensitive static scrollable cursors, and D for sensitive dynamic scrollable cursors after an OPEN CURSOR or ALLOCATE CURSOR statement, or blank if the cursor is not scrollable.
  - SQLWARN5 contains a character value of 1 (read only), 2 (read and delete), or 4 (read, delete, and update) to indicate the operation that is allowed on the result table of the cursor.
See Appendix D of DB2 SQL Reference for a description of all the fields in the SQLCA.
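For illustration, the following C fragment checks these SQLCA fields after an OPEN and a multiple-row FETCH; this is only a sketch, in which the rowset cursor C1 and the host variable arrays hva_col1 and hva_col2 are assumptions, and the array subscripts follow the usual C mapping of the SQLCA (SQLWARN1 is sqlca.sqlwarn[1], SQLERRD(3) is sqlca.sqlerrd[2]).

EXEC SQL INCLUDE SQLCA;                    /* brings in the sqlca structure         */
   ...
EXEC SQL OPEN C1;                          /* C1 is an assumed rowset cursor        */
if (sqlca.sqlwarn[0] == 'W') {             /* SQLWARN0: at least one warning flag   */
  if (sqlca.sqlwarn[1] == 'S')             /* SQLWARN1: S = scrollable, N = not     */
    printf("C1 is scrollable\n");
  printf("Sensitivity (SQLWARN4) = %c, allowed operations (SQLWARN5) = %c\n",
         sqlca.sqlwarn[4], sqlca.sqlwarn[5]);
}
EXEC SQL FETCH NEXT ROWSET FROM C1 FOR 10 ROWS
  INTO :hva_col1, :hva_col2;               /* assumed host variable arrays          */
printf("Rows returned by this FETCH = %d\n", sqlca.sqlerrd[2]);   /* SQLERRD(3)     */
if (sqlca.sqlcode == 100)
  printf("The last row of the result table has been returned\n");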
The condition of the WHENEVER statement is one of these three values:
SQLWARNING
  Indicates what to do when SQLWARN0 = W or SQLCODE contains a positive value other than 100. DB2 can set SQLWARN0 for several reasons; for example, if a column value is truncated when it is moved into a host variable. Your program might not regard this as an error.
SQLERROR
  Indicates what to do when DB2 returns an error code as the result of an SQL statement (SQLCODE < 0).
NOT FOUND
  Indicates what to do when DB2 cannot find a row to satisfy your SQL statement or when there are no more rows to fetch (SQLCODE = 100).
The action of the WHENEVER statement is one of these two values:
CONTINUE
  Specifies the next sequential statement of the source program.
GOTO or GO TO host-label
  Specifies the statement identified by host-label. For host-label, substitute a single token, preceded by an optional colon. The form of the token depends on the host language. In COBOL, for example, it can be a section name or an unqualified paragraph name.
The WHENEVER statement must precede the first SQL statement it is to affect. However, if your program checks SQLCODE directly, you must check SQLCODE after each SQL statement.
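As an illustration, a C program fragment might combine these conditions and actions as in the following sketch; the cursor C1, the host variable hv_empno, and the labels are assumptions rather than names from this manual.

EXEC SQL WHENEVER SQLWARNING CONTINUE;         /* ignore warnings; test SQLCODE yourself */
EXEC SQL WHENEVER SQLERROR GOTO sql_error;     /* branch on any negative SQLCODE         */
EXEC SQL WHENEVER NOT FOUND GOTO no_more_rows; /* branch when SQLCODE = +100             */

EXEC SQL OPEN C1;
for (;;) {
  EXEC SQL FETCH C1 INTO :hv_empno;            /* NOT FOUND branches to no_more_rows     */
  /* process the row here */
}
no_more_rows:
  EXEC SQL CLOSE C1;
  return 0;
sql_error:
  printf("SQL error, SQLCODE = %d\n", sqlca.sqlcode);
  return 8;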
For rows in which a conversion or arithmetic expression error does occur, the indicator variable indicates that one or more selected items have no meaningful value. The indicator variable flags this error with a -2 for the affected host variable and an SQLCODE of +802 (SQLSTATE 01519) in the SQLCA.
In Figure 10, the first GET DIAGNOSTICS statement returns the number of rows inserted and the number of conditions returned. The second GET DIAGNOSTICS statement returns the following items for each condition: SQLCODE, SQLSTATE, and the number of the row (in the rowset that was being inserted) for which the condition occurred.
EXEC SQL BEGIN DECLARE SECTION;
  long row_count, num_condns, i;
  long ret_sqlcode, row_num;
  char ret_sqlstate[6];
  ...
EXEC SQL END DECLARE SECTION;
...
EXEC SQL INSERT INTO DSN8810.ACT
  (ACTNO, ACTKWD, ACTDESC)
  VALUES (:hva1, :hva2, :hva3)
  FOR 10 ROWS
  NOT ATOMIC CONTINUE ON SQLEXCEPTION;

EXEC SQL GET DIAGNOSTICS
  :row_count = ROW_COUNT, :num_condns = NUMBER;
printf("Number of rows inserted = %d\n", row_count);

for (i=1; i<=num_condns; i++) {
  EXEC SQL GET DIAGNOSTICS CONDITION :i
    :ret_sqlcode = DB2_RETURNED_SQLCODE,
    :ret_sqlstate = RETURNED_SQLSTATE,
    :row_num = DB2_ROW_NUMBER;
  printf("SQLCODE = %d, SQLSTATE = %s, ROW NUMBER = %d\n",
         ret_sqlcode, ret_sqlstate, row_num);
}

Figure 10. Using GET DIAGNOSTICS to return the number of rows and conditions returned and condition information
In the activity table, the ACTNO column is defined as SMALLINT. Suppose that you declare the host variable array hva1 as an array with data type long, and you populate the array so that the value for the fourth element is 32768. If you check the SQLCA values after the INSERT statement, the value of SQLCODE is equal to 0, the value of SQLSTATE is 00000, and the value of SQLERRD(3) is 9 for the number of rows that were inserted. However, the INSERT statement specified that 10 rows were to be inserted. The GET DIAGNOSTICS statement provides you with the information that you need to correct the data for the row that was not inserted. The printed output from your program looks like this:
Number of rows inserted = 9 SQLCODE = -302, SQLSTATE = 22003, ROW NUMBER = 4
The value 32768 for the input variable is too large for the target column ACTNO. You can print the MESSAGE_TEXT condition item, or see DB2 Codes for information about SQLCODE -302.
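For example, a C fragment that retrieves the message text for the first condition might look like the following sketch; the varying-length host variable msg_text is an assumption made for illustration.

EXEC SQL BEGIN DECLARE SECTION;
  struct { short len; char data[1024]; } msg_text;   /* varying-length character host variable */
EXEC SQL END DECLARE SECTION;
   ...
EXEC SQL GET DIAGNOSTICS CONDITION 1
  :msg_text = MESSAGE_TEXT;
printf("%.*s\n", msg_text.len, msg_text.data);       /* print the returned message text */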
Table 4. Data types for GET DIAGNOSTICS items that return statement information

DB2_GET_DIAGNOSTICS_DIAGNOSTICS (VARCHAR(32672))
  After a GET DIAGNOSTICS statement, if any error or warning occurred, this item contains all of the diagnostics as a single string.
DB2_LAST_ROW (INTEGER)
  After a multiple-row FETCH statement, this item contains a value of +100 if the last row in the table is in the rowset that was returned.
DB2_NUMBER_PARAMETER_MARKERS (INTEGER)
  After a PREPARE statement, this item contains the number of parameter markers in the prepared statement.
DB2_NUMBER_RESULT_SETS (INTEGER)
  After a CALL statement that invokes a stored procedure, this item contains the number of result sets that are returned by the procedure.
DB2_NUMBER_ROWS (DECIMAL(31,0))
  After an OPEN or FETCH statement for which the size of the result table is known, this item contains the number of rows in the result table. After a PREPARE statement, this item contains the estimated number of rows in the result table for the prepared statement. For SENSITIVE DYNAMIC cursors, this item contains the approximate number of rows.
DB2_RETURN_STATUS (INTEGER)
  After a CALL statement that invokes an SQL procedure, this item contains the return status if the procedure contains a RETURN statement.
DB2_SQL_ATTR_CURSOR_HOLD (CHAR(1))
  After an ALLOCATE or OPEN statement, this item indicates whether the cursor can be held open across multiple units of work (Y or N).
DB2_SQL_ATTR_CURSOR_ROWSET (CHAR(1))
  After an ALLOCATE or OPEN statement, this item indicates whether the cursor can use rowset positioning (Y or N).
DB2_SQL_ATTR_CURSOR_SCROLLABLE (CHAR(1))
  After an ALLOCATE or OPEN statement, this item indicates whether the cursor is scrollable (Y or N).
DB2_SQL_ATTR_CURSOR_SENSITIVITY (CHAR(1))
  After an ALLOCATE or OPEN statement, this item indicates whether the cursor shows updates made by other processes (sensitivity A, I, or S).
DB2_SQL_ATTR_CURSOR_TYPE (CHAR(1))
  After an ALLOCATE or OPEN statement, this item indicates whether the cursor is declared static (S for INSENSITIVE or SENSITIVE STATIC) or dynamic (D for SENSITIVE DYNAMIC).
MORE (CHAR(1))
  After any SQL statement, this item indicates whether some condition items were discarded because of insufficient storage (Y or N).
Table 4. Data types for GET DIAGNOSTICS items that return statement information (continued)

NUMBER (INTEGER)
  After any SQL statement, this item contains the number of condition items. If no warning or error occurred, or if no previous SQL statement has been executed, the number that is returned is 1.
ROW_COUNT (DECIMAL(31,0))
  After DELETE, INSERT, UPDATE, or FETCH, this item contains the number of rows that are deleted, inserted, updated, or fetched. After PREPARE, this item contains the estimated number of result rows in the prepared statement.
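As a brief illustration of the statement items, the following C fragment retrieves ROW_COUNT, NUMBER, and DB2_LAST_ROW after a multiple-row FETCH; this is only a sketch, in which the rowset cursor C1 and the host variable arrays are assumptions.

EXEC SQL BEGIN DECLARE SECTION;
  long row_count, num_conds, last_row;
EXEC SQL END DECLARE SECTION;
   ...
EXEC SQL FETCH NEXT ROWSET FROM C1 FOR 10 ROWS
  INTO :hva_col1, :hva_col2;
/* ROW_COUNT = rows fetched, NUMBER = number of condition items,   */
/* DB2_LAST_ROW = +100 if this rowset contains the last row        */
EXEC SQL GET DIAGNOSTICS
  :row_count = ROW_COUNT,
  :num_conds = NUMBER,
  :last_row  = DB2_LAST_ROW;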
Table 5. Data types for GET DIAGNOSTICS items that return condition information

CATALOG_NAME (VARCHAR(128))
  This item contains the server name of the table that owns a constraint that caused an error, or that caused an access rule or check violation.
CONDITION_NUMBER (INTEGER)
  This item contains the number of the condition.
CURSOR_NAME (VARCHAR(128))
  This item contains the name of a cursor in an invalid cursor state.
DB2_ERROR_CODE1 (INTEGER)
  This item contains an internal error code.
DB2_ERROR_CODE2 (INTEGER)
  This item contains an internal error code.
DB2_ERROR_CODE3 (INTEGER)
  This item contains an internal error code.
DB2_ERROR_CODE4 (INTEGER)
  This item contains an internal error code.
DB2_INTERNAL_ERROR_POINTER (INTEGER)
  For some errors, this item contains a negative value that is an internal error pointer.
DB2_MESSAGE_ID (CHAR(10))
  This item contains the message ID that corresponds to the message that is contained in the MESSAGE_TEXT diagnostic item.
DB2_MODULE_DETECTING_ERROR (CHAR(8))
  After any SQL statement, this item indicates which module detected the error.
DB2_ORDINAL_TOKEN_n (VARCHAR(515))
  After any SQL statement, this item contains the nth token, where n is a value from 1 to 100.
DB2_REASON_CODE (INTEGER)
  After any SQL statement, this item contains the reason code for errors that have a reason code token in the message text.
DB2_RETURNED_SQLCODE (INTEGER)
  After any SQL statement, this item contains the SQLCODE for the condition.
DB2_ROW_NUMBER (DECIMAL(31,0))
  After any SQL statement that involves multiple rows, this item contains the row number on which DB2 detected the condition.
DB2_TOKEN_COUNT (INTEGER)
  After any SQL statement, this item contains the number of tokens available for the condition.
Table 5. Data types for GET DIAGNOSTICS items that return condition information (continued)

MESSAGE_TEXT (VARCHAR(32672))
  After any SQL statement, this item contains the message text associated with the SQLCODE.
RETURNED_SQLSTATE (CHAR(5))
  After any SQL statement, this item contains the SQLSTATE for the condition.
SERVER_NAME (VARCHAR(128))
  After a CONNECT, DISCONNECT, or SET CONNECTION statement, this item contains the name of the server specified in the statement.

Table 6. Data types for GET DIAGNOSTICS items that return connection information

DB2_AUTHENTICATION_TYPE (CHAR(1))
  This item contains the authentication type (S, C, D, E, or blank). For more information, see Chapter 5 of DB2 SQL Reference.
DB2_AUTHORIZATION_ID (VARCHAR(128))
  This item contains the authorization ID that is used by the connected server.
DB2_CONNECTION_STATE (INTEGER)
  This item indicates whether the connection is unconnected (-1), local (0), or remote (1).
DB2_CONNECTION_STATUS (INTEGER)
  This item indicates whether updates can be committed for the current unit of work (1 for Yes, 2 for No).
DB2_ENCRYPTION_TYPE (CHAR(1))
  This item contains one of the following values that indicates the level of encryption for the connection:
  A  Only the authentication tokens (authid and password) are encrypted
  D  All of the data for the connection is encrypted
DB2_SERVER_CLASS_NAME (VARCHAR(128))
  After a CONNECT or SET CONNECTION statement, this item contains the DB2 server class name.
DB2_PRODUCT_ID (VARCHAR(8))
  This item contains the DB2 product signature.
For a complete description of the GET DIAGNOSTICS items, see Chapter 5 of DB2 SQL Reference.
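As an illustration of the cursor-attribute statement items in Table 4, the following C fragment retrieves them after an OPEN; this is a sketch in which the cursor C1 and the single-character host variables are assumptions.

EXEC SQL BEGIN DECLARE SECTION;
  char attr_hold, attr_rowset, attr_scroll, attr_sens, attr_type;
EXEC SQL END DECLARE SECTION;
   ...
EXEC SQL OPEN C1;
EXEC SQL GET DIAGNOSTICS
  :attr_hold   = DB2_SQL_ATTR_CURSOR_HOLD,
  :attr_rowset = DB2_SQL_ATTR_CURSOR_ROWSET,
  :attr_scroll = DB2_SQL_ATTR_CURSOR_SCROLLABLE,
  :attr_sens   = DB2_SQL_ATTR_CURSOR_SENSITIVITY,
  :attr_type   = DB2_SQL_ATTR_CURSOR_TYPE;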
v For Assembler programs, see page 157
v For C programs, see page 184
v For COBOL programs, see page 217
v For Fortran programs, see page 229
v For PL/I programs, see page 247

DSNTIAR takes data from the SQLCA, formats it into a message, and places the result in a message output area that you provide in your application program. Each time you use DSNTIAR, it overwrites any previous messages in the message output area. You should move or print the messages before using DSNTIAR again, and before the contents of the SQLCA change, to get an accurate view of the SQLCA.

DSNTIAR expects the SQLCA to be in a certain format. If your application modifies the SQLCA format before you call DSNTIAR, the results are unpredictable.
[Figure: Format of the message output area. A 2-byte length field precedes fixed-length message records 1 through n.]
When you call DSNTIAR, you must name an SQLCA and an output message area in the DSNTIAR parameters. You must also provide the logical record length (lrecl) as a value between 72 and 240 bytes. DSNTIAR assumes the message area contains fixed-length records of length lrecl. DSNTIAR places up to 10 lines in the message area. If the text of a message is longer than the record length you specify on DSNTIAR, the output message splits
into several records, on word boundaries if possible. The split records are indented. All records begin with a blank character for carriage control. If you have more lines than the message output area can contain, DSNTIAR issues a return code of 4. A completely blank record marks the end of the message output area.
In your error routine, you write a section that checks for SQLCODE -911 or -913. You can receive either of these SQLCODEs when a deadlock or timeout occurs. When one of these errors occurs, the error routine closes your cursors by issuing the statement:
EXEC SQL CLOSE cursor-name
An SQLCODE of 0 or -501 resulting from that statement indicates that the close was successful.

To use DSNTIAR to generate the error message text, first follow these steps:
1. Choose a logical record length (lrecl) of the output lines. For this example, assume lrecl is 72 (to fit on a terminal screen) and is stored in the variable named ERROR-TEXT-LEN.
2. Define a message area in your COBOL application. Assuming you want an area for up to 10 lines of length 72, you should define an area of 720 bytes, plus a 2-byte area that specifies the total length of the message output area.
01 ERROR-MESSAGE.
   02 ERROR-LEN   PIC S9(4) COMP VALUE +720.
   02 ERROR-TEXT  PIC X(72) OCCURS 10 TIMES
                  INDEXED BY ERROR-INDEX.
77 ERROR-TEXT-LEN PIC S9(9) COMP VALUE +72.
For this example, the name of the message area is ERROR-MESSAGE.
3. Make sure you have an SQLCA. For this example, assume the name of the SQLCA is SQLCA.

To display the contents of the SQLCA when SQLCODE is 0 or -501, call DSNTIAR after the SQL statement that produces SQLCODE 0 or -501:
CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN.
You can then print the message output area just as you would any other variable. Your message might look like this:
DSNT408I SQLCODE = -501, ERROR:  THE CURSOR IDENTIFIED IN A FETCH OR CLOSE STATEMENT IS NOT OPEN
DSNT418I SQLSTATE   = 24501 SQLSTATE RETURN CODE
DSNT415I SQLERRP    = DSNXERT SQL PROCEDURE DETECTING ERROR
DSNT416I SQLERRD    = -315  0  0  -1  0  0 SQL DIAGNOSTIC INFORMATION
DSNT416I SQLERRD    = X'FFFFFEC5'  X'00000000'  X'00000000'  X'FFFFFFFF'  X'00000000'  X'00000000' SQL DIAGNOSTIC INFORMATION
the rows that are to make up the result table. See Chapter 4 of DB2 SQL Reference for a complete list of clauses that you can use in the SELECT statement. The following example shows a simple form of the DECLARE CURSOR statement:
EXEC SQL DECLARE C1 CURSOR FOR SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY FROM DSN8810.EMP END-EXEC.
You can use this cursor to list selected information about employees. More complicated cursors might include WHERE clauses or joins of several tables. For example, suppose that you want to use a cursor to list employees who work on a certain project. Declare a cursor like this to identify those employees:
EXEC SQL DECLARE C2 CURSOR FOR SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY FROM DSN8810.EMP X WHERE EXISTS (SELECT * FROM DSN8810.PROJ Y WHERE X.EMPNO=Y.RESPEMP AND Y.PROJNO=:GOODPROJ);
Declaring cursors for tables that use multilevel security: You can declare a cursor that retrieves rows from a table that uses multilevel security with row-level granularity. However, the result table for the cursor contains only those rows that have a security label value that is equivalent to or dominated by the security label value of your ID. Refer to Part 3 (Volume 1) of DB2 Administration Guide for a discussion of multilevel security with row-level granularity.

Updating a column: You can update columns in the rows that you retrieve. Updating a row after you use a cursor to retrieve it is called a positioned update. If you intend to perform any positioned updates on the identified table, include the FOR UPDATE clause. The FOR UPDATE clause has two forms:
v The first form is FOR UPDATE OF column-list. Use this form when you know in advance which columns you need to update.
v The second form is FOR UPDATE, with no column list. Use this form when you might use the cursor to update any of the columns of the table.

For example, you can use this cursor to update only the SALARY column of the employee table:
EXEC SQL DECLARE C1 CURSOR FOR SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY FROM DSN8810.EMP X WHERE EXISTS (SELECT * FROM DSN8810.PROJ Y WHERE X.EMPNO=Y.RESPEMP AND Y.PROJNO=:GOODPROJ) FOR UPDATE OF SALARY;
If you might use the cursor to update any column of the employee table, define the cursor like this:
EXEC SQL DECLARE C1 CURSOR FOR SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8810.EMP X WHERE EXISTS (SELECT * FROM DSN8810.PROJ Y WHERE X.EMPNO=Y.RESPEMP AND Y.PROJNO=:GOODPROJ) FOR UPDATE;
DB2 must do more processing when you use the FOR UPDATE clause without a column list than when you use the FOR UPDATE clause with a column list. Therefore, if you intend to update only a few columns of a table, your program can run more efficiently if you include a column list.

The precompiler options NOFOR and STDSQL affect the use of the FOR UPDATE clause in static SQL statements. For information about these options, see Table 64 on page 484. If you do not specify the FOR UPDATE clause in a DECLARE CURSOR statement, and you do not specify the STDSQL(YES) option or the NOFOR precompiler option, you receive an error if you execute a positioned UPDATE statement.

You can update a column of the identified table even though it is not part of the result table. In this case, you do not need to name the column in the SELECT statement. When the cursor retrieves a row (using FETCH) that contains a column value you want to update, you can use UPDATE ... WHERE CURRENT OF to identify the row that is to be updated, as shown in the sketch that follows this discussion.

Read-only result table: Some result tables cannot be updated; for example, the result of joining two or more tables cannot be updated. The defining characteristics of a read-only result table are described in greater detail in the discussion of DECLARE CURSOR in Chapter 5 of DB2 SQL Reference.
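For example, assuming the cursor C1 that is declared above with FOR UPDATE OF SALARY, a positioned update might look like this sketch; the host variables (hv_empno, hv_raise, and so on) are assumptions made for illustration.

EXEC SQL FETCH C1
  INTO :hv_empno, :hv_firstnme, :hv_midinit, :hv_lastname, :hv_salary;
   ...
/* Update the SALARY column of the row on which C1 is currently positioned */
EXEC SQL UPDATE DSN8810.EMP
  SET SALARY = SALARY + :hv_raise
  WHERE CURRENT OF C1;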
If you use the CURRENT DATE, CURRENT TIME, or CURRENT TIMESTAMP special registers in a cursor, DB2 determines the values in those special registers only when it opens the cursor. DB2 uses the values that it obtained at OPEN time for all subsequent FETCH statements.

Two factors that influence the amount of time that DB2 requires to process the OPEN statement are:
v Whether DB2 must perform any sorts before it can retrieve rows
v Whether DB2 uses parallelism to process the SELECT statement of the cursor

For more information, see The effect of sorts on OPEN CURSOR on page 835.
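To see the special-register behavior concretely, consider the following sketch; the cursor name CURTS and the host variables are assumptions. Every FETCH from CURTS returns the same timestamp, which DB2 captured when the cursor was opened.

EXEC SQL DECLARE CURTS CURSOR FOR
  SELECT EMPNO, CURRENT TIMESTAMP
  FROM DSN8810.EMP;

/* CURRENT TIMESTAMP is evaluated once, at OPEN time */
EXEC SQL OPEN CURTS;
EXEC SQL FETCH CURTS INTO :hv_empno, :hv_ts;   /* same timestamp on every FETCH */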
These codes occur when a FETCH statement has retrieved the last row in the result table and your program issues a subsequent FETCH. For example:
IF SQLCODE = 100 GO TO DATA-NOT-FOUND.
An alternative to this technique is to code the WHENEVER NOT FOUND statement. The WHENEVER NOT FOUND statement causes your program to branch to another part that then issues a CLOSE statement. For example, to branch to label DATA-NOT-FOUND when the FETCH statement does not return a row, use this statement:
EXEC SQL WHENEVER NOT FOUND GO TO DATA-NOT-FOUND END-EXEC.
Your program must anticipate and handle an end-of-data whenever you use a cursor to fetch a row. For more information about the WHENEVER NOT FOUND statement, see Checking the execution of SQL statements on page 91.
The SELECT statement within the DECLARE CURSOR statement identifies the result table from which you fetch rows, but DB2 does not retrieve any data until your application program executes a FETCH statement. When your program executes the FETCH statement, DB2 positions the cursor on a row in the result table. That row is called the current row. DB2 then copies the current row contents into the program host variables that you specify on the INTO clause of FETCH. This sequence repeats each time you issue FETCH, until you process all rows in the result table.

The row that DB2 points to when you execute a FETCH statement depends on whether the cursor is declared as scrollable or non-scrollable. See Scrollable and non-scrollable cursors on page 113 for more information.

When you query a remote subsystem with FETCH, consider using block fetch for better performance. For more information, see Using block fetch in distributed applications on page 458. Block fetch processes rows ahead of the current row. You cannot use a block fetch when you perform a positioned update or delete operation.
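A minimal C fetch loop for a row-positioned cursor might look like the following sketch; the cursor C1 and the host variables are assumptions.

EXEC SQL OPEN C1;
for (;;) {
  EXEC SQL FETCH C1 INTO :hv_empno, :hv_lastname;
  if (sqlca.sqlcode == 100) break;      /* no more rows in the result table    */
  if (sqlca.sqlcode < 0)   break;       /* handle negative SQLCODEs as needed  */
  /* process the current row here */
}
EXEC SQL CLOSE C1;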
A positioned UPDATE statement updates the row on which the cursor is positioned. A positioned UPDATE statement is subject to these restrictions:
v You cannot update a row if your update violates any unique, check, or referential constraints.
v You cannot use an UPDATE statement to modify the rows of a created temporary table. However, you can use an UPDATE statement to modify the rows of a declared temporary table.
v If the right side of the SET clause in the UPDATE statement contains a fullselect, that fullselect cannot include a correlated name for a table that is being updated.
v You cannot use an INSERT statement in the FROM clause of a SELECT statement that defines a cursor that is used in a positioned UPDATE statement.
v A positioned UPDATE statement will fail if the value of the security label column of the row where the cursor is positioned is not equivalent to the security label value of your user id. If your user id has write down privilege, a positioned UPDATE statement will fail if the value of the security label column of the row where the cursor is positioned does not dominate the security label value of your user id.
A positioned DELETE statement deletes the row on which the cursor is positioned. A positioned DELETE statement is subject to these restrictions:
v You cannot use a DELETE statement with a cursor to delete rows from a created temporary table. However, you can use a DELETE statement with a cursor to delete rows from a declared temporary table.
v After you have deleted a row, you cannot update or delete another row using that cursor until you execute a FETCH statement to position the cursor on another row.
v You cannot delete a row if doing so violates any referential constraints.
v You cannot use an INSERT statement in the FROM clause of a SELECT statement that defines a cursor that is used in a positioned DELETE statement.
v A positioned DELETE statement will fail if the value of the security label column of the row where the cursor is positioned is not equivalent to the security label value of your user id. If your user id has write down privilege, a positioned
DELETE statement will fail if the value of the security label column of the row where the cursor is positioned does not dominate the security label value of your user id.
When you finish processing the rows of the result table, and the cursor is no longer needed, you can let DB2 automatically close the cursor when the current transaction terminates or when your program terminates.

Recommendation: To free the resources that are held by the cursor, close the cursor explicitly by issuing the CLOSE statement.
For restrictions that apply to rowset-positioned cursors and row-positioned cursors, see Step 1: Declare the cursor on page 103.
When your program executes a FETCH statement with the ROWSET keyword, the cursor is positioned on a rowset in the result table. That rowset is called the current rowset. The dimension of each of the host variable arrays must be greater than or equal to the number of rows to be retrieved.
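For example, a C sketch with host variable arrays dimensioned for a 10-row rowset might look like this; it assumes the rowset cursor CURS1 from the earlier example (which selects the CHAR(4) column PHONENO) and uses a NUL-terminated character form for the array elements.

EXEC SQL BEGIN DECLARE SECTION;
  char  phoneno[10][5];                 /* host variable array for CHAR(4) values */
  short ind_phoneno[10];                /* indicator variable array               */
EXEC SQL END DECLARE SECTION;
   ...
/* The array dimension (10) is greater than or equal to the number of rows fetched */
EXEC SQL FETCH NEXT ROWSET FROM CURS1 FOR 10 ROWS
  INTO :phoneno :ind_phoneno;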
2. Dynamically allocate the SQLDA and the arrays needed for the column values.
3. Set the fields in the SQLDA for the column values to be retrieved.
4. Open the cursor.
5. Fetch the rows.
Declare the SQLDA: You must first declare the SQLDA structure. The following SQL INCLUDE statement requests a standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA;
Your program must also declare variables that reference the SQLDA structure, the SQLVAR structure within the SQLDA, and the DECLEN structure for the precision and scale if you are retrieving a DECIMAL column. For C programs, the code looks like this:
struct sqlda *sqldaptr;
struct sqlvar *varptr;
struct DECLEN {
  unsigned char precision;
  unsigned char scale;
};
Allocate the SQLDA: Before you can set the fields in the SQLDA for the column values to be retrieved, you must dynamically allocate storage for the SQLDA structure. For C programs, the code looks like this:
sqldaptr = (struct sqlda *) malloc (3 * 44 + 16);
The size of the SQLDA is SQLN * 44 + 16, where the value of the SQLN field is the number of output columns.

Set the fields in the SQLDA: You must set the fields in the SQLDA structure for your FETCH statement. Suppose you want to retrieve the columns EMPNO, LASTNAME, and SALARY. The C code to set the SQLDA fields for these columns looks like this:
strcpy(sqldaptr->sqldaid,"SQLDA");
sqldaptr->sqldabc = 148;   /* number of bytes of storage allocated for the SQLDA */
sqldaptr->sqln = 3;        /* number of SQLVAR occurrences */
sqldaptr->sqld = 3;
varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]));     /* Point to first SQLVAR */
varptr->sqltype = 452;     /* data type CHAR(6) */
varptr->sqllen = 6;
varptr->sqldata = (char *) hva1;
varptr->sqlind = (short *) inda1;
varptr->sqlname.length = 8;
memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14", varptr->sqlname.length);
varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]) + 1); /* Point to next SQLVAR */
varptr->sqltype = 448;     /* data type VARCHAR(15) */
varptr->sqllen = 15;
varptr->sqldata = (char *) hva2;
varptr->sqlind = (short *) inda2;
varptr->sqlname.length = 8;
memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14", varptr->sqlname.length);
varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]) + 2); /* Point to next SQLVAR */
varptr->sqltype = 485;     /* data type DECIMAL(9,2) */
((struct DECLEN *) &(varptr->sqllen))->precision = 9;
((struct DECLEN *) &(varptr->sqllen))->scale = 2;
varptr->sqldata = (char *) hva3;
varptr->sqlind = (short *) inda3;
varptr->sqlname.length = 8;
memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14", varptr->sqlname.length);
v SQLDABC indicates the number of bytes of storage that are allocated for the SQLDA. The storage includes a 16-byte header and 44 bytes for each SQLVAR field. The value is SQLN x 44 + 16, or 148 for this example.
v SQLN is the number of SQLVAR occurrences (or the number of output columns).
v SQLD is the number of variables in the SQLDA that are used by DB2 when processing the FETCH statement.
v Each SQLVAR occurrence describes a host variable array or buffer into which the values for a column in the result table are to be returned. Within each SQLVAR:
  - SQLTYPE indicates the data type of the column.
  - SQLLEN indicates the length of the column. If the data type is DECIMAL, this field has two parts: the PRECISION and the SCALE.
  - SQLDATA points to the first element of the array for the column values. For this example, assume that your program allocates the dynamic variable arrays hva1, hva2, and hva3, and their indicator arrays inda1, inda2, and inda3.
  - SQLIND points to the first element of the array of indicator values for the column. If SQLTYPE is an odd number, this attribute is required. (If SQLTYPE is an odd number, null values are allowed for the column.)
  - SQLNAME has two parts: the LENGTH and the DATA. The LENGTH is 8. The first two bytes of the DATA field are X'0000'. Bytes 5 and 6 of the DATA field are a flag indicating whether the variable is an array or a FOR n ROWS value. Bytes 7 and 8 are a two-byte binary integer representation of the dimension of the array.

For information about using the SQLDA in dynamic SQL, see Chapter 24, Coding dynamic SQL in application programs, on page 593. For a complete layout of the SQLDA and the descriptions given by the INCLUDE statement, see Appendix E of DB2 SQL Reference.

Open the cursor: You can open the cursor only after all of the fields have been set in the output SQLDA:
EXEC SQL OPEN C1;
Fetch the rows: After the OPEN statement, the program fetches the next rowset:
EXEC SQL FETCH NEXT ROWSET FROM C1 FOR 20 ROWS USING DESCRIPTOR :*sqldaptr;
The USING clause of the FETCH statement names the SQLDA that describes the columns that are to be retrieved.
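After the FETCH, the program reads the values from the arrays that the SQLDA points to. The following sketch assumes that hva1 was allocated as 20 elements of 6 bytes each (for the CHAR(6) EMPNO values, with no terminating NUL) and that inda1 is its 20-element indicator array.

EXEC SQL BEGIN DECLARE SECTION;
  long rows_ret, i;
EXEC SQL END DECLARE SECTION;
   ...
EXEC SQL GET DIAGNOSTICS :rows_ret = ROW_COUNT;   /* rows returned by the FETCH   */
for (i = 0; i < rows_ret; i++) {
  if (inda1[i] >= 0)                              /* a negative value means null  */
    printf("EMPNO = %.6s\n", hva1[i]);            /* print the 6-byte column value */
}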
EXEC SQL UPDATE DSN8810.EMP SET SALARY = 50000 WHERE CURRENT OF C1 END-EXEC.
When the UPDATE statement is executed, the cursor must be positioned on a row or rowset of the result table. If the cursor is positioned on a row, that row is updated. If the cursor is positioned on a rowset, all of the rows in the rowset are updated. Using the FOR ROW n OF ROWSET clause: An example of a positioned UPDATE statement that uses the FOR ROW n OF ROWSET clause is:
EXEC SQL UPDATE DSN8810.EMP SET SALARY = 50000 FOR CURSOR C1 FOR ROW 5 OF ROWSET END-EXEC.
When the UPDATE statement is executed, the cursor must be positioned on a rowset of the result table. The specified row (in the example, row 5) of the current rowset is updated.
When the DELETE statement is executed, the cursor must be positioned on a row or rowset of the result table. If the cursor is positioned on a row, that row is deleted, and the cursor is positioned before the next row of its result table. If the cursor is positioned on a rowset, all of the rows in the rowset are deleted, and the cursor is positioned before the next rowset of its result table. Using the FOR ROW n OF ROWSET clause: An example of a positioned DELETE statement that uses the FOR ROW n OF ROWSET clause is:
EXEC SQL DELETE FROM DSN8810.EMP FOR CURSOR C1 FOR ROW 5 OF ROWSET END-EXEC.
When the DELETE statement is executed, the cursor must be positioned on a rowset of the result table. The specified row of the current rowset is deleted, and the cursor remains positioned on that rowset. The deleted row (in the example, row 5 of the rowset) cannot be retrieved or updated.
Types of cursors
You can declare cursors, both row-positioned and rowset-positioned, as scrollable or not scrollable, held or not held, and returnable or not returnable. The following sections discuss these characteristics:
v Scrollable and non-scrollable cursors
v Held and non-held cursors on page 122

In addition, you can declare a returnable cursor in a stored procedure by including the WITH RETURN clause; the cursor can return result sets to a caller of the stored procedure. For information about returnable cursors, see Chapter 25, Using stored procedures for client/server processing, on page 629.
After the application executes a positioned UPDATE or positioned DELETE statement, the cursor stays at the current row (or rowset) of the result table. You cannot retrieve rows (or rowsets) backward or move to a specific position in a result table with a non-scrollable cursor.
Declaring a scrollable cursor with the INSENSITIVE keyword has the following effects:
v The size, the order of the rows, and the values for each row of the result table do not change after the application opens the cursor.
v The result table is read-only. Therefore, you cannot declare the cursor with the FOR UPDATE clause, and you cannot use the cursor for positioned update or delete operations.

Figure 13 shows a declaration for a sensitive static scrollable cursor.
EXEC SQL DECLARE C2 SENSITIVE STATIC SCROLL CURSOR FOR SELECT DEPTNO, DEPTNAME, MGRNO FROM DSN8810.DEPT ORDER BY DEPTNO END-EXEC. Figure 13. Declaration for a sensitive static scrollable row cursor
Declaring a cursor as SENSITIVE STATIC has the following effects:
v When the application executes positioned UPDATE and DELETE statements with the cursor, those changes are visible in the result table.
v When the current value of a row no longer satisfies the SELECT statement that was used in the cursor declaration, that row is no longer visible in the result table.
v When a row of the result table is deleted from the underlying table, that row is no longer visible in the result table.
v Changes that are made to the underlying table by other cursors or other application processes can be visible in the result table, depending on whether the FETCH statements that you use with the cursor are FETCH INSENSITIVE or FETCH SENSITIVE statements.

Figure 14 shows a declaration for a sensitive dynamic scrollable cursor.
EXEC SQL DECLARE C2 SENSITIVE DYNAMIC SCROLL CURSOR FOR SELECT DEPTNO, DEPTNAME, MGRNO FROM DSN8810.DEPT ORDER BY DEPTNO END-EXEC. Figure 14. Declaration for a sensitive dynamic scrollable cursor
Declaring a cursor as SENSITIVE DYNAMIC has the following effects:
v When the application executes positioned UPDATE and DELETE statements with the cursor, those changes are visible. In addition, when the application executes INSERT, UPDATE, and DELETE statements (within the application but outside the cursor), those changes are visible.
v All committed inserts, updates, and deletes by other application processes are visible.
v Because the FETCH statement executes against the base table, the cursor needs no temporary result table. When you define a cursor as SENSITIVE DYNAMIC, you cannot specify the INSENSITIVE keyword in a FETCH statement for that cursor.
v If you specify an ORDER BY clause for a SENSITIVE DYNAMIC cursor, DB2 might choose an index access path if the ORDER BY is fully satisfied by an existing index. However, a dynamic scrollable cursor that is declared with an ORDER BY clause is not updatable.

Static scrollable cursor: Both the INSENSITIVE cursor and the SENSITIVE STATIC cursor follow the static cursor model:
v The size of the result table does not grow after the application opens the cursor. Rows that are inserted into the underlying table are not added to the result table.
v The order of the rows does not change after the application opens the cursor. If the cursor declaration contains an ORDER BY clause, and the columns that are in the ORDER BY clause are updated after the cursor is opened, the order of the rows in the result table does not change.
Dynamic scrollable cursor: When you declare a cursor as SENSITIVE, you can declare it either STATIC or DYNAMIC. The SENSITIVE DYNAMIC cursor follows the dynamic cursor model:
v The size and contents of the result table can change with every fetch. The base table can change while the cursor is scrolling on it. If another application process changes the data, the cursor sees the newly changed data when it is committed. If the application process of the cursor changes the data, the cursor sees the newly changed data immediately.
v The order of the rows can change after the application opens the cursor. If the cursor declaration contains an ORDER BY clause, and columns that are in the ORDER BY clause are updated after the cursor is opened, the order of the rows in the result table changes.
Determining attributes of a cursor by checking the SQLCA: After you open a cursor, you can determine the following attributes of the cursor by checking the following SQLWARN and SQLERRD fields of the SQLCA:
SQLWARN1
  Indicates whether the cursor is scrollable or non-scrollable.
SQLWARN4
  Indicates whether the cursor is insensitive (I), sensitive static (S), or sensitive dynamic (D).
SQLWARN5
  Indicates whether the cursor is read-only, readable and deletable, or readable, deletable, and updatable.
SQLERRD(1)
  The number of rows in the result table of a cursor when the cursor position is after the last row (when SQLCODE is equal to +100). This field is not set for dynamic scrollable cursors.
SQLERRD(2)
  The number of rows in the result table of a cursor when the cursor position is after the last row (when SQLCODE is equal to +100). This field is not set for dynamic scrollable cursors.
SQLERRD(3)
  The number of rows in the result table of an INSERT when the SELECT statement of the cursor contains the INSERT statement.
If the OPEN statement executes with no errors or warnings, DB2 does not set SQLWARN0 when it sets SQLWARN1, SQLWARN4, or SQLWARN5. See Appendix D of DB2 SQL Reference for specific information about fields in the SQLCA.

Determining attributes of a cursor by using the GET DIAGNOSTICS statement: After you open a cursor, you can determine the following attributes of the cursor by checking these GET DIAGNOSTICS items:
DB2_SQL_ATTR_CURSOR_HOLD
  Indicates whether the cursor can be held open across commits (Y or N)
DB2_SQL_ATTR_CURSOR_ROWSET
  Indicates whether the cursor can use rowset positioning (Y or N)
DB2_SQL_ATTR_CURSOR_SCROLLABLE
  Indicates whether the cursor is scrollable (Y or N)
DB2_SQL_ATTR_CURSOR_SENSITIVITY
  Indicates whether the cursor is asensitive, insensitive, or sensitive to changes that are made by other processes (A, I, or S)
DB2_SQL_ATTR_CURSOR_TYPE
  Indicates whether the cursor is declared static (S for INSENSITIVE or SENSITIVE STATIC) or dynamic (D for SENSITIVE DYNAMIC)
For more information about the GET DIAGNOSTICS statement, see The GET DIAGNOSTICS statement on page 94.

Retrieving rows with a scrollable cursor: When you open any cursor, the cursor is positioned before the first row of the result table. You move a scrollable cursor around in the result table by specifying a fetch orientation keyword in a FETCH statement. A fetch orientation keyword indicates the absolute or relative position of the cursor when the FETCH statement is executed. Table 7 on page 117 lists the
fetch orientation keywords that you can specify and their meanings. These keywords apply to both row-positioned scrollable cursors and rowset-positioned scrollable cursors.
Table 7. Positions for a scrollable cursor

Keyword in FETCH statement, followed by the cursor position when the FETCH is executed (see note 1):

BEFORE
  Before the first row
FIRST or ABSOLUTE +1
  On the first row
LAST or ABSOLUTE -1
  On the last row
AFTER
  After the last row
ABSOLUTE (see note 2)
  On an absolute row number, from before the first row forward or from after the last row backward
RELATIVE (see note 2)
  On the row that is forward or backward a relative number of rows from the current row
CURRENT
  On the current row
PRIOR or RELATIVE -1
  On the previous row
NEXT
  On the next row (default)

Notes:
1. The cursor position applies to both row position and rowset position, for example, before the first row or before the first rowset.
2. ABSOLUTE and RELATIVE are described in greater detail in the discussion of FETCH in Chapter 5 of DB2 SQL Reference.
Example: To use the cursor that is declared in Figure 12 on page 114 to fetch the fifth row of the result table, use a FETCH statement like this:
EXEC SQL FETCH ABSOLUTE +5 C1 INTO :HVDEPTNO, :DEPTNAME, :MGRNO;
To fetch the fifth row from the end of the result table, use this FETCH statement:
EXEC SQL FETCH ABSOLUTE -5 C1 INTO :HVDEPTNO, :DEPTNAME, :MGRNO;
Determining the number of rows in the result table for a static scrollable cursor: You can determine how many rows are in the result table of an INSENSITIVE or SENSITIVE STATIC scrollable cursor. To do that, execute a FETCH statement, such as FETCH AFTER, that positions the cursor after the last row. You can then examine the fields SQLERRD(1) and SQLERRD(2) in the SQLCA (fields sqlerrd[0] and sqlerrd[1] for C and C++) for the number of rows in the result table. Alternatively, you can use the GET DIAGNOSTICS statement to retrieve the number of rows in the ROW_COUNT statement item.

FETCH statement interaction between row and rowset positioning: When you declare a cursor with the WITH ROWSET POSITIONING clause, you can intermix row-positioned FETCH statements with rowset-positioned FETCH statements. For information about using a multiple-row FETCH statement, see Using a multiple-row FETCH statement with host variable arrays on page 109. Table 8 on page 118 shows the interaction between row and rowset positioning for a scrollable cursor. Assume that you declare the scrollable cursor on a table with 15 rows.
Table 8. Interaction between row and rowset positioning for a scrollable cursor

Keywords in FETCH statement, followed by the cursor position when the FETCH is executed:

FIRST
  On row 1
FIRST ROWSET
  On a rowset of size 1, consisting of row 1
FIRST ROWSET FOR 5 ROWS
  On a rowset of size 5, consisting of rows 1, 2, 3, 4, and 5
CURRENT ROWSET
  On a rowset of size 5, consisting of rows 1, 2, 3, 4, and 5
CURRENT
  On row 1
NEXT (default)
  On row 2
NEXT ROWSET
  On a rowset of size 1, consisting of row 3
NEXT ROWSET FOR 3 ROWS
  On a rowset of size 3, consisting of rows 4, 5, and 6
NEXT ROWSET
  On a rowset of size 3, consisting of rows 7, 8, and 9
LAST
  On row 15
LAST ROWSET FOR 2 ROWS
  On a rowset of size 2, consisting of rows 14 and 15
PRIOR ROWSET
  On a rowset of size 2, consisting of rows 12 and 13
ABSOLUTE 2
  On row 2
ROWSET STARTING AT ABSOLUTE 2 FOR 3 ROWS
  On a rowset of size 3, consisting of rows 2, 3, and 4
RELATIVE 2
  On row 4
ROWSET STARTING AT ABSOLUTE 2 FOR 4 ROWS
  On a rowset of size 4, consisting of rows 2, 3, 4, and 5
RELATIVE -1
  On row 1
ROWSET STARTING AT ABSOLUTE 3 FOR 2 ROWS
  On a rowset of size 2, consisting of rows 3 and 4
ROWSET STARTING AT RELATIVE 4
  On a rowset of size 2, consisting of rows 7 and 8
PRIOR
  On row 6
ROWSET STARTING AT ABSOLUTE 13 FOR 5 ROWS
  On a rowset of size 3, consisting of rows 13, 14, and 15
FIRST ROWSET
  On a rowset of size 5, consisting of rows 1, 2, 3, 4, and 5
Note: The FOR n ROWS clause and the ROWSET clause are described in greater detail in the discussion of FETCH in Chapter 5 of DB2 SQL Reference.
When you declare a cursor as SENSITIVE DYNAMIC, changes that other processes or cursors make to the underlying table are visible to the result table after the changes are committed. Table 9 summarizes the sensitivity values and their effects on the result table of a scrollable cursor.
Table 9. How sensitivity affects the result table for a scrollable cursor

DECLARE sensitivity: INSENSITIVE
  FETCH INSENSITIVE: No changes to the underlying table are visible in the result table. Positioned UPDATE and DELETE statements using the cursor are not allowed.
  FETCH SENSITIVE: Not valid.

DECLARE sensitivity: SENSITIVE STATIC
  FETCH INSENSITIVE: Only positioned updates and deletes that are made by the cursor are visible in the result table.
  FETCH SENSITIVE: All updates and deletes are visible in the result table. Inserts made by other processes are not visible in the result table.

DECLARE sensitivity: SENSITIVE DYNAMIC
  FETCH INSENSITIVE: Not valid.
  FETCH SENSITIVE: All committed changes are visible in the result table, including updates, deletes, inserts, and changes in the order of the rows.
Now suppose that you declare the following SENSITIVE STATIC scrollable cursor, which you use to delete rows from A:
EXEC SQL DECLARE C3 SENSITIVE STATIC SCROLL CURSOR FOR SELECT COL1 FROM A FOR UPDATE OF COL1;
The positioned delete statement creates a delete hole, as shown in Figure 16.
After you execute the positioned delete statement, the third row is deleted from the result table, but the result table does not shrink to fill the space that the deleted row creates. Creating an update hole with a static scrollable cursor: Suppose that you declare the following SENSITIVE STATIC scrollable cursor, which you use to update rows in A:
EXEC SQL DECLARE C4 SENSITIVE STATIC SCROLL CURSOR FOR SELECT COL1 FROM A WHERE COL1<6;
The searched UPDATE statement creates an update hole, as shown in Figure 17.
After you execute the searched UPDATE statement, the last row no longer qualifies for the result table, but the result table does not shrink to fill the space that the disqualified row creates. Removing a delete hole or an update hole: You can remove a delete hole or an update hole in specific situations. If you try to fetch from a delete hole, DB2 issues an SQL warning. If you try to update or delete a delete hole, DB2 issues an SQL error. You can remove a delete hole only by opening the scrollable cursor, setting a savepoint, executing a positioned DELETE statement with the scrollable cursor, and rolling back to the savepoint. If you try to fetch from an update hole, DB2 issues an SQL warning. If you try to delete an update hole, DB2 issues an SQL error. However, you can convert an update hole back to a result table row by updating the row in the base table, as shown in Figure 18 on page 122. You can update the base table with a searched UPDATE statement in the same application process, or a searched or positioned UPDATE statement in another application process. After you update the base table, if the row qualifies for the result table, the update hole disappears.
A hole becomes visible to a cursor when a cursor operation returns a non-zero SQLCODE. The point at which a hole becomes visible depends on the following factors:
v Whether the scrollable cursor creates the hole
v Whether the FETCH statement is FETCH SENSITIVE or FETCH INSENSITIVE

If the scrollable cursor creates the hole, the hole is visible when you execute a FETCH statement for the row that contains the hole. The FETCH statement can be FETCH INSENSITIVE or FETCH SENSITIVE.

If an update or delete operation outside the scrollable cursor creates the hole, the hole is visible at the following times:
v If you execute a FETCH SENSITIVE statement for the row that contains the hole, the hole is visible when you execute the FETCH statement.
v If you execute a FETCH INSENSITIVE statement, the hole is not visible when you execute the FETCH statement. DB2 returns the row as it was before the update or delete operation occurred. However, if you follow the FETCH INSENSITIVE statement with a positioned UPDATE or DELETE statement, the hole becomes visible.
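A small C sketch of this check after a FETCH SENSITIVE follows; SQLCODE +222 indicates that the fetched row is a hole, and the cursor C2 and the host variable are assumptions.

EXEC SQL FETCH SENSITIVE NEXT FROM C2 INTO :hv_col1;
if (sqlca.sqlcode == +222) {
  /* The row is a delete hole or an update hole; skip it */
}
else if (sqlca.sqlcode == 0) {
  /* Process the fetched row */
}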
v A static scrollable cursor that is held is positioned on the last retrieved row. The last retrieved row can be returned from the result table with a FETCH CURRENT statement.
v A dynamic scrollable cursor that is held is positioned after the last retrieved row and before the next logical row. The next row can be returned from the result table with a FETCH NEXT statement. DB2 returns SQLCODE +231 for a FETCH CURRENT statement.
v A held cursor can close when:
  - You issue a CLOSE cursor, ROLLBACK, or CONNECT statement
  - You issue a CAF CLOSE function call or an RRSAF TERMINATE THREAD function call
  - The application program terminates.
If the program abnormally terminates, the cursor position is lost. To prepare for restart, your program must reposition the cursor.

The following restrictions apply to cursors that are declared WITH HOLD:
v Do not use DECLARE CURSOR WITH HOLD with the new user signon from a DB2 attachment facility, because all open cursors are closed.
v Do not declare a WITH HOLD cursor in a thread that might become inactive. If you do, its locks are held indefinitely.
IMS
You cannot use DECLARE CURSOR...WITH HOLD in message processing programs (MPP) and message-driven batch message processing (BMP). Each message is a new user for DB2; whether or not you declare them using WITH HOLD, no cursors continue for new users. You can use WITH HOLD in non-message-driven BMP and DL/I batch programs.
CICS
In CICS applications, you can use DECLARE CURSOR...WITH HOLD to indicate that a cursor should not close at a commit or sync point. However, SYNCPOINT ROLLBACK closes all cursors, and end-of-task (EOT) closes all cursors before DB2 reuses or terminates the thread. Because pseudo-conversational transactions usually have multiple EXEC CICS RETURN statements and thus span multiple EOTs, the scope of a held cursor is limited. Across EOTs, you must reopen and reposition a cursor declared WITH HOLD, as if you had not specified WITH HOLD.

You should always close cursors that you no longer need. If you let DB2 close a CICS attachment cursor, the cursor might not close until the CICS attachment facility reuses or terminates the thread.

The following cursor declaration causes the cursor to maintain its position in the DSN8810.EMP table after a commit point:
EXEC SQL DECLARE EMPLUPDT CURSOR WITH HOLD FOR SELECT EMPNO, LASTNAME, PHONENO, JOB, SALARY, WORKDEPT
**************************************************
* Declare a cursor that will be used to update
* the JOB column of the EMP table.
**************************************************
     EXEC SQL
       DECLARE THISEMP CURSOR FOR
         SELECT EMPNO, LASTNAME, WORKDEPT, JOB
           FROM DSN8810.EMP
           WHERE WORKDEPT = 'D11'
         FOR UPDATE OF JOB
     END-EXEC.
**************************************************
* Open the cursor
**************************************************
     EXEC SQL
       OPEN THISEMP
     END-EXEC.
**************************************************
* Indicate what action to take when all rows
* in the result table have been fetched.
**************************************************
     EXEC SQL
       WHENEVER NOT FOUND GO TO CLOSE-THISEMP
     END-EXEC.
**************************************************
* Fetch a row to position the cursor.
**************************************************
     EXEC SQL
       FETCH FROM THISEMP
         INTO :EMP-NUM, :NAME2, :DEPT, :JOB-NAME
     END-EXEC.
**************************************************
* Update the row where the cursor is positioned.
**************************************************
     EXEC SQL
       UPDATE DSN8810.EMP
         SET JOB = :NEW-JOB
         WHERE CURRENT OF THISEMP
     END-EXEC.
     . . .
**************************************************
* Branch back to fetch and process the next row.
**************************************************
     . . .
**************************************************
* Close the cursor
**************************************************
 CLOSE-THISEMP.
     EXEC SQL
       CLOSE THISEMP
     END-EXEC.

Figure 19. Performing cursor operations with a non-scrollable cursor
Figure 20 on page 126 shows how to retrieve data backward with a cursor.
**************************************************
* Declare a cursor to retrieve the data backward
* from the EMP table. The cursor has access to
* changes by other processes.
**************************************************
     EXEC SQL
       DECLARE THISEMP SENSITIVE STATIC SCROLL CURSOR FOR
         SELECT EMPNO, LASTNAME, WORKDEPT, JOB
           FROM DSN8810.EMP
     END-EXEC.
**************************************************
* Open the cursor
**************************************************
     EXEC SQL
       OPEN THISEMP
     END-EXEC.
**************************************************
* Indicate what action to take when all rows
* in the result table have been fetched.
**************************************************
     EXEC SQL
       WHENEVER NOT FOUND GO TO CLOSE-THISEMP
     END-EXEC.
**************************************************
* Position the cursor after the last row of the
* result table. This FETCH statement cannot
* include the SENSITIVE or INSENSITIVE keyword
* and cannot contain an INTO clause.
**************************************************
     EXEC SQL
       FETCH AFTER FROM THISEMP
     END-EXEC.
**************************************************
* Fetch the previous row in the table.
**************************************************
     EXEC SQL
       FETCH SENSITIVE PRIOR FROM THISEMP
         INTO :EMP-NUM, :NAME2, :DEPT, :JOB-NAME
     END-EXEC.
**************************************************
* Check that the fetched row is not a hole
* (SQLCODE +222). If not, print the contents.
**************************************************
     IF SQLCODE IS GREATER THAN OR EQUAL TO 0 AND
        SQLCODE IS NOT EQUAL TO +100 AND
        SQLCODE IS NOT EQUAL TO +222
        THEN PERFORM PRINT-RESULTS.
     . . .
**************************************************
* Branch back to fetch the previous row.
**************************************************
     . . .
**************************************************
* Close the cursor
**************************************************
 CLOSE-THISEMP.
     EXEC SQL
       CLOSE THISEMP
     END-EXEC.

Figure 20. Performing cursor operations with a SENSITIVE STATIC scrollable cursor
Figure 21 on page 127 shows how to update an entire rowset with a cursor.
**************************************************
* Declare a rowset cursor to update the JOB
* column of the EMP table.
**************************************************
     EXEC SQL
       DECLARE EMPSET CURSOR
         WITH ROWSET POSITIONING FOR
         SELECT EMPNO, LASTNAME, WORKDEPT, JOB
           FROM DSN8810.EMP
           WHERE WORKDEPT = 'D11'
         FOR UPDATE OF JOB
     END-EXEC.
**************************************************
* Open the cursor.
**************************************************
     EXEC SQL
       OPEN EMPSET
     END-EXEC.
**************************************************
* Indicate what action to take when end-of-data
* occurs in the rowset being fetched.
**************************************************
     EXEC SQL
       WHENEVER NOT FOUND GO TO CLOSE-EMPSET
     END-EXEC.
**************************************************
* Fetch next rowset to position the cursor.
**************************************************
     EXEC SQL
       FETCH NEXT ROWSET FROM EMPSET
         FOR :SIZE-ROWSET ROWS
         INTO :HVA-EMPNO, :HVA-LASTNAME, :HVA-WORKDEPT, :HVA-JOB
     END-EXEC.
**************************************************
* Update rowset where the cursor is positioned.
**************************************************
 UPDATE-ROWSET.
     EXEC SQL
       UPDATE DSN8810.EMP
         SET JOB = :NEW-JOB
         WHERE CURRENT OF EMPSET
     END-EXEC.
 END-UPDATE-ROWSET.
     . . .
**************************************************
* Branch back to fetch the next rowset.
**************************************************
     . . .
**************************************************
* Update the remaining rows in the current
* rowset and close the cursor.
**************************************************
 CLOSE-EMPSET.
     PERFORM UPDATE-ROWSET.
     EXEC SQL
       CLOSE EMPSET
     END-EXEC.

Figure 21. Performing positioned update with a rowset cursor
Figure 22 on page 128 shows how to update specific rows with a rowset cursor.
*****************************************************
* Declare a static scrollable rowset cursor.        *
*****************************************************
     EXEC SQL
       DECLARE EMPSET SENSITIVE STATIC SCROLL CURSOR
         WITH ROWSET POSITIONING FOR
         SELECT EMPNO, WORKDEPT, JOB
           FROM DSN8810.EMP
         FOR UPDATE OF JOB
     END-EXEC.
*****************************************************
* Open the cursor.                                  *
*****************************************************
     EXEC SQL
       OPEN EMPSET
     END-EXEC.
*****************************************************
* Fetch next rowset to position the cursor.         *
*****************************************************
     EXEC SQL
       FETCH SENSITIVE NEXT ROWSET FROM EMPSET
         FOR :SIZE-ROWSET ROWS
         INTO :HVA-EMPNO,
              :HVA-WORKDEPT :INDA-WORKDEPT,
              :HVA-JOB :INDA-JOB
     END-EXEC.
*****************************************************
* Process fetch results if no error and no hole.    *
*****************************************************
     IF SQLCODE >= 0
       EXEC SQL
         GET DIAGNOSTICS :HV-ROWCNT = ROW_COUNT
       END-EXEC
       PERFORM VARYING N FROM 1 BY 1 UNTIL N > HV-ROWCNT
         IF INDA-WORKDEPT(N) NOT = -3
           EVALUATE HVA-WORKDEPT(N)
             WHEN ('D11')
               PERFORM UPDATE-ROW
             WHEN ('E11')
               PERFORM DELETE-ROW
           END-EVALUATE
         END-IF
       END-PERFORM
       IF SQLCODE = 100
         GO TO CLOSE-EMPSET
       END-IF
     ELSE
       EXEC SQL
         GET DIAGNOSTICS :HV-NUMCOND = NUMBER
       END-EXEC
       PERFORM VARYING N FROM 1 BY 1 UNTIL N > HV-NUMCOND
         EXEC SQL
           GET DIAGNOSTICS CONDITION :N
             :HV-SQLCODE = DB2_RETURNED_SQLCODE,
             :HV-ROWNUM = DB2_ROW_NUMBER
         END-EXEC
         DISPLAY "SQLCODE = " HV-SQLCODE
         DISPLAY "ROW NUMBER = " HV-ROWNUM
       END-PERFORM
       GO TO CLOSE-EMPSET
     END-IF.

Figure 22. Performing positioned update and delete with a sensitive rowset cursor (Part 1 of 2)
. . . ***************************************************** * Branch back to fetch and process * * the next rowset. * ***************************************************** . . . ***************************************************** * Update row N in current rowset. * ***************************************************** UPDATE-ROW. EXEC SQL UPDATE DSN8810.EMP SET JOB = :NEW-JOB FOR CURSOR EMPSET FOR ROW :N OF ROWSET END-EXEC. END-UPDATE-ROW. ***************************************************** * Delete row N in current rowset. * ***************************************************** DELETE-ROW. EXEC SQL DELETE FROM DSN8810.EMP FOR CURSOR EMPSET FOR ROW :N OF ROWSET END-EXEC. .END-DELETE-ROW. . . ***************************************************** * Close the cursor. * ***************************************************** CLOSE-EMPSET. EXEC SQL CLOSE EMPSET END-EXEC.
Figure 22. Performing positioned update and delete with a sensitive rowset cursor (Part 2 of 2)
Enter table name for which declarations are required:
 1 SOURCE TABLE NAME ===>
 2 TABLE OWNER ..... ===>
 3 AT LOCATION ..... ===>                (Optional)
Enter destination data set:              (Can be sequential or partitioned)
 4 DATA SET NAME ... ===>
 5 DATA SET PASSWORD ===>                (If password protected)
Enter options as desired:
 6 ACTION .......... ===> ADD            (ADD new or REPLACE old declaration)
 7 COLUMN LABEL .... ===> NO             (Enter YES for column label)
 8 STRUCTURE NAME .. ===>                (Optional)
 9 FIELD NAME PREFIX ===>                (Optional)
10 DELIMIT DBCS .... ===> YES            (Enter YES to delimit DBCS identifiers)
11 COLUMN SUFFIX ... ===> NO             (Enter YES to append column name)
12 INDICATOR VARS .. ===> NO             (Enter YES for indicator variables)
13 ADDITIONAL OPTIONS===> YES            (Enter YES to change additional options)

PRESS:  ENTER to process    END to exit    HELP for more information
Fill in the DCLGEN panel as follows:
1 SOURCE TABLE NAME Is the unqualified name of the table, view, or created temporary table for which you want DCLGEN to produce SQL data declarations. The table can be stored at your DB2 location or at another DB2 location. To specify a table name at another DB2 location, enter the table qualifier in the TABLE OWNER field and the location name in the AT LOCATION field. DCLGEN generates a three-part table name from the SOURCE TABLE NAME, TABLE OWNER, and AT LOCATION fields. You can also use an alias for a table name. To specify a table name that contains special characters or blanks, enclose the name in apostrophes. If the name contains apostrophes, you must double each one (''). For example, to specify a table named DON'S TABLE, enter the following:
'DON''S TABLE'
You do not need to enclose DBCS table names in apostrophes. If you do not enclose the table name in apostrophes, DB2 converts lowercase characters to uppercase. The underscore is not handled as a special character in DCLGEN. For example, the table name JUNE_PROFITS does not need to be enclosed in apostrophes. Because COBOL field names cannot contain underscores, DCLGEN substitutes hyphens (-) for single-byte underscores in COBOL field names that are built from the table name. 2 TABLE OWNER Is the owner of the source table. If you do not specify this value and the
table is a local table, DB2 assumes that the table qualifier is your TSO logon ID. If the table is at a remote location, you must specify this value. 3 AT LOCATION Is the location of a table or view at another DB2 subsystem. If you specify this parameter, you must also specify a qualified name in the SOURCE TABLE NAME field. The value of the AT LOCATION field becomes a prefix for the table name on the SQL DECLARE statement, as follows:
location_name.owner_id.table_name
The default is the local location name. This field applies to DB2 private protocol access only. The location you name must be another DB2 UDB for z/OS. 4 DATA SET NAME Is the name of the data set you allocated to contain the declarations that DCLGEN produces. You must supply a name; no default exists. The data set must already exist, be accessible to DCLGEN, and can be either sequential or partitioned. If you do not enclose the data set name in apostrophes, DCLGEN adds a standard TSO prefix (user ID) and suffix (language). DCLGEN knows what the host language is from the DB2I defaults panel. For example, for library name LIBNAME(MEMBNAME), the name becomes:
userid.libname.language(membname)
If this data set is password protected, you must supply the password in the DATA SET PASSWORD field. 5 DATA SET PASSWORD Is the password for the data set in the DATA SET NAME field, if the data set is password protected. It is not displayed on your terminal, and it is not recognized if you issued it from a previous session. 6 ACTION Tells DCLGEN what to do with the output when it is sent to a partitioned data set. (The option is ignored if the data set you specify in the DATA SET NAME field is sequential.) ADD indicates that an old version of the output does not exist and creates a new member with the specified data set name. This is the default. REPLACE replaces an old version, if it already exists. If the member does not exist, this option creates a new member. 7 COLUMN LABEL Tells DCLGEN whether to include labels that are declared on any columns of the table or view as comments in the data declarations. (The SQL statement LABEL ON creates column labels to use as supplements to column names.) Use: YES to include column labels. NO to ignore column labels. This is the default.
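For example, a label might have been created with a LABEL ON statement such as the following (the label text is only an illustration; DCLGEN simply picks up whatever labels already exist on the columns of your table or view):

   LABEL ON COLUMN DSN8810.EMP.WORKDEPT IS 'DEPARTMENT ASSIGNMENT'

If you specify YES, DCLGEN includes that label text as a comment with the generated declaration for the column.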
8 STRUCTURE NAME Is the name of the generated data structure. The name can be up to 31 characters. If the name is not a DBCS string, and the first character is not alphabetic, then enclose the name in apostrophes. If you use special characters, be careful to avoid name conflicts. If you leave this field blank, DCLGEN generates a name that contains the table or view name with a prefix of DCL. If the language is COBOL or PL/I, and the table or view name consists of a DBCS string, the prefix consists of DBCS characters. For C, lowercase characters you enter in this field do not fold to uppercase. 9 FIELD NAME PREFIX Specifies a prefix that DCLGEN uses to form field names in the output. For example, if you choose ABCDE, the field names generated are ABCDE1, ABCDE2, and so on. DCLGEN accepts a field name prefix of up to 28 bytes that can include special and double-byte characters. If you specify a single-byte or mixed-string prefix and the first character is not alphabetic, apostrophes must enclose the prefix. If you use special characters, avoid name conflicts. For COBOL and PL/I, if the name is a DBCS string, DCLGEN generates DBCS equivalents of the suffix numbers. For C, lowercase characters you enter in this field do not fold to uppercase. If you leave this field blank, the field names are the same as the column names in the table or view. 10 DELIMIT DBCS Tells DCLGEN whether to delimit DBCS table names and column names in the table declaration. Use: YES to enclose the DBCS table and column names with SQL delimiters. NO to not delimit the DBCS table and column names. 11 COLUMN SUFFIX Tells DCLGEN whether to form field names by attaching the column name as a suffix to the value you specify in FIELD NAME PREFIX. For example, if you specify YES, the field name prefix is NEW, and the column name is EMPNO, the field name is NEWEMPNO. If you specify YES, you must also enter a value in FIELD NAME PREFIX. If you do not enter a field name prefix, DCLGEN issues a warning message and uses the column names as the field names. The default is NO, which does not use the column name as a suffix and allows the value in FIELD NAME PREFIX to control the field names, if specified. 12 INDICATOR VARS Tells DCLGEN whether to generate an array of indicator variables for the host variable structure. If you specify YES, the array name is the table name with a prefix of I (or DBCS letter <I> if the table name consists solely of double-byte characters). The form of the data declaration depends on the language: For a C program: short int Itable-name[n]; For a COBOL program: 01 Itable-name PIC S9(4) USAGE COMP OCCURS n TIMES. For a PL/I program: DCL Itable-name(n) BIN FIXED(15);
n is the number of columns in the table. For example, suppose you define the following table:
CREATE TABLE HASNULLS (CHARCOL1 CHAR(1), CHARCOL2 CHAR(1));
You request an array of indicator variables for a COBOL program. DCLGEN might generate the following host variable declaration:
01 DCLHASNULLS.
   10 CHARCOL1       PIC X(1).
   10 CHARCOL2       PIC X(1).
01 IHASNULLS         PIC S9(4) USAGE COMP OCCURS 2 TIMES.
The default is NO, which does not generate an indicator variable array.
13 ADDITIONAL OPTIONS Indicates whether to display the panel for additional DCLGEN options. The default is YES. If you specified YES in the ADDITIONAL OPTIONS field, the ADDITIONAL DCLGEN OPTIONS panel is displayed, as shown in Figure 24.
DSNEDP02                 ADDITIONAL DCLGEN OPTIONS                 SSID: DSN
===>

Enter options as desired:
 1 RIGHT MARGIN .... ===> 72      (Enter 72 or 80)
 2 FOR BIT DATA .... ===> NO      (Enter YES to declare SQL variables for
                                   FOR BIT DATA columns)

END to exit    HELP for more information

Figure 24. ADDITIONAL DCLGEN OPTIONS panel
Fill in the ADDITIONAL DCLGEN OPTIONS panel as follows: 1 RIGHT MARGIN Specifies the break point for statement tokens that must be wrapped to one or more subsequent records. You can specify column 72 or column 80. The default is 72. 2 FOR BIT DATA Specifies whether DCLGEN is to generate a DECLARE VARIABLE statement of SQL variables for columns that are declared as FOR BIT DATA. This statement is required in DB2 applications that meet all of the following criteria: v are written in COBOL v have host variables for FOR BIT DATA columns v are prepared using the SQLCCSID option of the integrated DB2 coprocessor. The choices are YES and NO. The default is NO. If the table or view does not have FOR BIT DATA columns, DCLGEN does not generate this statement. DCLGEN generates a table or column name in the DECLARE statement as a non-delimited identifier unless at least one of the following conditions is true: v The name contains special characters and is not a DBCS string. v The name is a DBCS string, and you have requested delimited DBCS names.
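For example, when you specify YES for FOR BIT DATA, the generated statement has the following general form (a sketch only; MYBITCOL is a hypothetical COBOL host variable that corresponds to a FOR BIT DATA column):

     EXEC SQL DECLARE :MYBITCOL VARIABLE FOR BIT DATA END-EXEC.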
If you are using an SQL reserved word as an identifier, you must edit the DCLGEN output in order to add the appropriate SQL delimiters. DCLGEN produces output that is intended to meet the needs of most users, but occasionally, you will need to edit the DCLGEN output to work in your specific case. For example, DCLGEN is unable to determine whether a column that is defined as NOT NULL also contains the DEFAULT clause, so you must edit the DCLGEN output to add the DEFAULT clause to the appropriate column definitions.
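For example, suppose a column was created as HIREDATE DATE NOT NULL WITH DEFAULT (a hypothetical column; your definitions will differ). DCLGEN generates only

   HIREDATE DATE NOT NULL,

in the DECLARE TABLE statement, so you would edit that line to read

   HIREDATE DATE NOT NULL WITH DEFAULT,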
In this example, DECEMP is a name of a member of a partitioned data set that contains the table declaration and a corresponding COBOL record description of the table DSN8810.EMP. (A COBOL record description is a two-level host structure that corresponds to the columns of a table's row. For information on host structures, see Chapter 9, Embedding SQL statements in host languages, on page 143.) To get a current description of the table, use DCLGEN to generate the table's declaration and store it as member DECEMP in a library (usually a partitioned data set) just before you precompile the program.
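For example, a minimal sketch of how the program brings in the stored declarations at precompile time is:

     EXEC SQL INCLUDE DECEMP END-EXEC.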
Table 10. Declarations generated by DCLGEN (see note 1)

SMALLINT
  C: short int           COBOL: PIC S9(4) USAGE COMP          PL/I: BIN FIXED(15)
INTEGER
  C: long int            COBOL: PIC S9(9) USAGE COMP          PL/I: BIN FIXED(31)
DECIMAL(p,s) or NUMERIC(p,s)
  C: decimal(p,s) (note 2)    COBOL: PIC S9(p-s)V9(s) USAGE COMP-3
  PL/I: DEC FIXED(p,s); if p>15, the PL/I compiler must support this precision, or a warning is generated
REAL or FLOAT(n), 1<=n<=21
  C: float               COBOL: USAGE COMP-1                  PL/I: BIN FLOAT(n)
DOUBLE PRECISION, DOUBLE, or FLOAT(n), 22<=n<=53
  C: double              COBOL: USAGE COMP-2                  PL/I: BIN FLOAT(n)
CHAR(1)
  C: char                COBOL: PIC X(1)                      PL/I: CHAR(1)
CHAR(n)
  C: char var[n+1]       COBOL: PIC X(n)                      PL/I: CHAR(n)
VARCHAR(n)
  C: struct {short int var_len; char var_data[n];} var;
  COBOL: 10 var.  49 var_LEN PIC 9(4) USAGE COMP.  49 var_TEXT PIC X(n).
  PL/I: CHAR(n) VAR
CLOB(n) (note 3)
  C: SQL TYPE IS CLOB_LOCATOR       COBOL: USAGE SQL TYPE IS CLOB-LOCATOR       PL/I: SQL TYPE IS CLOB_LOCATOR
GRAPHIC(1)
  C: sqldbchar           COBOL: PIC G(1)                      PL/I: GRAPHIC(1)
GRAPHIC(n)
  C: sqldbchar var[n+1]
  COBOL: PIC G(n) USAGE DISPLAY-1. (note 4)  or  PIC N(n). (note 4)
  PL/I: GRAPHIC(n)
VARGRAPHIC(n) or GRAPHIC(n) VAR
  C: struct {short int var_len; sqldbchar var_data[n];} var;
  COBOL: 10 var.  49 var_LEN PIC 9(4) USAGE COMP.  49 var_TEXT PIC G(n) USAGE DISPLAY-1. (note 4)
     or  10 var.  49 var_LEN PIC 9(4) USAGE COMP.  49 var_TEXT PIC N(n). (note 4)
  PL/I: GRAPHIC(n) VAR
DBCLOB(n) (note 3)
  C: SQL TYPE IS DBCLOB_LOCATOR     COBOL: USAGE SQL TYPE IS DBCLOB-LOCATOR     PL/I: SQL TYPE IS DBCLOB_LOCATOR
BLOB(n) (note 3)
  C: SQL TYPE IS BLOB_LOCATOR       COBOL: USAGE SQL TYPE IS BLOB-LOCATOR       PL/I: SQL TYPE IS BLOB_LOCATOR
DATE
  C: char var[11] (note 5)    COBOL: PIC X(10) (note 5)       PL/I: CHAR(10) (note 5)
TIME
  C: char var[9] (note 6)     COBOL: PIC X(8) (note 6)        PL/I: CHAR(8) (note 6)
TIMESTAMP
  C: char var[27]             COBOL: PIC X(26)                PL/I: CHAR(26)
ROWID
  C: SQL TYPE IS ROWID        COBOL: USAGE SQL TYPE IS ROWID  PL/I: SQL TYPE IS ROWID

Notes:
1. For a distinct type, DCLGEN generates the host language equivalent of the source data type.
2. If your C compiler does not support the decimal data type, edit your DCLGEN output, and replace the
   decimal data declarations with declarations of type double.
3. For a BLOB, CLOB, or DBCLOB data type, DCLGEN generates a LOB locator.
4. DCLGEN chooses the format based on the character you specify as the DBCS symbol on the COBOL Defaults panel.
5. This declaration is used unless a date installation exit routine exists for formatting dates, in which case
   the length is that specified for the LOCAL DATE LENGTH installation option.
6. This declaration is used unless a time installation exit routine exists for formatting times, in which case
   the length is that specified for the LOCAL TIME LENGTH installation option.
For more details about the DCLGEN subcommand, see Part 3 of DB2 Command Reference.
Change defaults as desired:
 1 DB2 NAME ............. ===> DSN       (Subsystem identifier)
 2 DB2 CONNECTION RETRIES ===> 0         (How many retries for DB2 connection)
 3 APPLICATION LANGUAGE   ===> IBMCOB    (ASM, C, CPP, IBMCOB, FORTRAN, PLI)
 4 LINES/PAGE OF LISTING  ===> 80        (A number from 5 to 999)
 5 MESSAGE LEVEL ........ ===> I         (Information, Warning, Error, Severe)
 6 SQL STRING DELIMITER   ===> DEFAULT   (DEFAULT, ' or ")
 7 DECIMAL POINT ........ ===> .         (. or ,)
 8 STOP IF RETURN CODE >= ===> 8         (Lowest terminating return code)
 9 NUMBER OF ROWS         ===> 20        (For ISPF Tables)
10 CHANGE HELP BOOK NAMES?===> NO        (YES to change HELP data set names)
11 DB2I JOB STATEMENT:                   (Optional if your site has a SUBMIT exit)
   ===> //USRT001A JOB (ACCOUNT),NAME
   ===> //*
   ===> //*
   ===> //*

END to cancel    HELP for more information
The COBOL Defaults panel is then displayed, as shown in Figure 26. Fill in the COBOL Defaults panel as necessary. Press Enter to save the new defaults, if any, and return to the DB2I Primary Option menu.
DSNEOP02                         COBOL DEFAULTS
COMMAND ===>_

Change defaults as desired:
 1 COBOL STRING DELIMITER ===>        (DEFAULT, ' or ")
 2 DBCS SYMBOL FOR DCLGEN ===>        (G/N - Character in PIC clause)
Figure 26. The COBOL defaults panel. Shown only if the field APPLICATION LANGUAGE on the DB2I Defaults panel is IBMCOB.
Enter table name for which declarations are required: 1 SOURCE TABLE NAME ===> DSN8810.VPHONE 2 TABLE OWNER ..... ===>
3 AT LOCATION ..... ===> (Optional) Enter destination data set: (Can be sequential or partitioned) 4 DATA SET NAME ... ===> TEMP(VPHONEC) 5 DATA SET PASSWORD ===> (If password protected) Enter options as desired: 6 ACTION .......... ===> ADD (ADD new or REPLACE old declaration) 7 COLUMN LABEL .... ===> NO (Enter YES for column label) 8 STRUCTURE NAME .. ===> (Optional) 9 FIELD NAME PREFIX ===> (Optional) 10 DELIMIT DBCS .... ===> YES (Enter YES to delimit DBCS identifiers) 11 COLUMN SUFFIX ... ===> NO (Enter YES to append column name) 12 INDICATOR VARS .. ===> NO (Enter YES for indicator variables) 13 ADDITIONAL OPTIONS===> NO (Enter YES to change additional options) PRESS: ENTER to process END to exit HELP for more information DSNEDP01
Figure 27. DCLGEN panel: selecting source table and destination data set
If the operation succeeds, a message is displayed at the top of your screen, as shown in Figure 28.
DSNE905I EXECUTION COMPLETE, MEMBER VPHONEC ADDED ***
DB2 again displays the DCLGEN screen, as shown in Figure 29 on page 140. Press Enter to return to the DB2I Primary Option menu.
DSNEDP01                          DCLGEN                        SSID: DSN
===>
Enter table name for which declarations are required: 1 SOURCE TABLE NAME ===> DSN8810.VPHONE 2 TABLE OWNER ..... ===>
3 AT LOCATION ..... ===> (Optional) Enter destination data set: (Can be sequential or partitioned) 4 DATA SET NAME ... ===> TEMP(VPHONEC) 5 DATA SET PASSWORD ===> (If password protected) Enter options as desired: 6 ACTION .......... ===> ADD (ADD new or REPLACE old declaration) 7 COLUMN LABEL .... ===> NO (Enter YES for column label) 8 STRUCTURE NAME .. ===> (Optional) 9 FIELD NAME PREFIX ===> (Optional) 10 DELIMIT DBCS .... ===> YES (Enter YES to delimit DBCS identifiers) 11 COLUMN SUFFIX ... ===> NO (Enter YES to append column name) 12 INDICATOR VARS .. ===> NO (Enter YES for indicator variables) 13 ADDITIONAL OPTIONS===> NO (Enter YES to change additional options) PRESS: ENTER to process END to exit HELP for more information DSNEDP01
***** DCLGEN TABLE(DSN8810.VPHONE) *** ***** LIBRARY(SYSADM.TEMP.COBOL(VPHONEC)) *** ***** QUOTE *** ***** ... IS THE DCLGEN COMMAND THAT MADE THE FOLLOWING STATEMENTS *** EXEC SQL DECLARE DSN8810.VPHONE TABLE ( LASTNAME VARCHAR(15) NOT NULL, FIRSTNAME VARCHAR(12) NOT NULL, MIDDLEINITIAL CHAR(1) NOT NULL, PHONENUMBER VARCHAR(4) NOT NULL, EMPLOYEENUMBER CHAR(6) NOT NULL, DEPTNUMBER CHAR(3) NOT NULL, DEPTNAME VARCHAR(36) NOT NULL ) END-EXEC. ***** COBOL DECLARATION FOR TABLE DSN8810.VPHONE ****** 01 DCLVPHONE. 10 LASTNAME. 49 LASTNAME-LEN PIC S9(4) USAGE COMP. 49 LASTNAME-TEXT PIC X(15). 10 FIRSTNAME. 49 FIRSTNAME-LEN PIC S9(4) USAGE COMP. 49 FIRSTNAME-TEXT PIC X(12). 10 MIDDLEINITIAL PIC X(1). 10 PHONENUMBER. 49 PHONENUMBER-LEN PIC S9(4) USAGE COMP. 49 PHONENUMBER-TEXT PIC X(4). 10 EMPLOYEENUMBER PIC X(6). 10 DEPTNUMBER PIC X(3). 10 DEPTNAME. 49 DEPTNAME-LEN PIC S9(4) USAGE COMP. 49 DEPTNAME-TEXT PIC X(36). ***** THE NUMBER OF COLUMNS DESCRIBED BY THIS DECLARATION IS 7 ****** Figure 30. DCLGEN results displayed in edit mode
Assembler
If your program is reentrant, you must include the SQLCA within a unique data area that is acquired for your task (a DSECT). For example, at the beginning of your program, specify:
PROGAREA DSECT EXEC SQL INCLUDE SQLCA
As an alternative, you can create a separate storage area for the SQLCA and provide addressability to that area. See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE statement and Appendix D of DB2 SQL Reference for a complete description of SQLCA fields.
You must place SQLDA declarations before the first SQL statement that references the data descriptor, unless you use the precompiler option TWOPASS. See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE statement and Appendix E of DB2 SQL Reference for a complete description of SQLDA fields.
Multiple-row FETCH statements: You can use only the FETCH ... USING DESCRIPTOR form of the multiple-row FETCH statement in an assembler program. The DB2 precompiler does not recognize declarations of host variable arrays for an assembler program. Comments: You cannot include assembler comments in SQL statements. However, you can include SQL comments in any embedded SQL statement. Continuation for SQL statements: The line continuation rules for SQL statements are the same as those for assembler statements, except that you must specify EXEC SQL within one line. Any part of the statement that does not fit on one line can appear on subsequent lines, beginning at the continuation margin (column 16, the default). Every line of the statement, except the last, must have a continuation character (a non-blank character) immediately after the right margin in column 72. Declaring tables and views: Your assembler program should include a DECLARE statement to describe each table and view the program accesses. Including code: To include SQL statements or assembler host variable declaration statements from a member of a partitioned data set, place the following SQL statement in the source code where you want to include the statements:
EXEC SQL INCLUDE member-name
You cannot nest SQL INCLUDE statements. Margins: The precompiler option MARGINS allows you to set a left margin, a right margin, and a continuation margin. The default values for these margins are columns 1, 71, and 16, respectively. If EXEC SQL starts before the specified left margin, the DB2 precompiler does not recognize the SQL statement. If you use the default margins, you can place an SQL statement anywhere between columns 2 and 71. Names: You can use any valid assembler name for a host variable. However, do not use external entry names or access plan names that begin with DSN or host variable names that begin with SQL. These names are reserved for DB2. The first character of a host variable that is used in embedded SQL cannot be an underscore. However, you can use an underscore as the first character in a symbol that is not used in embedded SQL.
Statement labels: You can prefix an SQL statement with a label. The first line of an SQL statement can use a label beginning in the left margin (column 1). If you do not use a label, leave column 1 blank. WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER statement must be a label in the assembler source code and must be within the scope of the SQL statements that WHENEVER affects. Special assembler considerations: The following considerations apply to programs written in assembler: v To allow for reentrant programs, the precompiler puts all the variables and structures it generates within a DSECT called SQLDSECT, and it generates an assembler symbol called SQLDLEN. SQLDLEN contains the length of the DSECT. Your program must allocate an area of the size indicated by SQLDLEN, initialize it, and provide addressability to it as the DSECT SQLDSECT. The precompiler does not generate code to allocate the storage for SQLDSECT; the application program must allocate the storage.
CICS An example of code to support reentrant programs, running under CICS, follows:
DFHEISTG DSECT
DFHEISTG
         EXEC SQL INCLUDE SQLCA
*
         DS    0F
SQDWSREG EQU   R7
SQDWSTOR DS    (SQLDLEN)C           RESERVE STORAGE TO BE USED FOR SQLDSECT
         .
         .
         .
XXPROGRM DFHEIENT CODEREG=R12,EIBREG=R11,DATAREG=R13
*
*
*                                   SQL WORKING STORAGE
         LA    SQDWSREG,SQDWSTOR    GET ADDRESS OF SQLDSECT
         USING SQLDSECT,SQDWSREG    AND TELL ASSEMBLER ABOUT IT
*
In this example, the actual storage allocation is done by the DFHEIENT macro.
TSO The sample program in prefix.SDSNSAMP(DSNTIAD) contains an example of how to acquire storage for the SQLDSECT in a program that runs in a TSO environment. The following example code contains pieces from prefix.SDSNSAMP(DSNTIAD) with explanations in the comments.
DSNTIAD  CSECT                      CONTROL SECTION NAME
         SAVE  (14,12)              ANY SAVE SEQUENCE
         LR    R12,R15              CODE ADDRESSABILITY
         USING DSNTIAD,R12          TELL THE ASSEMBLER
         LR    R7,R1                SAVE THE PARM POINTER
* * Allocate storage of size PRGSIZ1+SQLDSIZ, where: * - PRGSIZ1 is the size of the DSNTIAD program area * - SQLDSIZ is the size of the SQLDSECT, and declared * when the DB2 precompiler includes the SQLDSECT * L R6,PRGSIZ1 GET SPACE FOR USER PROGRAM A R6,SQLDSIZ GET SPACE FOR SQLDSECT GETMAIN R,LV=(6) GET STORAGE FOR PROGRAM VARIABLES LR R10,R1 POINT TO IT * * Initialize the storage * LR R2,R10 POINT TO THE FIELD LR R3,R6 GET ITS LENGTH SR R4,R4 CLEAR THE INPUT ADDRESS SR R5,R5 CLEAR THE INPUT LENGTH MVCL R2,R4 CLEAR OUT THE FIELD * * Map the storage for DSNTIAD program area * ST R13,FOUR(R10) CHAIN THE SAVEAREA PTRS ST R10,EIGHT(R13) CHAIN SAVEAREA FORWARD LR R13,R10 POINT TO THE SAVEAREA USING PRGAREA1,R13 SET ADDRESSABILITY * * Map the storage for the SQLDSECT * LR R9,R13 POINT TO THE PROGAREA A R9,PRGSIZ1 THEN PAST TO THE SQLDSECT USING SQLDSECT,R9 SET ADDRESSABILITY ... LTORG ********************************************************************** * * * DECLARE VARIABLES, WORK AREAS * * * ********************************************************************** PRGAREA1 DSECT WORKING STORAGE FOR THE PROGRAM ... DS 0D PRGSIZE1 EQU *-PRGAREA1 DYNAMIC WORKAREA SIZE ... DSNTIAD CSECT RETURN TO CSECT FOR CONSTANT PRGSIZ1 DC A(PRGSIZE1) SIZE OF PROGRAM WORKING STORAGE CA DSECT EXEC SQL INCLUDE SQLCA ...
v DB2 does not process set symbols in SQL statements. v Generated code can include more than two continuations per comment.
v Generated code uses literal constants (for example, =F'-84'), so an LTORG statement might be necessary.
v Generated code uses registers 0, 1, 14, and 15. Register 13 points to a save area that the called program uses. Register 15 does not contain a return code after a call that is generated by an SQL statement.
CICS A CICS application program uses the DFHEIENT macro to generate the entry point code. When using this macro, consider the following: If you use the default DATAREG in the DFHEIENT macro, register 13 points to the save area. If you use any other DATAREG in the DFHEIENT macro, you must provide addressability to a save area. For example, to use SAVED, you can code instructions to save, load, and restore register 13 around each SQL statement as in the following example.
         ST    13,SAVER13           SAVE REGISTER 13
         LA    13,SAVED             POINT TO SAVE AREA
         EXEC  SQL . . .
         L     13,SAVER13           RESTORE REGISTER 13
v If you have an addressability error in precompiler-generated code because of input or output host variables in an SQL statement, check to make sure that you have enough base registers. v Do not put CICS translator options in the assembly source code. Instead, pass the options to the translator by using the PARM field.
Numeric host variables: Figure 31 shows the syntax for declarations of numeric host variables. The numeric value specifies the scale of the packed decimal variable. If value does not include a decimal point, the scale is 0.
    variable-name  DC or DS  H[L2] | F[L4] | P'value' | PLn'value' | E[L4] | EH[L4] | EB[L4] | D[L8] | DH[L8] | DB[L8]

Figure 31. Numeric host variables
For floating-point data types (E, EH, EB, D, DH, and DB), DB2 uses the FLOAT precompiler option to determine whether the host variable is in IEEE binary floating-point or System/390 hexadecimal floating-point format. If the precompiler option is FLOAT(S390), you need to define your floating-point host variables as E, EH, D, or DH. If the precompiler option is FLOAT(IEEE), you need to define your floating-point host variables as EB or DB. DB2 converts all floating-point input data to System/390 hexadecimal floating-point before storing it. Character host variables: The three valid forms for character host variables are: v Fixed-length strings v Varying-length strings v CLOBs The following figures show the syntax for forms other than CLOBs. See Figure 38 on page 151 for the syntax of CLOBs. Figure 32 shows the syntax for declarations of fixed-length character strings.
    variable-name  DC or DS  C[Ln]

Figure 32. Fixed-length character strings
Figure 33 on page 150 shows the syntax for declarations of varying-length character strings.
    variable-name  DC or DS  H[L2],CLn

Figure 33. Varying-length character strings
Graphic host variables: The three valid forms for graphic host variables are: v Fixed-length strings v Varying-length strings v DBCLOBs The following figures show the syntax for forms other than DBCLOBs. See Figure 38 on page 151 for the syntax of DBCLOBs. In the syntax diagrams, value denotes one or more DBCS characters, and the symbols < and > represent shift-out and shift-in characters. Figure 34 shows the syntax for declarations of fixed-length graphic strings.
    variable-name  DC or DS  G[Ln] | G[Ln]'<value>'

Figure 34. Fixed-length graphic strings
Figure 35 shows the syntax for declarations of varying-length graphic strings.

    variable-name  DS or DC  H[L2]'m',GLn'<value>'

Figure 35. Varying-length graphic strings
Result set locators: Figure 36 shows the syntax for declarations of result set locators. See Chapter 25, Using stored procedures for client/server processing, on page 629 for a discussion of how to use these host variables.
    variable-name  DC or DS  F[L4]

Figure 36. Result set locators
Table Locators: Figure 37 shows the syntax for declarations of table locators. See Accessing transition tables in a user-defined function or stored procedure on page 345 for a discussion of how to use these host variables.
LOB variables and locators: Figure 38 shows the syntax for declarations of BLOB, CLOB, and DBCLOB host variables and locators. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use these host variables.
    variable-name  SQL TYPE IS  { BLOB | CLOB | DBCLOB } (length [K|M|G])   or
    variable-name  SQL TYPE IS  { BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR }
    (BINARY LARGE OBJECT, CHARACTER LARGE OBJECT, and CHAR LARGE OBJECT are synonyms for BLOB and CLOB)

Figure 38. LOB variables and locators
If you specify the length of the LOB in terms of KB, MB, or GB, you must leave no spaces between the length and K, M, or G. ROWIDs: Figure 39 shows the syntax for declarations of ROWID host variables. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use these host variables.
    variable-name  SQL TYPE IS ROWID

Figure 39. ROWID host variables

Table 11. SQL data types the precompiler uses for assembler declarations

Assembler data type                            SQLTYPE  SQLLEN                 SQL data type
DS HL2                                         500      2                      SMALLINT
DS FL4                                         496      4                      INTEGER
DS P'value', DS PLn'value', or DS PLn          484      p in byte 1,           DECIMAL(p,s) -- see the description
                                                        s in byte 2            for DECIMAL(p,s) in Table 12 on page 153
DS EL4, DS EHL4, or DS EBL4                    480      4                      REAL or FLOAT(n), 1<=n<=21
DS DL8, DS DHL8, or DS DBL8                    480      8                      DOUBLE PRECISION, or FLOAT(n), 22<=n<=53
DS CLn                                         452      n                      CHAR(n)
DS HL2,CLn  1<=n<=255                          448      n                      VARCHAR(n)
DS HL2,CLn  n>255                              456      n                      VARCHAR(n)
DS GLm  2<=m<=254 (1)                          468      n                      GRAPHIC(n) (2)
DS HL2,GLm  2<=m<=254 (1)                      464      n                      VARGRAPHIC(n) (2)
DS HL2,GLm  m>254 (1)                          472      n                      VARGRAPHIC(n) (2)
DS FL4                                         972      4                      Result set locator (3)
SQL TYPE IS TABLE LIKE table-name AS LOCATOR   976      4                      Table locator (3)
SQL TYPE IS BLOB_LOCATOR                       960      4                      BLOB locator (3)
SQL TYPE IS CLOB_LOCATOR                       964      4                      CLOB locator (3)
SQL TYPE IS DBCLOB_LOCATOR                     968      4                      DBCLOB locator (3)
SQL TYPE IS BLOB(n)  1<=n<=2147483647          404      n                      BLOB(n)
SQL TYPE IS CLOB(n)  1<=n<=2147483647          408      n                      CLOB(n)
SQL TYPE IS DBCLOB(n)  1<=n<=1073741823 (2)    412      n                      DBCLOB(n) (2)
SQL TYPE IS ROWID                              904      40                     ROWID

Notes:
1. m is the number of bytes.
2. n is the number of double-byte characters.
3. This data type cannot be used as a column type.
Table 12 on page 153 helps you define host variables that receive output from the database. You can use Table 12 on page 153 to determine the assembler data type that is equivalent to a given SQL data type. For example, if you retrieve TIMESTAMP data, you can use the table to define a suitable host variable in the program that receives the data value. Table 12 on page 153 shows direct conversions between DB2 data types and host data types. However, a number of DB2 data types are compatible. When you do assignments or comparisons of data that have compatible data types, DB2 does conversions between those compatible data types. See Table 1 on page 5 for information about compatible data types.
Table 12. SQL data types mapped to typical assembler declarations

SMALLINT
  Assembler equivalent: DS HL2
INTEGER
  Assembler equivalent: DS F
DECIMAL(p,s) or NUMERIC(p,s)
  Assembler equivalent: DS P'value', DS PLn'value', or DS PLn
  Notes: p is precision; s is scale. 1<=p<=31 and 0<=s<=p. 1<=n<=16. value is a literal value that includes
  a decimal point. You must use Ln, value, or both. Using only value is recommended.
  Precision: If you use Ln, it is 2n-1; otherwise, it is the number of digits in value.
  Scale: If you use value, it is the number of digits to the right of the decimal point; otherwise, it is 0.
  For efficient use of indexes: Use value. If p is even, do not use Ln and be sure the precision of value is p
  and the scale of value is s. If p is odd, you can use Ln (although it is not advised), but you must choose n
  so that 2n-1=p, and value so that the scale is s. Include a decimal point in value, even when the scale of
  value is 0.
REAL or FLOAT(n), 1<=n<=21
  Assembler equivalent: DS EL4, DS EHL4, or DS EBL4 (1)
DOUBLE PRECISION or FLOAT(n), 22<=n<=53
  Assembler equivalent: DS DL8, DS DHL8, or DS DBL8 (1)
CHAR(n), 1<=n<=255
  Assembler equivalent: DS CLn
VARCHAR(n)
  Assembler equivalent: DS HL2,CLn
GRAPHIC(n)
  Assembler equivalent: DS GLm
  Notes: m is expressed in bytes. n is the number of double-byte characters. 1<=n<=127
VARGRAPHIC(n)
  Assembler equivalent: DS HL2,GLx or DS HL2'm',GLx'<value>'
  Notes: x and m are expressed in bytes. n is the number of double-byte characters. < and > represent
  shift-out and shift-in characters.
DATE
  Assembler equivalent: DS CLn
  Notes: If you are using a date exit routine, n is determined by that routine; otherwise, n must be at least 10.
TIME
  Assembler equivalent: DS CLn
  Notes: If you are using a time exit routine, n is determined by that routine. Otherwise, n must be at least 6;
  to include seconds, n must be at least 8.
TIMESTAMP
  Assembler equivalent: DS CLn
  Notes: n must be at least 19. To include microseconds, n must be 26; if n is less than 26, truncation occurs
  on the microseconds part.
Result set locator
  Assembler equivalent: DS F
  Notes: Use this data type only to receive result sets. Do not use this data type as a column type.
Table 12. SQL data types mapped to typical assembler declarations (continued)

Table locator
  Assembler equivalent: SQL TYPE IS TABLE LIKE table-name AS LOCATOR
  Notes: Use this data type only in a user-defined function or stored procedure to receive rows of a
  transition table. Do not use this data type as a column type.
BLOB locator
  Assembler equivalent: SQL TYPE IS BLOB_LOCATOR
  Notes: Use this data type only to manipulate data in BLOB columns. Do not use this data type as a column type.
CLOB locator
  Assembler equivalent: SQL TYPE IS CLOB_LOCATOR
  Notes: Use this data type only to manipulate data in CLOB columns. Do not use this data type as a column type.
DBCLOB locator
  Assembler equivalent: SQL TYPE IS DBCLOB_LOCATOR
  Notes: Use this data type only to manipulate data in DBCLOB columns. Do not use this data type as a column type.
BLOB(n)
  Assembler equivalent: SQL TYPE IS BLOB(n)
  Notes: 1<=n<=2147483647
CLOB(n)
  Assembler equivalent: SQL TYPE IS CLOB(n)
  Notes: 1<=n<=2147483647
DBCLOB(n)
  Assembler equivalent: SQL TYPE IS DBCLOB(n)
  Notes: n is the number of double-byte characters. 1<=n<=1073741823
ROWID
  Assembler equivalent: SQL TYPE IS ROWID

Notes:
1. IEEE floating-point host variables are not supported in user-defined functions and stored procedures.
For more information about how to use these data types, see:
  Table locator    Accessing transition tables in a user-defined function or stored procedure on page 345
  LOB locators     Chapter 14, Programming for large objects, on page 297
Overflow: Be careful of overflow. For example, suppose you retrieve an INTEGER column value into a DS H host variable, and the column value is larger than 32767. You get an overflow warning or an error, depending on whether you provided an indicator variable. Truncation: Be careful of truncation. For example, if you retrieve an 80-character CHAR column value into a host variable declared as DS CL70, the rightmost ten characters of the retrieved string are truncated. If you retrieve a floating-point or decimal column value into a host variable declared as DS F, it removes any fractional part of the value.
Use a VALUES INTO statement to assign a GRAPHIC or VARGRAPHIC function parameter to a DBCLOB locator host variable. However, you cannot use a FETCH statement to assign a value in a GRAPHIC or VARGRAPHIC column to a DBCLOB locator host variable. v Datetime data types are compatible with character host variables. A DATE, TIME, or TIMESTAMP column is compatible with a fixed-length or varying-length assembler character host variable. v A BLOB column or a BLOB locator is compatible only with a BLOB host variable. v The ROWID column is compatible only with a ROWID host variable. v A host variable is compatible with a distinct type if the host variable type is compatible with the source type of the distinct type. For information about assigning and comparing distinct types, see Chapter 16, Creating and using distinct types, on page 367. When necessary, DB2 automatically converts a fixed-length string to a varying-length string, or a varying-length string to a fixed-length string.
Figure 40 on page 157 shows the syntax for declarations of indicator host variables.
    variable-name  DC or DS  H[L2]

Figure 40. Indicator variables
DSNTIAR syntax:

    CALL DSNTIAR,(sqlca, message, lrecl),MF=(E,PARM)

The DSNTIAR parameters have the following meanings:

sqlca
  An SQL communication area.
message
  An output area, defined as a varying-length string, in which DSNTIAR places the message text. The first
  halfword contains the length of the remaining area; its minimum value is 240. The output lines of text,
  each line being the length specified in lrecl, are put into this area. For example, you could specify the
  format of the output area as:

LINES    EQU   10
LRECL    EQU   132
         ...
MSGLRECL DC    AL4(LRECL)
MESSAGE  DS    H,CL(LINES*LRECL)
         ORG   MESSAGE
         DC    AL2(LINES*LRECL)
         DS    CL(LRECL)            text line 1
         DS    CL(LRECL)            text line 2
         ...
         DS    CL(LRECL)            text line n
         ORG
CALL DSNTIAR,(SQLCA,MESSAGE,MSGLRECL),MF=(E,PARM)
where MESSAGE is the name of the message output area, LINES is the number of lines in the message output area, and LRECL is the length of each line.
lrecl
  A fullword containing the logical record length of output messages, between 72 and 240.

The expression MF=(E,PARM) is a z/OS macro parameter that indicates dynamic execution. PARM is the name of a data area that contains a list of pointers to the call parameters of DSNTIAR. See Appendix B, Sample applications, on page 1013 for instructions on how to access and print the source code for the sample program.
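For example, a minimal sketch of the parameter-list area that the execute form of the CALL macro assumes (the name PARM matches the MF=(E,PARM) operand):

PARM     DS    0F
         DS    3A                   FILLED IN WITH THE ADDRESSES OF SQLCA, MESSAGE, AND LRECL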
CICS If your CICS application requires CICS storage handling, you must use the subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC,(eib,commarea,sqlca,msg,lrecl),MF=(E,PARM)
DSNTIAC has extra parameters, which you must use for calls to routines that use CICS commands.

eib        EXEC interface block
commarea   communication area
For more information on these parameters, see the appropriate application programming guide for CICS. The remaining parameter descriptions are the same as those for DSNTIAR. Both DSNTIAC and DSNTIAR format the SQLCA in the same way. You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you must also define them in the CSD. For an example of CSD entry generation statements for use with DSNTIAC, see member DSN8FRDO in the data set prefix.SDSNSAMP. The assembler source code for DSNTIAC and job DSNTEJ5A, which assembles and link-edits DSNTIAC, are also in the data set prefix.SDSNSAMP.
C
char SQLSTATE[6];
Alternatively, you can include an SQLCA, which contains the SQLCODE and SQLSTATE variables. DB2 sets the SQLCODE and SQLSTATE values after each SQL statement executes. An application can check these values to determine whether the last SQL statement was successful. All SQL statements in the program must be within the scope of the declaration of the SQLCODE and SQLSTATE variables. Whether you define the SQLCODE or SQLSTATE variable or an SQLCA in your program depends on whether you specify the precompiler option STDSQL(YES) to conform to SQL standard, or STDSQL(NO) to conform to DB2 rules.
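For example, a minimal sketch of checking these values after a statement (this sketch assumes that you declare the variables yourself rather than including an SQLCA; the DELETE statement is only an illustration):

EXEC SQL BEGIN DECLARE SECTION;
long SQLCODE;
char SQLSTATE[6];
EXEC SQL END DECLARE SECTION;
   ...
EXEC SQL DELETE FROM DSN8810.EMP WHERE EMPNO = '000340';
if (SQLCODE != 0)                            /* did the last SQL statement fail? */
   printf("SQLCODE = %ld, SQLSTATE = %.5s\n", SQLCODE, SQLSTATE);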
A standard declaration includes both a structure definition and a static data area named sqlca. See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE statement and Appendix D of DB2 SQL Reference for a complete description of SQLCA fields.
A standard declaration includes only a structure definition with the name sqlda. See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE statement and Appendix E of DB2 SQL Reference for a complete description of SQLDA fields. You must place SQLDA declarations before the first SQL statement that references the data descriptor, unless you use the precompiler option TWOPASS. You can place an SQLDA declaration wherever C allows a structure definition. Normal C scoping rules apply.
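For example, a minimal sketch of bringing the sqlda structure definition into the program:

EXEC SQL INCLUDE SQLDA;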
Comments: You can include C comments (/* ... */) within SQL statements wherever you can use a blank, except between the keywords EXEC and SQL. You can use single-line comments (starting with //) in C language statements, but not in embedded SQL. You cannot nest comments. To include DBCS characters in comments, you must delimit the characters by a shift-out and shift-in control character; the first shift-in character in the DBCS string signals the end of the DBCS string. You can include SQL comments in any embedded SQL statement. Continuation for SQL statements: You can use a backslash to continue a character-string constant or delimited identifier on the following line. Declaring tables and views: Your C program should use the DECLARE TABLE statement to describe each table and view the program accesses. You can use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE statements. For more information, see Chapter 8, Generating declarations for your tables using DCLGEN, on page 131. Including code: To include SQL statements or C host variable declarations from a member of a partitioned data set, add the following SQL statement to the source code where you want to include the statements:
EXEC SQL INCLUDE member-name;
You cannot nest SQL INCLUDE statements. Do not use C #include statements to include SQL statements or C host variable declarations. Margins: Code SQL statements in columns 1 through 72, unless you specify other margins to the DB2 precompiler. If EXEC SQL is not within the specified margins, the DB2 precompiler does not recognize the SQL statement. Names: You can use any valid C name for a host variable, subject to the following restrictions: v Do not use DBCS characters. v Do not use external entry names or access plan names that begin with DSN, and do not use host variable names that begin with SQL (in any combination of uppercase or lowercase letters). These names are reserved for DB2. Nulls and NULs: C and SQL differ in the way they use the word null. The C language has a null character (NUL), a null pointer (NULL), and a null statement (just a semicolon). The C NUL is a single character that compares equal to 0. The C NULL is a special reserved pointer value that does not point to any valid data object. The SQL null value is a special value that is distinct from all non-null values and denotes the absence of a (nonnull) value. In this chapter, NUL is the null character in C and NULL is the SQL null value. Sequence numbers: The source statements that the DB2 precompiler generates do not include sequence numbers. Statement labels: You can precede SQL statements with a label. Trigraph characters: Some characters from the C character set are not available on all keyboards. You can enter these characters into a C source program using a sequence of three characters called a trigraph. The trigraph characters that DB2 supports are the same as those that the C compiler supports. WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER statement must be within the scope of any SQL statements that the statement WHENEVER affects. Special C considerations: v Using the C/370 multi-tasking facility, in which multiple tasks execute SQL statements, causes unpredictable results. v You must run the DB2 precompiler before running the C preprocessor. v The DB2 precompiler does not support C preprocessor directives. v If you use conditional compiler directives that contain C code, either place them after the first C token in your application program, or include them in the C program using the #include preprocessor directive. Refer to the appropriate C documentation for more information about C preprocessor directives. | | | | |
Precede C statements that define the host variables and host variable arrays with the BEGIN DECLARE SECTION statement, and follow the C statements with the END DECLARE SECTION statement. You can have more than one host variable declaration section in your program.

A colon (:) must precede all host variables and all host variable arrays in an SQL statement. The names of host variables and host variable arrays must be unique within the program, even if the variables and variable arrays are in different blocks, classes, or procedures. You can qualify the names with a structure name to make them unique.

An SQL statement that uses a host variable or host variable array must be within the scope of the statement that declares that variable or array. You define host variable arrays for use with multiple-row FETCH and INSERT statements.
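For example, a minimal sketch of a declare section (the names and sizes are only illustrations):

EXEC SQL BEGIN DECLARE SECTION;
char hv_empno[7];              /* NUL-terminated form for a CHAR(6) value      */
long hv_count;                 /* an INTEGER value                             */
long hva_counts[10];           /* host variable array for multiple-row FETCH   */
EXEC SQL END DECLARE SECTION;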
Numeric host variables: The general form of a numeric host variable declaration is:

    [auto | extern | static] [const | volatile]
        { float | double | int | short [int] | sqlint32 | long [int] | long long | decimal(integer[,integer]) }
        variable-name [=expression] [, ...] ;

Figure 41. Numeric host variables
Character host variables: The four valid forms for character host variables are: v Single-character form v NUL-terminated character form v VARCHAR structured form v CLOBs The following figures show the syntax for forms other than CLOBs. See Figure 50 on page 168 for the syntax of CLOBs.
Figure 42 shows the syntax for declarations of single-character host variables.

    [auto | extern | static] [const | volatile] [unsigned] char variable-name [=expression] [, ...] ;

Figure 42. Single-character form

Figure 43 shows the syntax for declarations of NUL-terminated character host variables.

    [auto | extern | static] [const | volatile] [unsigned] char variable-name[length] [=expression] [, ...] ;

Figure 43. NUL-terminated character form
Notes: 1. On input, the string contained by the variable must be NUL-terminated. 2. On output, the string is NUL-terminated. 3. A NUL-terminated character host variable maps to a varying-length character string (except for the NUL). Figure 44 on page 164 shows the syntax for declarations of varying-length character host variables that use the VARCHAR structured form.
    [auto | extern | static] [const | volatile] struct [tag] { short [int] var-1; char var-2[length]; }
        variable-name [, ...] ;

Figure 44. VARCHAR structured form
Notes: 1. var-1 and var-2 must be simple variable references. You cannot use them as host variables. 2. You can use the struct tag to define other data areas that you cannot use as host variables. Example: The following examples show valid and invalid declarations of the VARCHAR structured form:
EXEC SQL BEGIN DECLARE SECTION;
/* valid declaration of host variable VARCHAR vstring */
struct VARCHAR {
    short len;
    char s[10];
} vstring;
/* invalid declaration of host variable VARCHAR wstring */
struct VARCHAR wstring;
Graphic host variables: The four valid forms for graphic host variables are: v Single-graphic form v NUL-terminated graphic form v VARGRAPHIC structured form. v DBCLOBs | You can use the C data type sqldbchar to define a host variable that inserts, updates, deletes, and selects data from GRAPHIC or VARGRAPHIC columns. The following figures show the syntax for forms other than DBCLOBs. See Figure 50 on page 168 for the syntax of DBCLOBs. Figure 45 on page 165 shows the syntax for declarations of single-graphic host variables.
    [auto | extern | static] [const | volatile] sqldbchar variable-name [=expression] [, ...] ;

Figure 45. Single-graphic form
The single-graphic form declares a fixed-length graphic string of length 1. You cannot use array notation in variable-name. Figure 46 shows the syntax for declarations of NUL-terminated graphic host variables.

    [auto | extern | static] [const | volatile] sqldbchar variable-name[length] [=expression] [, ...] ;

Figure 46. NUL-terminated graphic form
Notes: 1. length must be a decimal integer constant greater than 1 and not greater than 16352. 2. On input, the string in variable-name must be NUL-terminated. 3. On output, the string is NUL-terminated. 4. The NUL-terminated graphic form does not accept single-byte characters into variable-name. Figure 47 on page 166 shows the syntax for declarations of graphic host variables that use the VARGRAPHIC structured form.
    [auto | extern | static] [const | volatile] struct [tag] { short [int] var-1; sqldbchar var-2[length]; }
        variable-name [, ...] ;

Figure 47. VARGRAPHIC structured form
Notes: 1. length must be a decimal integer constant greater than 1 and not greater than 16352. 2. var-1 must be less than or equal to length. 3. var-1 and var-2 must be simple variable references. You cannot use them as host variables. 4. You can use the struct tag to define other data areas that you cannot use as host variables. Example: The following examples show valid and invalid declarations of graphic host variables that use the VARGRAPHIC structured form:
EXEC SQL BEGIN DECLARE SECTION;
/* valid declaration of host variable structured vgraph */
struct VARGRAPH {
    short len;
    sqldbchar d[10];
} vgraph;
/* invalid declaration of host variable structured wgraph */
struct VARGRAPH wgraph;
Result set locators: Figure 48 on page 167 shows the syntax for declarations of result set locators. See Chapter 25, Using stored procedures for client/server processing, on page 629 for a discussion of how to use these host variables.
    [auto | extern | static | register] [const | volatile] SQL TYPE IS RESULT_SET_LOCATOR VARYING
        variable-name [= init-value] [, ...] ;

Figure 48. Result set locators
Table Locators: Figure 49 shows the syntax for declarations of table locators. See Accessing transition tables in a user-defined function or stored procedure on page 345 for a discussion of how to use these host variables.
    [auto | extern | static | register] [const | volatile] SQL TYPE IS TABLE LIKE table-name AS LOCATOR
        variable-name [init-value] [, ...] ;

Figure 49. Table locators
LOB Variables and Locators: Figure 50 on page 168 shows the syntax for declarations of BLOB, CLOB, and DBCLOB host variables and locators. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use these host variables.
    [auto | extern | static | register] [const | volatile]
        SQL TYPE IS { BLOB | CLOB | DBCLOB } (length [K|M|G])   or
        SQL TYPE IS { BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR }
        variable-name [init-value] [, ...] ;
    (BINARY LARGE OBJECT, CHARACTER LARGE OBJECT, and CHAR LARGE OBJECT are synonyms for BLOB and CLOB)

Figure 50. LOB variables and locators
ROWIDs: Figure 51 shows the syntax for declarations of ROWID host variables. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use these host variables.

    [auto | extern | static | register] [const | volatile] SQL TYPE IS ROWID variable-name [, ...] ;

Figure 51. ROWID host variables
Numeric host variable arrays: The general form of a numeric host variable array declaration is:

    [auto | extern | static] [const | volatile] [unsigned]
        { float | double | int | short [int] | sqlint32 | long [int] | long long | decimal(integer[,integer]) }
        variable-name[dimension] [, ...] ;

Figure 52. Numeric host variable arrays
Note: 1. dimension must be an integer constant between 1 and 32767. Example: The following example shows a declaration of a numeric host variable array:
EXEC SQL BEGIN DECLARE SECTION;
/* declaration of numeric host variable array */
long serial_num[10];
...
EXEC SQL END DECLARE SECTION;
Character host variable arrays: The three valid forms for character host variable arrays are: v NUL-terminated character form v VARCHAR structured form v CLOBs The following figures show the syntax for forms other than CLOBs. See Figure 57 on page 173 for the syntax of CLOBs. Figure 53 on page 170 shows the syntax for declarations of NUL-terminated character host variable arrays.
    [auto | extern | static] [const | volatile] char variable-name[dimension][length] [, ...] ;

Figure 53. NUL-terminated character form

Notes:
1. On input, the strings contained in the variable arrays must be NUL-terminated.
2. On output, the strings are NUL-terminated.
3. The strings in a NUL-terminated character host variable array map to varying-length character strings
   (except for the NUL).
4. dimension must be an integer constant between 1 and 32767.

Figure 54 shows the syntax for declarations of varying-length character host variable arrays that use the VARCHAR structured form.

    [auto | extern | static] [const | volatile] struct [tag] { short [int] var-1; char var-2[length]; }
        variable-name[dimension] [, ...] ;

Figure 54. VARCHAR structured form

Notes:
1. var-1 must be a simple variable reference, and var-2 must be a variable array reference.
2. You can use the struct tag to define other data areas, which you cannot use as host variable arrays.
3. dimension must be an integer constant between 1 and 32767.
Example: The following examples show valid and invalid declarations of VARCHAR host variable arrays:
EXEC SQL BEGIN DECLARE SECTION;
/* valid declaration of VARCHAR host variable array */
struct VARCHAR {
    short len;
    char s[18];
} name[10];
/* invalid declaration of VARCHAR host variable array */
struct VARCHAR name[10];
Graphic host variable arrays: The two valid forms for graphic host variable arrays are: v NUL-terminated graphic form v VARGRAPHIC structured form. You can use the C data type sqldbchar to define a host variable array that inserts, updates, deletes, and selects data from GRAPHIC or VARGRAPHIC columns. Figure 55 shows the syntax for declarations of NUL-terminated graphic host variable arrays.
    [auto | extern | static] [const | volatile] sqldbchar variable-name[dimension][length] [, ...] ;

Figure 55. NUL-terminated graphic form
Notes: 1. length must be a decimal integer constant greater than 1 and not greater than 16352. 2. On input, the strings contained in the variable arrays must be NUL-terminated. 3. On output, the string is NUL-terminated. 4. The NUL-terminated graphic form does not accept single-byte characters into the variable array. 5. dimension must be an integer constant between 1 and 32767. Figure 56 on page 172 shows the syntax for declarations of graphic host variable arrays that use the VARGRAPHIC structured form.
    [auto | extern | static] [const | volatile] struct [tag] { short [int] var-1; sqldbchar var-2[length]; }
        variable-name[dimension] [, ...] ;

Figure 56. VARGRAPHIC structured form

Notes:
1. length must be a decimal integer constant greater than 1 and not greater than 16352.
2. var-1 must be a simple variable reference, and var-2 must be a variable array reference.
3. You can use the struct tag to define other data areas, which you cannot use as host variable arrays.
4. dimension must be an integer constant between 1 and 32767.

Example: The following examples show valid and invalid declarations of graphic host variable arrays that use the VARGRAPHIC structured form:
EXEC SQL BEGIN DECLARE SECTION;
/* valid declaration of host variable array vgraph */
struct VARGRAPH {
    short len;
    sqldbchar d[10];
} vgraph[20];
/* invalid declaration of host variable array vgraph */
struct VARGRAPH vgraph[20];
LOB variable arrays and locators: Figure 57 on page 173 shows the syntax for declarations of BLOB, CLOB, and DBCLOB host variable arrays and locators. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use LOB variables.
    [auto | extern | static | register] [const | volatile]
        SQL TYPE IS { BLOB | CLOB | DBCLOB } (length [K|M|G])   or
        SQL TYPE IS { BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR }
        variable-name[dimension] [, ...] ;
    (BINARY LARGE OBJECT, CHARACTER LARGE OBJECT, and CHAR LARGE OBJECT are synonyms for BLOB and CLOB)

Figure 57. LOB variable arrays and locators
Note: 1. dimension must be an integer constant between 1 and 32767. ROWIDs: Figure 58 shows the syntax for declarations of ROWID variable arrays. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use these host variable arrays.
    [auto | extern | static | register] [const | volatile] SQL TYPE IS ROWID variable-name[dimension] [, ...] ;

Figure 58. ROWID variable arrays
struct {
    char c1[3];
    struct {
        short len;
        char data[5];
    } c2;
    char c3[2];
} target;
In this example, target is the name of a host structure consisting of the c1, c2, and c3 fields. c1 and c3 are character arrays, and c2 is the host variable equivalent to the SQL VARCHAR data type. The target host structure can be part of another host structure but must be the deepest level of the nested structure. Figure 59 shows the syntax for declarations of host structures.
    struct [tag] {
        ... member declarations ...
    } variable-name [=expression] ;

    The members can be numeric data types (float, double, int, short, sqlint32, long [int], long long, or
    decimal), char var-2[length] (optionally unsigned), sqldbchar var-5[length], varchar structures,
    vargraphic structures, SQL TYPE IS ROWID, or LOB data types.

Figure 59. Host structures
Figure 60 on page 175 shows the syntax for VARCHAR structures that are used within declarations of host structures.
    struct [tag] { short [int] var-1; char var-2[length]; }

Figure 60. VARCHAR structures within host structure declarations

Figure 61 shows the syntax for VARGRAPHIC structures that are used within declarations of host structures.

    struct [tag] { [signed] short [int] var-1; sqldbchar var-2[length]; }

Figure 61. VARGRAPHIC structures within host structure declarations
Figure 62 shows the syntax for LOB data types that are used within declarations of host structures.
    SQL TYPE IS { BLOB | CLOB | DBCLOB } (length [K|M|G])   or
    SQL TYPE IS { BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR }
    (BINARY LARGE OBJECT, CHARACTER LARGE OBJECT, and CHAR LARGE OBJECT are synonyms for BLOB and CLOB)

Figure 62. LOB data types within host structure declarations
DECIMAL(p,s)1
175
C
Table 13. SQL data types the precompiler uses for C declarations (continued)

C data type                                   SQLTYPE of      SQLLEN of       SQL data type
                                              host variable   host variable
float                                         480             4               FLOAT (single precision)
double                                        480             8               FLOAT (double precision)
Single-character form                         452             1               CHAR(1)
NUL-terminated character form                 460             n               VARCHAR(n-1)
VARCHAR structured form, 1<=n<=255            448             n               VARCHAR(n)
VARCHAR structured form, n>255                456             n               VARCHAR(n)
Single-graphic form                           468             1               GRAPHIC(1)
NUL-terminated graphic form (sqldbchar)       400             n               VARGRAPHIC(n-1)
VARGRAPHIC structured form, 1<=n<128          464             n               VARGRAPHIC(n)
VARGRAPHIC structured form, n>127             472             n               VARGRAPHIC(n)
SQL TYPE IS RESULT_SET_LOCATOR                972             4               Result set locator (2)
SQL TYPE IS TABLE LIKE table-name AS LOCATOR  976             4               Table locator (2)
SQL TYPE IS BLOB_LOCATOR                      960             4               BLOB locator (2)
SQL TYPE IS CLOB_LOCATOR                      964             4               CLOB locator (2)
SQL TYPE IS DBCLOB_LOCATOR                    968             4               DBCLOB locator (2)
SQL TYPE IS BLOB(n), 1<=n<=2147483647         404             n               BLOB(n)
SQL TYPE IS CLOB(n), 1<=n<=2147483647         408             n               CLOB(n)
SQL TYPE IS DBCLOB(n), 1<=n<=1073741823       412             n               DBCLOB(n) (3)
SQL TYPE IS ROWID                             904             40              ROWID
Notes:
1. p is the precision; in SQL terminology, this is the total number of digits. In C, this is called the size. s is the scale; in SQL terminology, this is the number of digits to the right of the decimal point. In C, this is called the precision. C++ does not support the decimal data type.
2. Do not use this data type as a column type.
3. n is the number of double-byte characters.
4. No exact equivalent. Use DECIMAL(19,0).
Table 14 helps you define host variables that receive output from the database. You can use the table to determine the C data type that is equivalent to a given SQL data type. For example, if you retrieve TIMESTAMP data, you can use the table to define a suitable host variable in the program that receives the data value. Table 14 shows direct conversions between DB2 data types and host data types. However, a number of DB2 data types are compatible. When you do assignments or comparisons of data that have compatible data types, DB2 does conversions between those compatible data types. See Table 1 on page 5 for information about compatible data types.
Table 14. SQL data types mapped to typical C declarations

SQL data type                  C data type                       Notes
SMALLINT                       short int
INTEGER                        long int
DECIMAL(p,s) or NUMERIC(p,s)   decimal                           You can use the double data type if your C compiler
                                                                 does not have a decimal data type; however, double
                                                                 is not an exact equivalent.
REAL or FLOAT(n)               float                             1<=n<=21
DOUBLE PRECISION or FLOAT(n)   double                            22<=n<=53
CHAR(1)                        single-character form
CHAR(n)                        no exact equivalent               If n>1, use NUL-terminated character form.
VARCHAR(n)                     NUL-terminated character form     If data can contain character NULs (\0), use
                                                                 VARCHAR structured form. Allow at least n+1 to
                                                                 accommodate the NUL-terminator.
                               VARCHAR structured form
GRAPHIC(1)                     single-graphic form
GRAPHIC(n)                     no exact equivalent               If n>1, use NUL-terminated graphic form. n is the
                                                                 number of double-byte characters.
VARGRAPHIC(n)                  NUL-terminated graphic form       If data can contain graphic NUL values (\0\0), use
                                                                 VARGRAPHIC structured form. Allow at least n+1 to
                                                                 accommodate the NUL-terminator. n is the number of
                                                                 double-byte characters.
                               VARGRAPHIC structured form        n is the number of double-byte characters.
DATE                           NUL-terminated character form     If you are using a date exit routine, that routine
                                                                 determines the length. Otherwise, allow at least 11
                                                                 characters to accommodate the NUL-terminator.
                               VARCHAR structured form           If you are using a date exit routine, that routine
                                                                 determines the length. Otherwise, allow at least 10
                                                                 characters.
TIME                           NUL-terminated character form     If you are using a time exit routine, the length is
                                                                 determined by that routine. Otherwise, the length
                                                                 must be at least 7; to include seconds, the length
                                                                 must be at least 9 to accommodate the NUL-terminator.
                               VARCHAR structured form           If you are using a time exit routine, the length is
                                                                 determined by that routine. Otherwise, the length
                                                                 must be at least 6; to include seconds, the length
                                                                 must be at least 8.
TIMESTAMP                      NUL-terminated character form     The length must be at least 20. To include
                                                                 microseconds, the length must be 27. If the length
                                                                 is less than 27, truncation occurs on the
                                                                 microseconds part.
                               VARCHAR structured form           The length must be at least 19. To include
                                                                 microseconds, the length must be 26. If the length
                                                                 is less than 26, truncation occurs on the
                                                                 microseconds part.
Result set locator             SQL TYPE IS RESULT_SET_LOCATOR    Use this data type only for receiving result sets.
                                                                 Do not use this data type as a column type.
Table locator                  SQL TYPE IS TABLE LIKE            Use this data type only in a user-defined function
                               table-name AS LOCATOR             or stored procedure to receive rows of a transition
                                                                 table. Do not use this data type as a column type.
BLOB locator                   SQL TYPE IS BLOB_LOCATOR          Use this data type only to manipulate data in BLOB
                                                                 columns. Do not use this data type as a column type.
CLOB locator                   SQL TYPE IS CLOB_LOCATOR          Use this data type only to manipulate data in CLOB
                                                                 columns. Do not use this data type as a column type.
DBCLOB locator                 SQL TYPE IS DBCLOB_LOCATOR        Use this data type only to manipulate data in DBCLOB
                                                                 columns. Do not use this data type as a column type.
BLOB(n)                        SQL TYPE IS BLOB(n)               1<=n<=2147483647
CLOB(n)                        SQL TYPE IS CLOB(n)               1<=n<=2147483647
DBCLOB(n)                      SQL TYPE IS DBCLOB(n)             n is the number of double-byte characters.
                                                                 1<=n<=1073741823
ROWID                          SQL TYPE IS ROWID
Floating-point host variables: All floating-point data is stored in DB2 in System/390 hexadecimal floating-point format. However, your host variable data can be in System/390 hexadecimal floating-point format or IEEE binary floating-point format. DB2 uses the FLOAT precompiler option to determine whether your floating-point host variables are in IEEE binary floating-point format or System/390 hexadecimal floating-point format. DB2 does no checking to determine whether the contents of a host variable match the precompiler option. Therefore, you need to ensure that your floating-point data format matches the precompiler option.

Graphic host variables in user-defined functions: The SQLUDF file, which is in data set DSN810.SDSNC.H, contains many data declarations for C language user-defined functions. SQLUDF contains the typedef sqldbchar, which you should use instead of wchar_t. Using sqldbchar lets you manipulate DBCS and Unicode UTF-16 data in the same format in which it is stored in DB2. Using sqldbchar also makes applications easier to port to other DB2 platforms.

Special purpose C data types: The locator data types are C data types and SQL data types. You cannot use locators as column types. For information about how to use these data types, see the following sections:
Result set locator
   Chapter 25, Using stored procedures for client/server processing, on page 629
Table locator
   Accessing transition tables in a user-defined function or stored procedure on page 345
LOB locators
   Chapter 14, Programming for large objects, on page 297
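Returning to graphic data, the following is a minimal sketch of a declaration that uses sqldbchar as described above (the table MYTABLE, column GNAME, and the VARGRAPHIC(20) column length are assumptions used only for illustration):

EXEC SQL BEGIN DECLARE SECTION;
/* NUL-terminated graphic host variable: 20 double-byte characters plus the NUL-terminator */
sqldbchar gname[21];
EXEC SQL END DECLARE SECTION;

EXEC SQL SELECT GNAME INTO :gname FROM MYTABLE WHERE ID = 1;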
String host variables: If you assign a string of length n to a NUL-terminated variable with a length that is:
v Less than or equal to n, DB2 inserts the characters into the host variable up to a length of (n-1), and appends a NUL at the end of the string. DB2 sets SQLWARN[1] to W and any indicator variable you provide to the original length of the source string.
v Equal to n+1, DB2 inserts the characters into the host variable and appends a NUL at the end of the string.
v Greater than n+1, the rules depend on whether the source string is a value of a fixed-length string column or a varying-length string column. If the source is a fixed-length string, whether DB2 pads it with blanks on assignment to the NUL-terminated variable depends on whether the precompiler option PADNTSTR is specified. If the source is a varying-length string, DB2 assigns it to the first n bytes of the variable and appends a NUL at the end of the string. For information about host language precompiler options, see Table 64 on page 484.

PREPARE or DESCRIBE statements: You cannot use a host variable that is of the NUL-terminated form in either a PREPARE or DESCRIBE statement when you use the DB2 precompiler. However, if you use the DB2 coprocessor for either C or C++, you can use host variables of the NUL-terminated form in PREPARE, DESCRIBE, and EXECUTE IMMEDIATE statements.

L-literals: DB2 tolerates L-literals in C application programs. DB2 allows properly formed L-literals, although it does not check for all the restrictions that the C compiler imposes on the L-literal. You can use DB2 graphic string constants in SQL statements to work with the L-literal. Do not use L-literals in SQL statements.

Overflow: Be careful of overflow. For example, suppose you retrieve an INTEGER column value into a short integer host variable and the column value is larger than 32767. You get an overflow warning or an error, depending on whether you provide an indicator variable.

Truncation: Be careful of truncation. Ensure that the host variable you declare can contain the data and a NUL-terminator, if needed. Retrieving a floating-point or decimal column value into a long integer host variable removes any fractional part of the value.
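As a sketch of the sizing rule above (the sample table DSN8810.EMP and its LASTNAME VARCHAR(15) column are used here only for illustration), a NUL-terminated host variable should allow n+1 bytes so that the terminator fits without truncation:

EXEC SQL BEGIN DECLARE SECTION;
char lastname[16];   /* 15 data bytes + 1 byte for the NUL-terminator */
EXEC SQL END DECLARE SECTION;

EXEC SQL
  SELECT LASTNAME INTO :lastname
  FROM DSN8810.EMP
  WHERE EMPNO = '000010';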
In C, a real (floating-point) constant can have a suffix of f or F to show a data type of float, or a suffix of l or L to show a type of long double. A floating-point constant in an SQL statement must not use these suffixes.

Integer constants: In C, you can provide integer constants in hexadecimal form if the first two characters are 0x or 0X. You cannot use this form in an SQL statement. In C, an integer constant can have a suffix of u or U to show that it is an unsigned integer. An integer constant can have a suffix of l or L to show a long integer. You cannot use these suffixes in SQL statements.

Character and string constants: In C, character constants and string constants can use escape sequences. You cannot use the escape sequences in SQL statements. Apostrophes and quotes have different meanings in C and SQL. In C, you can use double quotes to delimit string constants, and apostrophes to delimit character constants. The following examples illustrate the use of quotes and apostrophes in C.

Quotes
printf( "%d lines read. \n", num_lines);
Apostrophes
#define NUL '\0'
In SQL, you can use double quotes to delimit identifiers and apostrophes to delimit string constants. The following examples illustrate the use of apostrophes and quotes in SQL. Quotes
SELECT "COL#1" FROM TBL1;
Apostrophes
SELECT COL1 FROM TBL1 WHERE COL2 = 'BELL';
Character data in SQL is distinct from integer data. Character data in C is a subtype of integer data.
However, you cannot use a FETCH statement to assign a value in a CHAR or VARCHAR column to a CLOB locator host variable.
v Graphic data types are compatible with each other. A GRAPHIC, VARGRAPHIC, or DBCLOB column is compatible with a single-graphic, NUL-terminated graphic, or VARGRAPHIC structured form of a C graphic host variable.
v Graphic data types are partially compatible with DBCLOB locators. You can perform the following assignments:
  - Assign a value in a DBCLOB locator to a GRAPHIC or VARGRAPHIC column
  - Use a SELECT INTO statement to assign a GRAPHIC or VARGRAPHIC column to a DBCLOB locator host variable.
  - Assign a GRAPHIC or VARGRAPHIC output parameter from a user-defined function or stored procedure to a DBCLOB locator host variable.
  - Use a SET assignment statement to assign a GRAPHIC or VARGRAPHIC transition variable to a DBCLOB locator host variable.
  - Use a VALUES INTO statement to assign a GRAPHIC or VARGRAPHIC function parameter to a DBCLOB locator host variable.
  However, you cannot use a FETCH statement to assign a value in a GRAPHIC or VARGRAPHIC column to a DBCLOB locator host variable.
v Datetime data types are compatible with character host variables. A DATE, TIME, or TIMESTAMP column is compatible with a single-character, NUL-terminated, or VARCHAR structured form of a C character host variable.
v A BLOB column or a BLOB locator is compatible only with a BLOB host variable.
v The ROWID column is compatible only with a ROWID host variable.
v A host variable is compatible with a distinct type if the host variable type is compatible with the source type of the distinct type. For information about assigning and comparing distinct types, see Chapter 16, Creating and using distinct types, on page 367.
When necessary, DB2 automatically converts a fixed-length string to a varying-length string, or a varying-length string to a fixed-length string.

Varying-length strings: For varying-length BIT data, use the VARCHAR structured form. Some C string manipulation functions process NUL-terminated strings, and other functions process strings that are not NUL-terminated. The C string manipulation functions that process NUL-terminated strings cannot handle bit data because these functions might misinterpret a NUL character as a NUL-terminator.
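A minimal sketch of the VARCHAR structured form for bit data (the length of 200 bytes and the variable name are assumptions, not from the original):

EXEC SQL BEGIN DECLARE SECTION;
/* varying-length host variable suitable for a VARCHAR(200) FOR BIT DATA column */
struct {
  short len;         /* actual data length                  */
  char  data[200];   /* bit data; the bytes can contain NUL */
} bit_var;
EXEC SQL END DECLARE SECTION;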
When your program uses X to assign a null value to a column, the program should set the indicator variable to a negative number. DB2 then assigns a null value to the column and ignores any value in X. For more information about indicator variables, see Using indicator variables with host variables on page 83.

Using indicator variable arrays: When you retrieve data into a host variable array, if a value in its indicator array is negative, you can disregard the contents of the corresponding element in the host variable array. For more information about indicator variable arrays, see Using indicator variable arrays with host variable arrays on page 87.

Declaring indicator variables: You declare indicator variables in the same way as host variables. You can mix the declarations of the two types of variables in any way that seems appropriate.

Example: The following example shows a FETCH statement with the declarations of the host variables that are needed for the FETCH statement:
EXEC SQL FETCH CLS_CURSOR INTO :ClsCd, :Day :DayInd, :Bgn :BgnInd, :End :EndInd;
Declaring indicator variable arrays: Figure 64 on page 184 shows the syntax for declarations of an indicator array or a host structure indicator array.
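As an illustrative sketch (the names and the dimension of 10 are arbitrary), an indicator variable array is simply an array of short integers declared alongside its host variable array:

EXEC SQL BEGIN DECLARE SECTION;
long  serial[10];       /* host variable array           */
short serial_ind[10];   /* corresponding indicator array */
EXEC SQL END DECLARE SECTION;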
DSNTIAR syntax

   rc = dsntiar(&sqlca, &message, &lrecl);

The DSNTIAR parameters have the following meanings:
&sqlca
   An SQL communication area.
&message
   An output area, in VARCHAR format, in which DSNTIAR places the message text. The first halfword contains the length of the remaining area; its minimum value is 240. The output lines of text, each line being the length specified in &lrecl, are put into this area. For example, you could specify the format of the output area as:
#define data_len 132
#define data_dim 10
int lrecl = data_len;          /* logical record length of each output line  */
struct error_struct {          /* VARCHAR-format message area for DSNTIAR    */
   short int error_len;
   char      error_text[data_dim][data_len];
} error_message = {data_dim * data_len};
   .
   .
   .
rc = dsntiar(&sqlca, &error_message, &lrecl);
where error_message is the name of the message output area, data_dim is the number of lines in the message output area, and data_len is the length of each line.
&lrecl
   A fullword containing the logical record length of output messages, between 72 and 240.
To inform your compiler that DSNTIAR is an assembler language program, include one of the following statements in your application. For C, include:
#pragma linkage (dsntiar,OS)
Examples of calling DSNTIAR from an application appear in the DB2 sample C program DSN8BD3 and in the sample C++ program DSN8BE3. Both are in the library DSN8810.SDSNSAMP. See Appendix B, Sample applications, on page 1013 for instructions on how to access and print the source code for the sample programs.
CICS If your CICS application requires CICS storage handling, you must use the subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
rc = DSNTIAC(&eib, &commarea, &sqlca, &message, &lrecl);
DSNTIAC has extra parameters, which you must use for calls to routines that use CICS commands.
&eib
   EXEC interface block
&commarea
   communication area
For more information on these parameters, see the appropriate application programming guide for CICS. The remaining parameter descriptions are the same as those for DSNTIAR. Both DSNTIAC and DSNTIAR format the SQLCA in the same way.

You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you must also define them in the CSD. For an example of CSD entry generation statements for use with DSNTIAC, see job DSNTEJ5A. The assembler source code for DSNTIAC and job DSNTEJ5A, which assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
You can specify INCLUDE SQLCA or a declaration for SQLCODE wherever you can specify a 77 level or a record description entry in the WORKING-STORAGE
SECTION. You can declare a stand-alone SQLCODE variable in either the WORKING-STORAGE SECTION or LINKAGE SECTION. See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE statement and Appendix D of DB2 SQL Reference for a complete description of SQLCA fields.
Table 15. Allowable SQL statements for COBOL program sections (continued)
Notes:
1. When including host variable declarations, the INCLUDE statement must be in the WORKING-STORAGE SECTION or the LINKAGE SECTION.
You cannot put SQL statements in the DECLARATIVES section of a COBOL program. Each SQL statement in a COBOL program must begin with EXEC SQL and end with END-EXEC. If the SQL statement appears between two COBOL statements, the period is optional and might not be appropriate. If the statement appears in an IF...THEN set of COBOL statements, omit the ending period to avoid inadvertently ending the IF statement. The EXEC and SQL keywords must appear on one line, but the remainder of the statement can appear on subsequent lines. You might code an UPDATE statement in a COBOL program as follows:
EXEC SQL
  UPDATE DSN8810.DEPT
     SET MGRNO = :MGR-NUM
   WHERE DEPTNO = :INT-DEPT
END-EXEC.
Comments: You can include COBOL comment lines (* in column 7) in SQL statements wherever you can use a blank, except between the keywords EXEC and SQL. The precompiler also treats COBOL debugging and page-eject lines (/ in column 7) as comment lines. For an SQL INCLUDE statement, DB2 treats any text that follows the period after END-EXEC, and on the same line as END-EXEC, as a comment. In addition, you can include SQL comments in any embedded SQL statement.

Debugging lines: The precompiler ignores the 'D' in column 7 on debugging lines and treats it as a blank.

Continuation for SQL statements: The rules for continuing a character string constant from one line to the next in an SQL statement embedded in a COBOL program are the same as those for continuing a non-numeric literal in COBOL. However, you can use either a quote or an apostrophe as the first nonblank character in area B of the continuation line. The same rule applies for the continuation of delimited identifiers and does not depend on the string delimiter option. To conform with the SQL standard, delimit a character string constant with an apostrophe, and use a quote as the first nonblank character in area B of the continuation line for a character string constant.
COPY: Do not use a COBOL COPY statement within host variable declarations because the DB2 precompiler will not evaluate the statement.

Declaring tables and views: Your COBOL program should include the statement DECLARE TABLE to describe each table and view the program accesses. You can use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. You should include the DCLGEN members in the DATA DIVISION. For more information, see Chapter 8, Generating declarations for your tables using DCLGEN, on page 131.

Dynamic SQL in a COBOL program: In general, COBOL programs can easily handle dynamic SQL statements. COBOL programs can handle SELECT statements if the data types and the number of fields returned are fixed. If you want to use variable-list SELECT statements, use an SQLDA. See Defining SQL descriptor areas on page 187 for more information on SQLDA.

Including code: To include SQL statements or COBOL host variable declarations from a member of a partitioned data set, use the following SQL statement in the source code where you want to include the statements:
EXEC SQL INCLUDE member-name END-EXEC.
If you are using the DB2 precompiler, you cannot nest SQL INCLUDE statements. In this case, do not use COBOL verbs to include SQL statements or host variable declarations, and do not use the SQL INCLUDE statement to include CICS preprocessor related code. In general, if you are using the DB2 precompiler, use the SQL INCLUDE statement only for SQL-related coding. If you are using the COBOL SQL coprocessor, none of these restrictions apply.

Margins: You must code EXEC SQL in columns 12 through 72. Otherwise the DB2 precompiler does not recognize the SQL statement. Continued lines of an SQL statement can be in columns 8 through 72 when using the DB2 precompiler and columns 12 through 72 when using the DB2 coprocessor.

Names: You can use any valid COBOL name for a host variable. Do not use external entry names or access plan names that begin with DSN, and do not use host variable names that begin with SQL. These names are reserved for DB2.

Sequence numbers: The source statements that the DB2 precompiler generates do not include sequence numbers.

Statement labels: You can precede executable SQL statements in the PROCEDURE DIVISION with a paragraph name, if you wish.

WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER statement must be a section name or unqualified paragraph name in the PROCEDURE DIVISION.

Special COBOL considerations: The following considerations apply to programs written in COBOL:
v In a COBOL program that uses elements in a multi-level structure as host variable names, the DB2 precompiler generates the lowest two-level names.
v Whether you can use the COBOL compiler options DYNAM and NODYNAM depends on the operating environment.
TSO and IMS
You can specify the option DYNAM when compiling a COBOL program if you use the following guidelines. IMS and DB2 share a common alias name, DSNHLI, for the language interface module. You must do the following when you concatenate your libraries:
v If you use IMS with the COBOL option DYNAM, be sure to concatenate the IMS library first.
v If you run your application program only under DB2, be sure to concatenate the DB2 library first.
CICS and CAF
You must specify the NODYNAM option when you compile a COBOL program that either includes CICS statements or is translated by a separate CICS translator or the integrated CICS translator. In these cases, you cannot specify the DYNAM option. If your CICS program has a subroutine that is not translated by a separate CICS translator or the integrated CICS translator but contains SQL statements, you can specify the DYNAM option. However, in this case, you must concatenate the CICS libraries before the DB2 libraries.
You can compile COBOL stored procedures with either the DYNAM option or the NODYNAM option. If you use DYNAM, ensure that the correct DB2 language interface module is loaded dynamically by performing one of the following actions:
- Use the ATTACH(RRSAF) precompiler option.
- Copy the DSNRLI module into a load library that is concatenated in front of the DB2 libraries. Use the member name DSNHLI.

v To avoid truncating numeric values, use either of the following methods:
  - Use the COMP-5 data type for binary integer host variables.
  - Specify the COBOL compiler option:
    - TRUNC(OPT) if you are certain that the data being moved to each binary variable by the application does not have a larger precision than is defined in the PICTURE clause of the binary variable.
    - TRUNC(BIN) if the precision of data being moved to each binary variable might exceed the value in the PICTURE clause.
  DB2 assigns values to binary integer host variables as if you had specified the COBOL compiler option TRUNC(BIN) or used the COMP-5 data type.
v If a COBOL program contains several entry points or is called several times, the USING clause of the entry statement that executes before the first SQL statement executes must contain the SQLCA and all linkage section entries that any SQL statement uses as host variables.
v If you use the DB2 precompiler, the REPLACE statement has no effect on SQL statements. It affects only the COBOL statements that the precompiler generates. If you use the DB2 coprocessor, the REPLACE statement replaces text strings in SQL statements as well as in generated COBOL statements.
v If you use the DB2 precompiler, no compiler directives should appear between the PROCEDURE DIVISION and the DECLARATIVES statement.
v Do not use COBOL figurative constants (such as ZERO and SPACE), symbolic characters, reference modification, or subscripts within SQL statements.
v Observe the rules in Chapter 2 of DB2 SQL Reference when you name SQL identifiers. However, for COBOL only, the names of SQL identifiers can follow the rules for naming COBOL words, if the names do not exceed the allowable length for the DB2 object. For example, the name 1ST-TIME is a valid cursor name because it is a valid COBOL word, but the name 1ST_TIME is not valid because it is not a valid SQL identifier or a valid COBOL word.
v Observe these rules for hyphens:
  - Surround hyphens used as subtraction operators with spaces. DB2 usually interprets a hyphen with no spaces around it as part of a host variable name.
  - You can use hyphens in SQL identifiers under either of the following circumstances:
    - The application program is a local application that runs on DB2 UDB for OS/390 Version 6 or later.
    - The application program accesses remote sites, and the local site and remote sites are DB2 UDB for OS/390 Version 6 or later.
v If you include an SQL statement in a COBOL PERFORM ... THRU paragraph and also specify the SQL statement WHENEVER ... GO, the COBOL compiler returns the warning message IGYOP3094. That message might indicate a problem. This usage is not recommended.
v If you are using the DB2 precompiler and COBOL, the following additional restrictions apply:
  - All SQL statements and any host variables they reference must be within the first program when using nested programs or batch compilation.
  - DB2 COBOL programs must have a DATA DIVISION and a PROCEDURE DIVISION. Both divisions and the WORKING-STORAGE SECTION must be present in programs that contain SQL statements.
  - If your program uses parameters that are defined in the LINKAGE SECTION as host variables to DB2 and the address of the input parameter might change on subsequent invocations of your program, your program must reset the variable SQL-INIT-FLAG. This flag is generated by the DB2 precompiler. Resetting this flag indicates that the storage must be initialized when the next SQL statement executes. To reset the flag, insert the statement MOVE ZERO TO SQL-INIT-FLAG in the called program's PROCEDURE DIVISION, ahead of any executable SQL statements that use the host variables. If you use the COBOL DB2 coprocessor, the called program does not need to reset SQL-INIT-FLAG.
v A colon (:) must precede all host variables and all host variable arrays in an SQL statement.
v The names of host variables and host variable arrays should be unique within the source data set or member, even if the variables and variable arrays are in different blocks, classes, or procedures. You can qualify the names with a structure name to make them unique.
v An SQL statement that uses a host variable or host variable array must be within the scope of the statement that declares that variable or array.
v You define host variable arrays for use with multiple-row FETCH and INSERT statements (see the sketch after this list).
v You can specify OCCURS when defining an indicator structure, a host variable array, or an indicator variable array. You cannot specify OCCURS for any other type of host variable.
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. COMPUTATIONAL-1 and COMP-1 are equivalent.
3. COMPUTATIONAL-2 and COMP-2 are equivalent.
Figure 66 on page 193 shows the syntax for declarations of integer and small integer host variables.
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. The COBOL binary integer data types BINARY, COMPUTATIONAL, COMP, COMPUTATIONAL-4, and COMP-4 are equivalent.
3. COMPUTATIONAL-5 (and COMP-5) are equivalent to the other COBOL binary integer data types if you compile the other data types with TRUNC(BIN).
4. Any specification for scale is ignored.
Figure 67 shows the syntax for declarations of decimal host variables.
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. PACKED-DECIMAL, COMPUTATIONAL-3, and COMP-3 are equivalent. The picture-string that is associated with these types must have the form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9) or S9(i)V.
3. The picture-string that is associated with SIGN LEADING SEPARATE must have the form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9, or S9...9V with i instances of 9).

Character host variables: The three valid forms of character host variables are:
v Fixed-length strings
v Varying-length strings
v CLOBs
The following figures show the syntax for forms other than CLOBs. See Figure 74 on page 198 for the syntax of CLOBs. Figure 68 shows the syntax for declarations of fixed-length character host variables.
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. The picture-string that is associated with these forms must be X(m) (or XX...X, with m instances of X), with 1 <= m <= 32767 for fixed-length strings. However, the maximum length of the CHAR data type (fixed-length character string) in DB2 is 255 bytes.
Figure 69 on page 195 shows the syntax for declarations of varying-length character host variables.
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. DB2 uses the full length of the S9(4) BINARY variable even though COBOL with TRUNC(STD) only recognizes values up to 9999. This can cause data truncation errors when COBOL statements execute and might effectively limit the maximum length of variable-length character strings to 9999. Consider using the TRUNC(BIN) compiler option or USAGE COMP-5 to avoid data truncation.
3. For fixed-length strings, the picture-string must be X(m) (or XX...X, with m instances of X), with 1 <= m <= 32767; for other strings, m cannot be greater than the maximum size of a varying-length character string.
4. You cannot directly reference var-1 and var-2 as host variables.
5. You cannot use an intervening REDEFINE at level 49.

Graphic character host variables: The three valid forms for graphic character host variables are:
v Fixed-length strings
v Varying-length strings
v DBCLOBs
The following figures show the syntax for forms other than DBCLOBs. See Figure 74 on page 198 for the syntax of DBCLOBs.
Figure 70 shows the syntax for declarations of fixed-length graphic host variables.
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. For fixed-length strings, the picture-string is G(m) or N(m) (or m instances of GG...G or NN...N), with 1 <= m <= 127; for other strings, m cannot be greater than the maximum size of a varying-length graphic string.
3. Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string for USAGE NATIONAL, you must use N in place of G. USAGE NATIONAL is supported only through the DB2 coprocessor.
Figure 71 on page 197 shows the syntax for declarations of varying-length graphic host variables.
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. DB2 uses the full length of the S9(4) BINARY variable even though COBOL with TRUNC(STD) only recognizes values up to 9999. This can cause data truncation errors when COBOL statements execute and might effectively limit the maximum length of variable-length character strings to 9999. Consider using the TRUNC(BIN) compiler option or USAGE COMP-5 to avoid data truncation.
3. For fixed-length strings, the picture-string is G(m) or N(m) (or m instances of GG...G or NN...N), with 1 <= m <= 127; for other strings, m cannot be greater than the maximum size of a varying-length graphic string.
4. Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string for USAGE NATIONAL, you must use N in place of G. USAGE NATIONAL is supported only through the DB2 coprocessor.
5. You cannot directly reference var-1 and var-2 as host variables.

Result set locators: Figure 72 on page 198 shows the syntax for declarations of result set locators. See Chapter 25, Using stored procedures for client/server processing, on page 629 for a discussion of how to use these host variables.
Table Locators: Figure 73 shows the syntax for declarations of table locators. See Accessing transition tables in a user-defined function or stored procedure on page 345 for a discussion of how to use these host variables.
The declaration has the form 01 (or level-1) variable-name USAGE IS SQL TYPE IS TABLE LIKE table-name AS LOCATOR.
Note: level-1 indicates a COBOL level between 2 and 48.

LOB variables and locators: Figure 74 shows the syntax for declarations of BLOB, CLOB, and DBCLOB host variables and locators. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use these host variables.
The declaration has the form 01 (or level-1) variable-name USAGE IS SQL TYPE IS data-type, where data-type is BINARY LARGE OBJECT, BLOB, CHARACTER LARGE OBJECT, CHAR LARGE OBJECT, CLOB, DBCLOB, BLOB-LOCATOR, CLOB-LOCATOR, or DBCLOB-LOCATOR. For the LOB types (but not the locators), a length follows, optionally suffixed with K, M, or G.
Note: level-1 indicates a COBOL level between 2 and 48.

ROWIDs: Figure 75 shows the syntax for declarations of ROWID host variables. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use these host variables.
The declaration has the form 01 (or level-1) variable-name USAGE IS SQL TYPE IS ROWID.
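For illustration (the variable names and the CLOB length are arbitrary choices, not from the original figures), these declarations might look like:

 01  EMP-RESUME      USAGE IS SQL TYPE IS CLOB(500K).
 01  RESUME-LOCATOR  USAGE IS SQL TYPE IS CLOB-LOCATOR.
 01  EMP-ROW-ID      USAGE IS SQL TYPE IS ROWID.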
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. COMPUTATIONAL-1 and COMP-1 are equivalent.
3. COMPUTATIONAL-2 and COMP-2 are equivalent.
4. dimension must be an integer constant between 1 and 32767.
Figure 77 shows the syntax for declarations of integer and small integer host variable arrays.
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. The COBOL binary integer data types BINARY, COMPUTATIONAL, COMP, COMPUTATIONAL-4, and COMP-4 are equivalent.
3. COMPUTATIONAL-5 (and COMP-5) are equivalent to the other COBOL binary integer data types if you compile the other data types with TRUNC(BIN).
4. Any specification for scale is ignored.
5. dimension must be an integer constant between 1 and 32767.
Figure 78 shows the syntax for declarations of decimal host variable arrays.
Figure 78. Decimal host variable arrays

Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. PACKED-DECIMAL, COMPUTATIONAL-3, and COMP-3 are equivalent. The picture-string that is associated with these types must have the form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9) or S9(i)V.
3. The picture-string that is associated with SIGN LEADING SEPARATE must have the form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9, or S9...9V with i instances of 9).
4. dimension must be an integer constant between 1 and 32767.

Character host variable arrays: The three valid forms of character host variable arrays are:
v Fixed-length character strings
v Varying-length character strings
v CLOBs
The following figures show the syntax for forms other than CLOBs. See Figure 83 on page 205 for the syntax of CLOBs. Figure 79 on page 201 shows the syntax for declarations of fixed-length character string arrays.
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. The picture-string that is associated with these forms must be X(m) (or XX...X, with m instances of X), with 1 <= m <= 32767 for fixed-length strings. However, the maximum length of the CHAR data type (fixed-length character string) in DB2 is 255 bytes.
3. dimension must be an integer constant between 1 and 32767.
Figure 80 on page 202 shows the syntax for declarations of varying-length character string arrays.
Figure 80. Varying-length character string arrays

Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. DB2 uses the full length of the S9(4) BINARY variable even though COBOL with TRUNC(STD) recognizes only values up to 9999. This can cause data truncation errors when COBOL statements execute and might effectively limit the maximum length of variable-length character strings to 9999. Consider using the TRUNC(BIN) compiler option or USAGE COMP-5 to avoid data truncation.
3. The picture-string that is associated with these forms must be X(m) (or XX...X, with m instances of X), with 1 <= m <= 32767 for fixed-length strings; for other strings, m cannot be greater than the maximum size of a varying-length character string.
4. You cannot directly reference var-1 and var-2 as host variable arrays.
5. You cannot use an intervening REDEFINE at level 49.
6. dimension must be an integer constant between 1 and 32767.

Example: The following example shows declarations of a fixed-length character array and a varying-length character array:
01 OUTPUT-VARS.
   05 NAME OCCURS 10 TIMES.
      49 NAME-LEN  PIC S9(4) COMP-4 SYNC.
      49 NAME-DATA PIC X(40).
   05 SERIAL-NUMBER PIC S9(9) COMP-4 OCCURS 10 TIMES.
Graphic character host variable arrays: The three valid forms for graphic character host variable arrays are:
v Fixed-length strings
v Varying-length strings
v DBCLOBs
The following figures show the syntax for forms other than DBCLOBs. See Figure 83 on page 205 for the syntax of DBCLOBs. Figure 81 shows the syntax for declarations of fixed-length graphic string arrays.
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. For fixed-length strings, the picture-string is G(m) or N(m) (or m instances of GG...G or NN...N), with 1 <= m <= 127; for other strings, m cannot be greater than the maximum size of a varying-length graphic string.
3. Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string for USAGE NATIONAL, you must use N in place of G. USAGE NATIONAL is supported only through the DB2 coprocessor.
4. dimension must be an integer constant between 1 and 32767.
Figure 82 on page 204 shows the syntax for declarations of varying-length graphic string arrays.
Figure 82. Varying-length graphic string arrays

Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. DB2 uses the full length of the S9(4) BINARY variable even though COBOL with TRUNC(STD) recognizes only values up to 9999. This can cause data truncation errors when COBOL statements execute and might effectively limit the maximum length of variable-length character strings to 9999. Consider using the TRUNC(BIN) compiler option or USAGE COMP-5 to avoid data truncation.
3. For fixed-length strings, the picture-string is G(m) or N(m) (or m instances of GG...G or NN...N), with 1 <= m <= 127; for other strings, m cannot be greater than the maximum size of a varying-length graphic string.
4. Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string for USAGE NATIONAL, you must use N in place of G. USAGE NATIONAL is supported only through the DB2 coprocessor.
5. You cannot directly reference var-1 and var-2 as host variable arrays.
6. dimension must be an integer constant between 1 and 32767.

LOB variable arrays and locators: Figure 83 on page 205 shows the syntax for declarations of BLOB, CLOB, and DBCLOB host variable arrays and locators. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use LOB variables.
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. dimension must be an integer constant between 1 and 32767.

ROWIDs: Figure 84 shows the syntax for declarations of ROWID variable arrays. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use these host variables.
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. dimension must be an integer constant between 1 and 32767.
When you write an SQL statement using a qualified host variable name (perhaps to identify a field within a structure), use the name of the structure followed by a period and the name of the field. For example, specify B.C1 rather than C1 OF B or C1 IN B.
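For example (B, C1, TBL1, and COL1 are hypothetical names used only to illustrate the qualification rule):

     EXEC SQL
       SELECT COL1 INTO :B.C1 FROM TBL1
     END-EXEC.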
The precompiler does not recognize host variables or host structures on any subordinate levels after one of these items:
v A COBOL item that must begin in area A
v Any SQL statement (except SQL INCLUDE)
v Any SQL statement within an included member
When the precompiler encounters one of the preceding items in a host structure, it considers the structure to be complete. Figure 85 shows the syntax for declarations of host structures.
Figure 86 shows the syntax for numeric-usage items that are used within declarations of host structures.
Figure 87 on page 207 shows the syntax for integer and decimal usage items that are used within declarations of host structures.
Figure 88 shows the syntax for CHAR inner variables that are used within declarations of host structures.
Figure 89 on page 208 shows the syntax for VARCHAR inner variables that are used within declarations of host structures.
Figure 90 on page 209 shows the syntax for VARGRAPHIC inner variables that are used within declarations of host structures.
Notes:
1. For fixed-length strings, the picture-string is G(m) or N(m) (or m instances of GG...G or NN...N), with 1 <= m <= 127; for other strings, m cannot be greater than the maximum size of a varying-length graphic string.
2. Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string for USAGE NATIONAL, you must use N in place of G. USAGE NATIONAL is supported only through the DB2 coprocessor.
Figure 91 shows the syntax for LOB variables and locators that are used within declarations of host structures.
Notes:
1. level-1 indicates a COBOL level between 1 and 47.
2. level-2 indicates a COBOL level between 2 and 48.
3. For elements within a structure, use any level 02 through 48 (rather than 01 or 77), up to a maximum of two levels.
4. Using a FILLER or optional FILLER item within a host structure declaration can invalidate the whole structure.
Table 16. SQL data types the precompiler uses for COBOL declarations (continued)

COBOL data type                               SQLTYPE of      SQLLEN of       SQL data type
                                              host variable   host variable
USAGE IS SQL TYPE IS CLOB(n),                 408             n               CLOB(n)
  1<=n<=2147483647
USAGE IS SQL TYPE IS DBCLOB(m),               412             m               DBCLOB(m) (2)
  1<=m<=1073741823
SQL TYPE IS ROWID                             904             40              ROWID

Notes:
1. Do not use this data type as a column type.
2. m is the number of double-byte characters.
Table 17 helps you define host variables that receive output from the database. You can use the table to determine the COBOL data type that is equivalent to a given SQL data type. For example, if you retrieve TIMESTAMP data, you can use the table to define a suitable host variable in the program that receives the data value. Table 17 shows direct conversions between DB2 data types and host data types. However, a number of DB2 data types are compatible. When you do assignments or comparisons of data that have compatible data types, DB2 does conversions between those compatible data types. See Table 1 on page 5 for information on compatible data types.
Table 17. SQL data types mapped to typical COBOL declarations

SQL data type                  COBOL data type                                Notes
SMALLINT                       S9(4) COMP-4, S9(4) COMP-5,
                               S9(4) COMP, or S9(4) BINARY
INTEGER                        S9(9) COMP-4, S9(9) COMP-5,
                               S9(9) COMP, or S9(9) BINARY
DECIMAL(p,s) or NUMERIC(p,s)   S9(p-s)V9(s) COMP-3,                           p is precision; s is scale. 0<=s<=p<=31.
                               S9(p-s)V9(s) PACKED-DECIMAL,                   If s=0, use S9(p)V or S9(p). If s=p, use
                               S9(p-s)V9(s) DISPLAY SIGN LEADING SEPARATE, or SV9(s). If the COBOL compiler does not
                               S9(p-s)V9(s) NATIONAL SIGN LEADING SEPARATE    support 31-digit decimal numbers, no exact
                                                                              equivalent exists. Use COMP-2.
REAL or FLOAT(n)               COMP-1                                         1<=n<=21
DOUBLE PRECISION, DOUBLE,      COMP-2                                         22<=n<=53
or FLOAT(n)
CHAR(n)                        Fixed-length character string. For example,    1<=n<=255
                               01 VAR-NAME PIC X(n).
VARCHAR(n)                     Varying-length character string. For example,  The inner variables must have a level of 49.
                               01 VAR-NAME.
                                  49 VAR-LEN  PIC S9(4) USAGE BINARY.
                                  49 VAR-TEXT PIC X(n).
GRAPHIC(n)                     Fixed-length graphic string. For example,      n refers to the number of double-byte
                               01 VAR-NAME PIC G(n) USAGE IS DISPLAY-1.       characters, not to the number of bytes.
                                                                              1<=n<=127
VARGRAPHIC(n)                  Varying-length graphic string. For example,    n refers to the number of double-byte
                               01 VAR-NAME.                                   characters, not to the number of bytes.
                                  49 VAR-LEN  PIC S9(4) USAGE BINARY.         The inner variables must have a level of 49.
                                  49 VAR-TEXT PIC G(n) USAGE IS DISPLAY-1.
DATE                           Fixed-length character string of length n.     If you are using a date exit routine, n is
                               For example, 01 VAR-NAME PIC X(n).             determined by that routine. Otherwise, n
                                                                              must be at least 10.
TIME                           Fixed-length character string of length n.     If you are using a time exit routine, n is
                               For example, 01 VAR-NAME PIC X(n).             determined by that routine. Otherwise, n
                                                                              must be at least 6; to include seconds, n
                                                                              must be at least 8.
TIMESTAMP                      Fixed-length character string of length n.     n must be at least 19. To include
                               For example, 01 VAR-NAME PIC X(n).             microseconds, n must be 26; if n is less
                                                                              than 26, truncation occurs on the
                                                                              microseconds part.
Result set locator             SQL TYPE IS RESULT-SET-LOCATOR                 Use this data type only for receiving result
                                                                              sets. Do not use this data type as a column
                                                                              type.
Table locator                  SQL TYPE IS TABLE LIKE table-name AS LOCATOR   Use this data type only in a user-defined
                                                                              function or stored procedure to receive rows
                                                                              of a transition table. Do not use this data
                                                                              type as a column type.
BLOB locator                   USAGE IS SQL TYPE IS BLOB-LOCATOR              Use this data type only to manipulate data
                                                                              in BLOB columns. Do not use this data type
                                                                              as a column type.
CLOB locator                   USAGE IS SQL TYPE IS CLOB-LOCATOR              Use this data type only to manipulate data
                                                                              in CLOB columns. Do not use this data type
                                                                              as a column type.
DBCLOB locator                 USAGE IS SQL TYPE IS DBCLOB-LOCATOR            Use this data type only to manipulate data
                                                                              in DBCLOB columns. Do not use this data
                                                                              type as a column type.
BLOB(n)                        USAGE IS SQL TYPE IS BLOB(n)                   1<=n<=2147483647
CLOB(n)                        USAGE IS SQL TYPE IS CLOB(n)                   1<=n<=2147483647
DBCLOB(n)                      USAGE IS SQL TYPE IS DBCLOB(n)                 n is the number of double-byte characters.
                                                                              1<=n<=1073741823
ROWID                          SQL TYPE IS ROWID
Controlling the CCSID: IBM Enterprise COBOL for z/OS Version 3 Release 2 or later, and the DB2 coprocessor for the COBOL compiler, support:
v The NATIONAL data type that is used for declaring Unicode values in the UTF-16 format (that is, CCSID 1200)
v The COBOL CODEPAGE compiler option that is used to specify the default EBCDIC CCSID of character data items
You can use the NATIONAL data type and the CODEPAGE compiler option to control the CCSID of the character host variables in your application. For example, if you declare the host variable HV1 as USAGE NATIONAL, then DB2 handles HV1 as if you had used this DECLARE VARIABLE statement:
DECLARE :HV1 VARIABLE CCSID 1200
In addition, the COBOL DB2 coprocessor uses the CCSID that is specified in the CODEPAGE compiler option to indicate that all host variables of character data type, other than NATIONAL, are specified with that CCSID unless they are explicitly overridden by a DECLARE VARIABLE statement. Example: Assume that the COBOL CODEPAGE compiler option is specified as CODEPAGE(1234). The following code shows how you can control the CCSID:
DATA DIVISION.
01  HV1 PIC N(10) USAGE NATIONAL.
01  HV2 PIC X(20) USAGE DISPLAY.
01  HV3 PIC X(30) USAGE DISPLAY.
...
    EXEC SQL
      DECLARE :HV3 VARIABLE CCSID 1047
    END-EXEC.
...
PROCEDURE DIVISION.
...
    EXEC SQL
      SELECT C1, C2, C3 INTO :HV1, :HV2, :HV3 FROM T1
    END-EXEC.
The CCSID for each of these host variables is:
HV1   1200
HV2   1234
HV3   1047
SQL data types with no COBOL equivalent: If you are using a COBOL compiler that does not support decimal numbers of more than 18 digits, use one of the following data types to hold values of greater than 18 digits:
v A decimal variable with a precision less than or equal to 18, if the actual data values fit. If you retrieve a decimal value into a decimal variable with a scale that is less than the source column in the database, the fractional part of the value might be truncated.
v An integer or a floating-point variable, which converts the value. If you choose integer, you lose the fractional part of the number. If the decimal number might exceed the maximum value for an integer, or if you want to preserve a fractional value, you can use floating-point numbers. Floating-point numbers are approximations of real numbers. Therefore, when you assign a decimal number to a floating-point variable, the result might be different from the original number.
v A character-string host variable. Use the CHAR function to retrieve a decimal value into it (see the sketch after this section).

Special purpose COBOL data types: The locator data types are COBOL data types and SQL data types. You cannot use locators as column types. For information on how to use these data types, see the following sections:
Result set locator
   Chapter 25, Using stored procedures for client/server processing, on page 629
Table locator
   Accessing transition tables in a user-defined function or stored procedure on page 345
LOB locators
   Chapter 14, Programming for large objects, on page 297
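A minimal sketch of the character-string approach (the table T1, the DECIMAL(31,0) column BIGCOL, and the host variable length are assumptions used only for illustration):

 01  BIGCOL-CHAR PIC X(33).
 ...
     EXEC SQL
       SELECT CHAR(BIGCOL) INTO :BIGCOL-CHAR FROM T1
     END-EXEC.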
Level 77 data description entries: One or more REDEFINES entries can follow any level 77 data description entry. However, you cannot use the names in these entries in SQL statements. Entries with the name FILLER are ignored.

SMALLINT and INTEGER data types: In COBOL, you declare the SMALLINT and INTEGER data types as a number of decimal digits. DB2 uses the full size of the integers (in a way that is similar to processing with the TRUNC(BIN) compiler option) and can place larger values in the host variable than would be allowed in the specified number of digits in the COBOL declaration. If you compile with TRUNC(OPT) or TRUNC(STD), ensure that the size of numbers in your application is within the declared number of digits. For small integers that can exceed 9999, use S9(4) COMP-5 or compile with TRUNC(BIN). For large integers that can exceed 999 999 999, use S9(10) COMP-3 to obtain the decimal data type. If you use COBOL for integers that exceed the COBOL PICTURE, specify the column as decimal to ensure that the data types match and perform well.

Overflow: Be careful of overflow. For example, suppose you retrieve an INTEGER column value into a PICTURE S9(4) host variable and the column value is larger than 32767 or smaller than -32768. You get an overflow warning or an error, depending on whether you specify an indicator variable.

VARCHAR and VARGRAPHIC data types: If your varying-length character host variables receive values whose length is greater than 9999 characters, compile the applications in which you use those host variables with the option TRUNC(BIN). TRUNC(BIN) lets the length field for the character string receive a value of up to 32767.

Truncation: Be careful of truncation. For example, if you retrieve an 80-character CHAR column value into a PICTURE X(70) host variable, the rightmost 10 characters of the retrieved string are truncated. Retrieving a double precision floating-point or decimal column value into a PIC S9(8) COMP host variable removes any fractional part of the value. Similarly, retrieving a column value with DECIMAL data type into a COBOL decimal variable with a lower precision might truncate the value.
v Numeric data types are compatible with each other. See Table 17 on page 211 for the COBOL data types that are compatible with the SQL data types SMALLINT, INTEGER, DECIMAL, REAL, and DOUBLE PRECISION.
v Character data types are compatible with each other. A CHAR, VARCHAR, or CLOB column is compatible with a fixed-length or varying-length COBOL character host variable.
v Character data types are partially compatible with CLOB locators. You can perform the following assignments:
  - Assign a value in a CLOB locator to a CHAR or VARCHAR column
  - Use a SELECT INTO statement to assign a CHAR or VARCHAR column to a CLOB locator host variable.
  - Assign a CHAR or VARCHAR output parameter from a user-defined function or stored procedure to a CLOB locator host variable.
  - Use a SET assignment statement to assign a CHAR or VARCHAR transition variable to a CLOB locator host variable.
  - Use a VALUES INTO statement to assign a CHAR or VARCHAR function parameter to a CLOB locator host variable.
  However, you cannot use a FETCH statement to assign a value in a CHAR or VARCHAR column to a CLOB locator host variable.
v Graphic data types are compatible with each other. A GRAPHIC, VARGRAPHIC, or DBCLOB column is compatible with a fixed-length or varying-length COBOL graphic string host variable.
v Graphic data types are partially compatible with DBCLOB locators. You can perform the following assignments:
  - Assign a value in a DBCLOB locator to a GRAPHIC or VARGRAPHIC column
  - Use a SELECT INTO statement to assign a GRAPHIC or VARGRAPHIC column to a DBCLOB locator host variable.
  - Assign a GRAPHIC or VARGRAPHIC output parameter from a user-defined function or stored procedure to a DBCLOB locator host variable.
  - Use a SET assignment statement to assign a GRAPHIC or VARGRAPHIC transition variable to a DBCLOB locator host variable.
  - Use a VALUES INTO statement to assign a GRAPHIC or VARGRAPHIC function parameter to a DBCLOB locator host variable.
  However, you cannot use a FETCH statement to assign a value in a GRAPHIC or VARGRAPHIC column to a DBCLOB locator host variable.
v Datetime data types are compatible with character host variables. A DATE, TIME, or TIMESTAMP column is compatible with a fixed-length or varying-length COBOL character host variable.
v A BLOB column or a BLOB locator is compatible only with a BLOB host variable.
v The ROWID column is compatible only with a ROWID host variable.
v A host variable is compatible with a distinct type if the host variable type is compatible with the source type of the distinct type. For information on assigning and comparing distinct types, see Chapter 16, Creating and using distinct types, on page 367.
When necessary, DB2 automatically converts a fixed-length string to a varying-length string, or a varying-length string to a fixed-length string.
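For example, the partial compatibility of character types with CLOB locators means that a singleton SELECT can place a VARCHAR column value directly into a CLOB locator host variable, while a cursor FETCH of the same column into that locator is not allowed. The following sketch shows only the allowed SQL statement form; the table, column, and host variable names are hypothetical, and the host variable is assumed to be declared in the program as a CLOB locator (SQL TYPE IS CLOB-LOCATOR).

   SELECT RESUME_TEXT
     INTO :RESUME-LOC
     FROM EMP_RESUME_TBL
    WHERE EMPNO = :HV-EMPNO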
Figure 92 on page 217 shows the syntax for declarations of indicator variables.
01|77 variable-name PICTURE|PIC S9(4)|S9999 [USAGE [IS]] BINARY|COMPUTATIONAL-4|COMP-4|COMPUTATIONAL-5|COMP-5|COMPUTATIONAL|COMP [VALUE [IS] constant].
Declaring indicator variable arrays: Figure 93 shows the syntax for valid indicator array declarations.
level-1 variable-name PICTURE|PIC S9(4)|S9999 [USAGE [IS]] BINARY|COMPUTATIONAL-4|COMP-4|COMPUTATIONAL-5|COMP-5|COMPUTATIONAL|COMP OCCURS dimension [TIMES] [VALUE [IS] constant].

Notes:
1. level-1 must be an integer between 2 and 48.
2. dimension must be an integer constant between 1 and 32767.
DSNTIAR syntax:

  CALL DSNTIAR USING sqlca message lrecl.

The DSNTIAR parameters have the following meanings:

sqlca
  An SQL communication area.

message
  An output area, in VARCHAR format, in which DSNTIAR places the message text. The first halfword contains the length of the remaining area; its minimum value is 240. The output lines of text, each line being the length specified in lrecl, are put into this area. For example, you could specify the format of the output area as:
01 ERROR-MESSAGE.
   02 ERROR-LEN   PIC S9(4)  COMP VALUE +1320.
   02 ERROR-TEXT  PIC X(132) OCCURS 10 TIMES
                  INDEXED BY ERROR-INDEX.
77 ERROR-TEXT-LEN PIC S9(9)  COMP VALUE +132.
where ERROR-MESSAGE is the name of the message output area containing 10 lines of length 132 each, and ERROR-TEXT-LEN is the length of each line.

lrecl
  A fullword containing the logical record length of output messages, between 72 and 240.

An example of calling DSNTIAR from an application appears in the DB2 sample assembler program DSN8BC3, which is contained in the library DSN8810.SDSNSAMP. See Appendix B, Sample applications, on page 1013 for instructions on how to access and print the source code for the sample program.
CICS

If you call DSNTIAR dynamically from a CICS COBOL application program, be sure you do the following:
v Compile the COBOL application with the NODYNAM option.
v Define DSNTIAR in the CSD.

If your CICS application requires CICS storage handling, you must use the subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC USING eib commarea sqlca msg lrecl.
DSNTIAC has extra parameters, which you must use for calls to routines that use CICS commands.

eib
  EXEC interface block

commarea
  communication area
For more information on these parameters, see the appropriate application programming guide for CICS. The remaining parameter descriptions are the same as those for DSNTIAR. Both DSNTIAC and DSNTIAR format the SQLCA in the same way. You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you must also define them in the CSD. For an example of CSD entry generation statements for use with DSNTIAC, see job DSNTEJ5A. The assembler source code for DSNTIAC and job DSNTEJ5A, which assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
Rules for host variables: You can declare COBOL variables that are used as host variables in the WORKING-STORAGE SECTION or LINKAGE SECTION of a program, class, or method. You can also declare host variables in the LOCAL-STORAGE SECTION of a method. The scope of a host variable is the method, class, or program within which it is defined.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE statement and Appendix D of DB2 SQL Reference for a complete description of SQLCA fields.
v DESCRIBE CURSOR host-variable INTO descriptor-name
v DESCRIBE INPUT statement-name INTO descriptor-name
v DESCRIBE PROCEDURE host-variable INTO descriptor-name
v DESCRIBE TABLE host-variable INTO descriptor-name
v EXECUTE...USING DESCRIPTOR descriptor-name
v FETCH...USING DESCRIPTOR descriptor-name
v OPEN...USING DESCRIPTOR descriptor-name
v PREPARE...INTO descriptor-name
Unlike the SQLCA, a program can have more than one SQLDA, and an SQLDA can have any valid name. DB2 does not support the INCLUDE SQLDA statement for Fortran programs; if an INCLUDE SQLDA statement is present, an error message results.

A Fortran program can call a subroutine (written in C, PL/I, or assembler language) that uses the INCLUDE SQLDA statement to define the SQLDA and that also includes the necessary SQL statements for the dynamic SQL functions you want to perform. See Chapter 24, Coding dynamic SQL in application programs, on page 593 for more information about dynamic SQL.

You must place SQLDA declarations before the first SQL statement that references the data descriptor.
You cannot follow an SQL statement with another SQL statement or Fortran statement on the same line. Fortran does not require blanks to delimit words within a statement, but the SQL language requires blanks. The rules for embedded SQL follow the rules for SQL syntax, which require you to use one or more blanks as a delimiter.

Comments: You can include Fortran comment lines within embedded SQL statements wherever you can use a blank, except between the keywords EXEC and SQL. You can include SQL comments in any embedded SQL statement. The DB2 precompiler does not support the exclamation point (!) as a comment recognition character in Fortran programs.

Continuation for SQL statements: The line continuation rules for SQL statements are the same as those for Fortran statements, except that you must specify EXEC
SQL on one line. The SQL examples in this section have Cs in the sixth column to indicate that they are continuations of EXEC SQL.

Declaring tables and views: Your Fortran program should also include the DECLARE TABLE statement to describe each table and view the program accesses.

Dynamic SQL in a Fortran program: In general, Fortran programs can easily handle dynamic SQL statements. SELECT statements can be handled if the data types and the number of returned fields are fixed. If you want to use variable-list SELECT statements, you need to use an SQLDA, as described in Defining SQL descriptor areas on page 220. You can use a Fortran character variable in the statements PREPARE and EXECUTE IMMEDIATE, even if it is fixed-length.

Including code: To include SQL statements or Fortran host variable declarations from a member of a partitioned data set, use the following SQL statement in the source code where you want to include the statements:
EXEC SQL INCLUDE member-name
You cannot nest SQL INCLUDE statements. You cannot use the Fortran INCLUDE compiler directive to include SQL statements or Fortran host variable declarations.

Margins: Code the SQL statements between columns 7 through 72, inclusive. If EXEC SQL starts before the specified left margin, the DB2 precompiler does not recognize the SQL statement.

Names: You can use any valid Fortran name for a host variable. Do not use external entry names that begin with DSN or host variable names that begin with SQL. These names are reserved for DB2. Do not use the word DEBUG, except when defining a Fortran DEBUG packet. Do not use the words FUNCTION, IMPLICIT, PROGRAM, and SUBROUTINE to define variables.

Sequence numbers: The source statements that the DB2 precompiler generates do not include sequence numbers.

Statement labels: You can specify statement numbers for SQL statements in columns 1 to 5. However, during program preparation, a labeled SQL statement generates a Fortran CONTINUE statement with that label before it generates the code that executes the SQL statement. Therefore, a labeled SQL statement should never be the last statement in a DO loop. In addition, you should not label SQL statements (such as INCLUDE and BEGIN DECLARE SECTION) that occur before the first executable SQL statement, because an error might occur.

WHENEVER statement: The target for the GOTO clause in the SQL WHENEVER statement must be a label in the Fortran source code and must refer to a statement in the same subprogram. The WHENEVER statement only applies to SQL statements in the same subprogram.

Special Fortran considerations: The following considerations apply to programs written in Fortran:
v You cannot use the @PROCESS statement in your source code. Instead, specify the compiler options in the PARM field.
v You cannot use the SQL INCLUDE statement to include the following statements: PROGRAM, SUBROUTINE, BLOCK, FUNCTION, or IMPLICIT.

DB2 supports Version 3 Release 1 (or later) of VS Fortran with the following restrictions:
v The parallel option is not supported. Applications that contain SQL statements must not use Fortran parallelism.
v You cannot use the byte data type within embedded SQL, because byte is not a recognizable host data type.
Character host variables: Figure 95 shows the syntax for declarations of character host variables other than CLOBs. See Figure 97 for the syntax of CLOBs.
Result set locators: Figure 96 shows the syntax for declarations of result set locators. See Chapter 25, Using stored procedures for client/server processing, on page 629 for a discussion of how to use these host variables.
LOB Variables and Locators: Figure 97 shows the syntax for declarations of BLOB and CLOB host variables and locators. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use these host variables.
SQL TYPE IS { BINARY LARGE OBJECT | BLOB | CHARACTER LARGE OBJECT | CHAR LARGE OBJECT | CLOB } ( length [ K | M | G ] )
            | { BLOB_LOCATOR | CLOB_LOCATOR }
  variable-name
ROWIDs: Figure 98 on page 225 shows the syntax for declarations of ROWID variables. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use these host variables.
Table 19 helps you define host variables that receive output from the database. You can use the table to determine the Fortran data type that is equivalent to a given SQL data type. For example, if you retrieve TIMESTAMP data, you can use the table to define a suitable host variable in the program that receives the data value. Table 19 shows direct conversions between DB2 data types and host data types. However, a number of DB2 data types are compatible. When you do assignments or comparisons of data that have compatible data types, DB2 does conversions between those compatible data types. See Table 1 on page 5 for information on compatible data types.
Table 19. SQL data types mapped to typical Fortran declarations

  SQL data type                 Fortran equivalent         Notes
  SMALLINT                      INTEGER*2
  INTEGER                       INTEGER*4
  DECIMAL(p,s) or NUMERIC(p,s)  no exact equivalent        Use REAL*8.
  FLOAT(n) single precision     REAL*4                     1<=n<=21
  FLOAT(n) double precision     REAL*8                     22<=n<=53
  CHAR(n)                       CHARACTER*n                1<=n<=255
  VARCHAR(n)                    no exact equivalent        Use a character host variable that is large
                                                           enough to contain the largest expected
                                                           VARCHAR value.
  GRAPHIC(n)                    not supported
  VARGRAPHIC(n)                 not supported
  DATE                          CHARACTER*n                If you are using a date exit routine, n is
                                                           determined by that routine; otherwise, n must
                                                           be at least 10.
  TIME                          CHARACTER*n                If you are using a time exit routine, n is
                                                           determined by that routine. Otherwise, n must
                                                           be at least 6; to include seconds, n must be
                                                           at least 8.
  TIMESTAMP                     CHARACTER*n                n must be at least 19. To include
                                                           microseconds, n must be 26; if n is less than
                                                           26, truncation occurs on the microseconds
                                                           part.
  Result set locator            SQL TYPE IS                Use this data type only for receiving result
                                RESULT_SET_LOCATOR         sets. Do not use this data type as a column
                                                           type.
  BLOB locator                  SQL TYPE IS BLOB_LOCATOR   Use this data type only to manipulate data in
                                                           BLOB columns. Do not use this data type as a
                                                           column type.
  CLOB locator                  SQL TYPE IS CLOB_LOCATOR   Use this data type only to manipulate data in
                                                           CLOB columns. Do not use this data type as a
                                                           column type.
  DBCLOB locator                not supported
  BLOB(n)                       SQL TYPE IS BLOB(n)        1<=n<=2147483647
  CLOB(n)                       SQL TYPE IS CLOB(n)        1<=n<=2147483647
  DBCLOB(n)                     not supported
  ROWID                         SQL TYPE IS ROWID
v An integer or floating-point variable, which converts the value. If you choose integer, however, you lose the fractional part of the number. If the decimal number can exceed the maximum value for an integer, or you want to preserve a fractional value, you can use floating-point numbers. Floating-point numbers are approximations of real numbers. When you assign a decimal number to a floating-point variable, the result could be different from the original number.
v A character string host variable. Use the CHAR function to retrieve a decimal value into it.

Special-purpose Fortran data types: The locator data types are Fortran data types and SQL data types. You cannot use locators as column types. For information on how to use these data types, see the following sections:

Result set locator
  Chapter 25, Using stored procedures for client/server processing, on page 629
LOB locators
  Chapter 14, Programming for large objects, on page 297
Overflow: Be careful of overflow. For example, if you retrieve an INTEGER column value into an INTEGER*2 host variable and the column value is larger than 32767 or smaller than -32768, you get an overflow warning or an error, depending on whether you provided an indicator variable.

Truncation: Be careful of truncation. For example, if you retrieve an 80-character CHAR column value into a CHARACTER*70 host variable, the rightmost ten characters of the retrieved string are truncated. Retrieving a double-precision floating-point or decimal column value into an INTEGER*4 host variable removes any fractional value.

Processing Unicode data: Because Fortran does not support graphic data types, Fortran applications can process only Unicode tables that use UTF-8 encoding.
v Character data types are partially compatible with CLOB locators. You can perform the following assignments:
  - Assign a value in a CLOB locator to a CHAR or VARCHAR column
  - Use a SELECT INTO statement to assign a CHAR or VARCHAR column to a CLOB locator host variable.
  - Assign a CHAR or VARCHAR output parameter from a user-defined function or stored procedure to a CLOB locator host variable.
  - Use a SET assignment statement to assign a CHAR or VARCHAR transition variable to a CLOB locator host variable.
  - Use a VALUES INTO statement to assign a CHAR or VARCHAR function parameter to a CLOB locator host variable.
  However, you cannot use a FETCH statement to assign a value in a CHAR or VARCHAR column to a CLOB locator host variable.
v Datetime data types are compatible with character host variables. A DATE, TIME, or TIMESTAMP column is compatible with a Fortran character host variable.
v A BLOB column or a BLOB locator is compatible only with a BLOB host variable.
v The ROWID column is compatible only with a ROWID host variable.
v A host variable is compatible with a distinct type if the host variable type is compatible with the source type of the distinct type. For information on assigning and comparing distinct types, see Chapter 16, Creating and using distinct types, on page 367.
Figure 99 shows the syntax for declarations of indicator variables.
DSNTIR syntax:

  CALL DSNTIR ( error-length, message, return-code )

The DSNTIR parameters have the following meanings:

error-length
  The total length of the message output area.

message
  An output area, in VARCHAR format, in which DSNTIAR places the message text. The first halfword contains the length of the remaining area; its minimum value is 240. The output lines of text are put into this area. For example, you could specify the format of the output area as:
      INTEGER ERRLEN /1320/
      CHARACTER*132 ERRTXT(10)
      INTEGER ICODE
      .
      .
      .
      CALL DSNTIR ( ERRLEN, ERRTXT, ICODE )
where ERRLEN is the total length of the message output area, ERRTXT is the name of the message output area, and ICODE is the return code.

return-code
  Accepts a return code from DSNTIAR.

An example of calling DSNTIR (which then calls DSNTIAR) from an application appears in the DB2 sample assembler program DSN8BF3, which is contained in the
library DSN8810.SDSNSAMP. See Appendix B, Sample applications, on page 1013 for instructions on how to access and print the source code for the sample program.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE statement and Appendix D of DB2 SQL Reference for a complete description of SQLCA fields.
v DESCRIBE TABLE host-variable INTO descriptor-name
v EXECUTE ... USING DESCRIPTOR descriptor-name
v FETCH ... USING DESCRIPTOR descriptor-name
v OPEN ... USING DESCRIPTOR descriptor-name
v PREPARE ... INTO descriptor-name
Unlike the SQLCA, a program can have more than one SQLDA, and an SQLDA can have any valid name. You can code an SQLDA in a PL/I program, either directly or by using the SQL INCLUDE statement. Using the SQL INCLUDE statement requests a standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA;
You must declare an SQLDA before the first SQL statement that references that data descriptor, unless you use the precompiler option TWOPASS. See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE statement and Appendix E of DB2 SQL Reference for a complete description of SQLDA fields.
Comments: You can include PL/I comments in embedded SQL statements wherever you can use a blank, except between the keywords EXEC and SQL. You can also include SQL comments in any SQL statement. To include DBCS characters in comments, you must delimit the characters by a shift-out and shift-in control character; the first shift-in character in the DBCS string signals the end of the DBCS string.

Continuation for SQL statements: The line continuation rules for SQL statements are the same as those for other PL/I statements, except that you must specify EXEC SQL on one line.

Declaring tables and views: Your PL/I program should include a DECLARE TABLE statement to describe each table and view the program accesses. You can use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE statements. For more information, see Chapter 8, Generating declarations for your tables using DCLGEN, on page 131.

Including code: You can use SQL statements or PL/I host variable declarations from a member of a partitioned data set by using the following SQL statement in the source code where you want to include the statements:
EXEC SQL INCLUDE member-name;
You cannot nest SQL INCLUDE statements. Do not use the PL/I %INCLUDE statement to include SQL statements or host variable DCL statements. You must use the PL/I preprocessor to resolve any %INCLUDE statements before you use the DB2 precompiler. Do not use PL/I preprocessor directives within SQL statements.

Margins: Code SQL statements in columns 2 through 72, unless you have specified other margins to the DB2 precompiler. If EXEC SQL starts before the specified left margin, the DB2 precompiler does not recognize the SQL statement.

Names: You can use any valid PL/I name for a host variable. Do not use external entry names or access plan names that begin with DSN, and do not use host variable names that begin with SQL. These names are reserved for DB2.

Sequence numbers: The source statements that the DB2 precompiler generates do not include sequence numbers. IEL0378I messages from the PL/I compiler identify lines of code without sequence numbers. You can ignore these messages.

Statement labels: You can specify a statement label for executable SQL statements. However, the INCLUDE text-file-name and END DECLARE SECTION statements cannot have statement labels.

WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER statement must be a label in the PL/I source code and must be within the scope of any SQL statements that WHENEVER affects.

Using double-byte character set (DBCS) characters: The following considerations apply to using DBCS in PL/I programs with SQL statements:
v If you use DBCS in the PL/I source, DB2 rules for the following language elements apply:
  - Graphic strings
  - Graphic string constants
  - Host identifiers
  - Mixed data in character strings
  - MIXED DATA option
  See Chapter 2 of DB2 SQL Reference for detailed information about these language elements.
v The PL/I preprocessor transforms the format of DBCS constants. If you do not want that transformation, run the DB2 precompiler before the preprocessor.
v If you use graphic string constants or mixed data in dynamically prepared SQL statements, and if your application requires the PL/I Version 2 (or later) compiler, the dynamically prepared statements must use the PL/I mixed constant format. If you prepare the statement from a host variable, change the string assignment to a PL/I mixed string. If you prepare the statement from a PL/I string, change that to a host variable, and then change the string assignment to a PL/I mixed string. Example:
SQLSTMT = 'SELECT <dbdb> FROM table-name'M;
EXEC SQL PREPARE STMT FROM :SQLSTMT;
For instructions on preparing SQL statements dynamically, see Chapter 24, Coding dynamic SQL in application programs, on page 593.
v If you want a DBCS identifier to resemble a PL/I graphic string, you must use a delimited identifier.
v If you include DBCS characters in comments, you must delimit the characters with a shift-out and shift-in control character. The first shift-in character signals the end of the DBCS string.
v You can declare host variable names that use DBCS characters in PL/I application programs. The rules for using DBCS variable names in PL/I follow existing rules for DBCS SQL ordinary identifiers, except for length. The maximum length for a host variable is 128 Unicode bytes in DB2. See Chapter 2 of DB2 SQL Reference for the rules for DBCS SQL ordinary identifiers.
  Restrictions: DBCS variable names must contain DBCS characters only. Mixing single-byte character set (SBCS) characters with DBCS characters in a DBCS variable name produces unpredictable results. A DBCS variable name cannot continue to the next line.
v The PL/I preprocessor changes non-Kanji DBCS characters into extended binary coded decimal interchange code (EBCDIC) SBCS characters. To avoid this change, use Kanji DBCS characters for DBCS variable names, or run the PL/I compiler without the PL/I preprocessor.

Special PL/I considerations: The following considerations apply to programs written in PL/I:
v When compiling a PL/I program that includes SQL statements, you must use the PL/I compiler option CHARSET (60 EBCDIC).
v In unusual cases, the generated comments in PL/I can contain a semicolon. The semicolon generates compiler message IEL0239I, which you can ignore.
v The generated code in a PL/I declaration can contain the ADDR function of a field defined as character varying. This produces either message IBM1051I I or IBM1180I W, both of which you can ignore.
v The precompiler-generated code in PL/I source can contain the NULL() function. This produces message IEL0533I, which you can ignore unless you also use NULL as a PL/I variable. If you use NULL as a PL/I variable in a DB2 application, you must also declare NULL as a built-in function (DCL NULL BUILTIN;) to avoid PL/I compiler errors.
v The PL/I macro processor can generate SQL statements or host variable DCL statements if you run the macro processor before running the DB2 precompiler. If you use the PL/I macro processor, do not use the PL/I *PROCESS statement in the source to pass options to the PL/I compiler. You can specify the needed options on the COPTION parameter of the DSNH command or the option PARM.PLI=options of the EXEC statement in the DSNHPLI procedure.
v Using the PL/I multitasking facility, in which multiple tasks execute SQL statements, causes unpredictable results. See the RUN(DSN) command in Part 3 of DB2 Command Reference.
You can precede PL/I statements that define the host variables and host variable arrays with the BEGIN DECLARE SECTION statement, and follow the statements with the END DECLARE SECTION statement. You must use the BEGIN DECLARE SECTION and END DECLARE SECTION statements when you use the precompiler option STDSQL(YES).

A colon (:) must precede all host variables and host variable arrays in an SQL statement, with the following exception. If the SQL statement meets the following conditions, a host variable or host variable array in the SQL statement cannot be preceded by a colon:
v The SQL statement is an EXECUTE IMMEDIATE or PREPARE statement.
v The SQL statement is in a program that also contains a DECLARE VARIABLE statement.
v The host variable is part of a string expression, but the host variable is not the only component of the string expression.

The names of host variables and host variable arrays should be unique within the program, even if the variables and variable arrays are in different blocks or procedures. You can qualify the names with a structure name to make them unique.

An SQL statement that uses a host variable or host variable array must be within the scope of the statement that declares that variable or array.

You define host variable arrays for use with multiple-row FETCH and multiple-row INSERT statements.
DECLARE|DCL variable-name | ( variable-name ,... )
  BINARY|BIN FIXED ( precision [,scale] ) | BINARY|BIN FLOAT ( precision ) |
  DECIMAL|DEC FIXED ( precision [,scale] ) | DECIMAL|DEC FLOAT ( precision )
  [ Alignment and/or Scope and/or Storage attributes ] ;
Notes:
1. You can specify host variable attributes in any order that is acceptable to PL/I. For example, BIN FIXED(31), BINARY FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all acceptable.
2. You can specify a scale only for DECIMAL FIXED.

Character host variables: Figure 101 shows the syntax for declarations of character host variables, other than CLOBs. See Figure 105 on page 236 for the syntax of CLOBs.
DECLARE|DCL variable-name | ( variable-name ,... )
  CHARACTER|CHAR ( length ) [ VARYING|VARY ] ;
Graphic host variables: Figure 102 shows the syntax for declarations of graphic host variables, other than DBCLOBs. See Figure 105 on page 236 for the syntax of DBCLOBs.
DECLARE|DCL variable-name | ( variable-name ,... )
  GRAPHIC ( length ) [ VARYING|VARY ] ;
Result set locators: Figure 103 on page 236 shows the syntax for declarations of result set locators. See Chapter 25, Using stored procedures for client/server processing, on page 629 for a discussion of how to use these host variables.
DECLARE|DCL variable-name | ( variable-name ,... )
  SQL TYPE IS RESULT_SET_LOCATOR VARYING ;
Table locators: Figure 104 shows the syntax for declarations of table locators. See Accessing transition tables in a user-defined function or stored procedure on page 345 for a discussion of how to use these host variables.
DCL|DECLARE variable-name | ( variable-name ,... )
  SQL TYPE IS TABLE LIKE table-name AS LOCATOR ;
LOB variables and locators: Figure 105 shows the syntax for declarations of BLOB, CLOB, and DBCLOB host variables and locators. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use these host variables. A single PL/I declaration that contains a LOB variable declaration is limited to no more than 1000 lines of source code.
DCL|DECLARE variable-name | ( variable-name ,... )
  SQL TYPE IS
    { BINARY LARGE OBJECT | BLOB | CHARACTER LARGE OBJECT | CHAR LARGE OBJECT | CLOB | DBCLOB } ( length [ K | M | G ] )
    | { BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR } ;
Note: Variable attributes such as STATIC and AUTOMATIC are ignored if specified on a LOB variable declaration. ROWIDs: Figure 106 on page 237 shows the syntax for declarations of ROWID host variables. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use these host variables.
DCL|DECLARE variable-name | ( variable-name ,... )
  SQL TYPE IS ROWID ;
DECLARE|DCL variable-name ( dimension ) | ( variable-name ( dimension ) ,... )
  BINARY|BIN FIXED ( precision [,scale] ) | BINARY|BIN FLOAT ( precision ) |
  DECIMAL|DEC FIXED ( precision [,scale] ) | DECIMAL|DEC FLOAT ( precision ) ;

Notes:
1. You can specify host variable array attributes in any order that is acceptable to PL/I. For example, BIN FIXED(31), BINARY FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all acceptable.
2. You can specify the scale for only DECIMAL FIXED.
3. dimension must be an integer constant between 1 and 32767.

Example: The following example shows a declaration of an indicator array:
DCL IND_ARRAY(100) BIN FIXED(15); /* DCL ARRAY of 100 indicator variables */
Character host variable arrays: Figure 108 shows the syntax for declarations of character host variable arrays, other than CLOBs. See Figure 110 on page 239 for the syntax of CLOBs.
DECLARE|DCL variable-name ( dimension ) | ( variable-name ( dimension ) ,... )
  CHARACTER|CHAR ( length ) [ VARYING|VARY ] ;

Figure 108. Character host variable arrays

Notes:
1. dimension must be an integer constant between 1 and 32767.

Example: The following example shows the declarations needed to retrieve 10 rows of the department number and name from the department table:
DCL DEPTNO(10)   CHAR(3);       /* Array of ten CHAR(3) variables     */
DCL DEPTNAME(10) CHAR(29) VAR;  /* Array of ten VARCHAR(29) variables */
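Host variable arrays like these are used with multiple-row FETCH statements. The following is a minimal sketch of the corresponding SQL; it assumes a cursor C1 that is declared WITH ROWSET POSITIONING over the department table and fetches a rowset of ten rows into the arrays declared above.

  FETCH NEXT ROWSET FROM C1 FOR 10 ROWS
    INTO :DEPTNO, :DEPTNAME;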
Graphic host variable arrays: Figure 109 on page 239 shows the syntax for declarations of graphic host variable arrays, other than DBCLOBs. See Figure 110 on page 239 for the syntax of DBCLOBs.
DECLARE|DCL variable-name ( dimension ) | ( variable-name ( dimension ) ,... )
  GRAPHIC ( length ) [ VARYING|VARY ] ;

Notes:
1. dimension must be an integer constant between 1 and 32767.

LOB variable arrays and locators: Figure 110 shows the syntax for declarations of BLOB, CLOB, and DBCLOB host variable arrays and locators. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use these host variables.
DCL|DECLARE variable-name ( dimension ) | ( variable-name ( dimension ) ,... )
  SQL TYPE IS
    { BINARY LARGE OBJECT | BLOB | CHARACTER LARGE OBJECT | CHAR LARGE OBJECT | CLOB | DBCLOB } ( length [ K | M | G ] )
    | { BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR } ;
Notes:
1. dimension must be an integer constant between 1 and 32767.

ROWIDs: Figure 111 on page 240 shows the syntax for declarations of ROWID variable arrays. See Chapter 14, Programming for large objects, on page 297 for a discussion of how to use these host variables.
DCL|DECLARE variable-name ( dimension ) | ( variable-name ( dimension ) ,... )
  SQL TYPE IS ROWID ;

Figure 111. ROWID variable arrays

Notes:
1. dimension must be an integer constant between 1 and 32767.
In this example, B is the name of a host structure consisting of the scalars C1 and C2. You can use the structure name as shorthand notation for a list of scalars. You can qualify a host variable with a structure name (for example, STRUCTURE.FIELD). Host structures are limited to two levels. You can think of a host structure for DB2 data as a named group of host variables. You must terminate the host structure variable by ending the declaration with a semicolon. For example:
DCL 1 A,
      2 B CHAR,
      2 (C, D) CHAR;
DCL (E, F) CHAR;
You can specify host variable attributes in any order that is acceptable to PL/I. For example, BIN FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all acceptable. Figure 112 on page 241 shows the syntax for declarations of host structures.
DECLARE|DCL 1 variable-name ,
  2 variable-name data-type-specification , ... ;
Figure 113 shows the syntax for data types that are used within declarations of host structures.
BINARY|BIN FIXED ( precision [,scale] ) | BINARY|BIN FLOAT ( precision ) |
DECIMAL|DEC FIXED ( precision [,scale] ) | DECIMAL|DEC FLOAT ( precision ) |
CHARACTER|CHAR [ ( integer ) ] [ VARYING|VARY ] |
GRAPHIC [ ( integer ) ] [ VARYING|VARY ]
Figure 114 shows the syntax for LOB data types that are used within declarations of host structures.
SQL TYPE IS
  { BINARY LARGE OBJECT | BLOB | CHARACTER LARGE OBJECT | CHAR LARGE OBJECT | CLOB | DBCLOB } ( length [ K | M | G ] )
  | { BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR }
Table 20. SQL data types the precompiler uses for PL/I declarations

  PL/I data type                       SQLTYPE of      SQLLEN of          SQL data type
                                       host variable   host variable
  BIN FIXED(n) 1<=n<=15                500             2                  SMALLINT
  BIN FIXED(n) 16<=n<=31               496             4                  INTEGER
  DEC FIXED(p,s)                       484             p in byte 1,       DECIMAL(p,s)
  0<=p<=31 and 0<=s<=p (Note 1)                        s in byte 2
  BIN FLOAT(p) 1<=p<=21                480             4                  REAL or FLOAT(n) 1<=n<=21
  BIN FLOAT(p) 22<=p<=53               480             8                  DOUBLE PRECISION or
                                                                          FLOAT(n) 22<=n<=53
  DEC FLOAT(m) 1<=m<=6                 480             4                  FLOAT (single precision)
  DEC FLOAT(m) 7<=m<=16                480             8                  FLOAT (double precision)
  CHAR(n)                              452             n                  CHAR(n)
  CHAR(n) VARYING 1<=n<=255            448             n                  VARCHAR(n)
  CHAR(n) VARYING n>255                456             n                  VARCHAR(n)
  GRAPHIC(n)                           468             n                  GRAPHIC(n)
  GRAPHIC(n) VARYING 1<=n<=127         464             n                  VARGRAPHIC(n)
  GRAPHIC(n) VARYING n>127             472             n                  VARGRAPHIC(n)
  SQL TYPE IS RESULT_SET_LOCATOR       972             4                  Result set locator (Note 2)
  SQL TYPE IS TABLE LIKE               976             4                  Table locator (Note 2)
  table-name AS LOCATOR
  SQL TYPE IS BLOB_LOCATOR             960             4                  BLOB locator (Note 2)
  SQL TYPE IS CLOB_LOCATOR             964             4                  CLOB locator (Note 2)
  SQL TYPE IS DBCLOB_LOCATOR           968             4                  DBCLOB locator (Note 2)
  SQL TYPE IS BLOB(n)                  404             n                  BLOB(n)
  1<=n<=2147483647
  SQL TYPE IS CLOB(n)                  408             n                  CLOB(n)
  1<=n<=2147483647
  SQL TYPE IS DBCLOB(n)                412             n                  DBCLOB(n) (Note 3)
  1<=n<=1073741823 (Note 3)
  SQL TYPE IS ROWID                    904             40                 ROWID

Notes:
1. If p=0, DB2 interprets it as DECIMAL(31). For example, DB2 interprets a PL/I data type of DEC FIXED(0,0) to be DECIMAL(31,0), which equates to the SQL data type of DECIMAL(31,0).
2. Do not use this data type as a column type.
3. n is the number of double-byte characters.
Table 21 on page 243 helps you define host variables that receive output from the database. You can use the table to determine the PL/I data type that is equivalent to a given SQL data type. For example, if you retrieve TIMESTAMP data, you can use the table to define a suitable host variable in the program that receives the data value.
Table 21 shows direct conversions between DB2 data types and host data types. However, a number of DB2 data types are compatible. When you do assignments or comparisons of data that have compatible data types, DB2 does conversions between those compatible data types. See Table 1 on page 5 for information on compatible data types.
Table 21. SQL data types mapped to typical PL/I declarations

  SQL data type            PL/I equivalent                    Notes
  SMALLINT                 BIN FIXED(n)                       1<=n<=15
  INTEGER                  BIN FIXED(n)                       16<=n<=31
  DECIMAL(p,s) or          If p<16: DEC FIXED(p) or           p is precision; s is scale. 1<=p<=31 and
  NUMERIC(p,s)             DEC FIXED(p,s)                     0<=s<=p. If p>15, the PL/I compiler must
                                                              support 31-digit decimal variables.
  REAL or FLOAT(n)         BIN FLOAT(p) or DEC FLOAT(m)       1<=n<=21, 1<=p<=21, and 1<=m<=6
  DOUBLE PRECISION,        BIN FLOAT(p) or DEC FLOAT(m)       22<=n<=53, 22<=p<=53, and 7<=m<=16
  DOUBLE, or FLOAT(n)
  CHAR(n)                  CHAR(n)                            1<=n<=255
  VARCHAR(n)               CHAR(n) VAR
  GRAPHIC(n)               GRAPHIC(n)                         n refers to the number of double-byte
                                                              characters, not to the number of bytes.
                                                              1<=n<=127
  VARGRAPHIC(n)            GRAPHIC(n) VAR                     n refers to the number of double-byte
                                                              characters, not to the number of bytes.
  DATE                     CHAR(n)                            If you are using a date exit routine, that
                                                              routine determines n; otherwise, n must be
                                                              at least 10.
  TIME                     CHAR(n)                            If you are using a time exit routine, that
                                                              routine determines n. Otherwise, n must be
                                                              at least 6; to include seconds, n must be
                                                              at least 8.
  TIMESTAMP                CHAR(n)                            n must be at least 19. To include
                                                              microseconds, n must be 26; if n is less
                                                              than 26, the microseconds part is
                                                              truncated.
  Result set locator       SQL TYPE IS RESULT_SET_LOCATOR     Use this data type only for receiving
                                                              result sets. Do not use this data type as
                                                              a column type.
  Table locator            SQL TYPE IS TABLE LIKE             Use this data type only in a user-defined
                           table-name AS LOCATOR              function or stored procedure to receive
                                                              rows of a transition table. Do not use
                                                              this data type as a column type.
  BLOB locator             SQL TYPE IS BLOB_LOCATOR           Use this data type only to manipulate data
                                                              in BLOB columns. Do not use this data type
                                                              as a column type.
  CLOB locator             SQL TYPE IS CLOB_LOCATOR           Use this data type only to manipulate data
                                                              in CLOB columns. Do not use this data type
                                                              as a column type.
  DBCLOB locator           SQL TYPE IS DBCLOB_LOCATOR         Use this data type only to manipulate data
                                                              in DBCLOB columns. Do not use this data
                                                              type as a column type.
  BLOB(n)                  SQL TYPE IS BLOB(n)                1<=n<=2147483647
  CLOB(n)                  SQL TYPE IS CLOB(n)                1<=n<=2147483647
  DBCLOB(n)                SQL TYPE IS DBCLOB(n)              n is the number of double-byte characters.
                                                              1<=n<=1073741823
  ROWID                    SQL TYPE IS ROWID
PL/I scoping rules: The precompiler does not support PL/I scoping rules.

Overflow: Be careful of overflow. For example, if you retrieve an INTEGER column value into a BIN FIXED(15) host variable and the column value is larger than 32767 or smaller than -32768, you get an overflow warning or an error, depending on whether you provided an indicator variable.

Truncation: Be careful of truncation. For example, if you retrieve an 80-character CHAR column value into a CHAR(70) host variable, the rightmost ten characters of the retrieved string are truncated. Retrieving a double-precision floating-point or decimal column value into a BIN FIXED(31) host variable removes any fractional part of the value. Similarly, retrieving a column value with a DECIMAL data type into a PL/I decimal variable with a lower precision might truncate the value.
  - Use a SET assignment statement to assign a GRAPHIC or VARGRAPHIC transition variable to a DBCLOB locator host variable.
  - Use a VALUES INTO statement to assign a GRAPHIC or VARGRAPHIC function parameter to a DBCLOB locator host variable.
  However, you cannot use a FETCH statement to assign a value in a GRAPHIC or VARGRAPHIC column to a DBCLOB locator host variable.
v Datetime data types are compatible with character host variables. A DATE, TIME, or TIMESTAMP column is compatible with a fixed-length or varying-length PL/I character host variable.
v A BLOB column or a BLOB locator is compatible only with a BLOB host variable.
v The ROWID column is compatible only with a ROWID host variable.
v A host variable is compatible with a distinct type if the host variable type is compatible with the source type of the distinct type. For information on assigning and comparing distinct types, see Chapter 16, Creating and using distinct types, on page 367.
When necessary, DB2 automatically converts a fixed-length string to a varying-length string, or a varying-length string to a fixed-length string.
DCL CLS_CD   CHAR(7);
DCL DAY      BIN FIXED(15);
DCL BGN      CHAR(8);
DCL END      CHAR(8);
DCL (DAY_IND, BGN_IND, END_IND)  BIN FIXED(15);
You can specify host variable attributes in any order that is acceptable to PL/I. For example, BIN FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all acceptable. Figure 115 shows the syntax for declarations of indicator variables.
Declaring indicator arrays: Figure 116 shows the syntax for declarations of indicator arrays.
DECLARE|DCL variable-name ( dimension ) | ( variable-name ( dimension ) ,... )
  BINARY|BIN FIXED ( precision ) ;
DSNTIAR syntax:

  CALL DSNTIAR ( sqlca, message, lrecl );

The DSNTIAR parameters have the following meanings:
sqlca
  An SQL communication area.

message
  An output area, in VARCHAR format, in which DSNTIAR places the message text. The first halfword contains the length of the remaining area; its minimum value is 240. The output lines of text, each line being the length specified in lrecl, are put into this area. For example, you could specify the format of the output area as:
DCL DATA_LEN FIXED BIN(31) INIT(132);
DCL DATA_DIM FIXED BIN(31) INIT(10);
DCL 1 ERROR_MESSAGE AUTOMATIC,
    3 ERROR_LEN  FIXED BIN(15) UNAL INIT((DATA_LEN*DATA_DIM)),
    3 ERROR_TEXT(DATA_DIM) CHAR(DATA_LEN);
 .
 .
 .
CALL DSNTIAR ( SQLCA, ERROR_MESSAGE, DATA_LEN );
where ERROR_MESSAGE is the name of the message output area, DATA_DIM is the number of lines in the message output area, and DATA_LEN is the length of each line.

lrecl
  A fullword containing the logical record length of output messages, between 72 and 240.

Because DSNTIAR is an assembler language program, you must include the following directives in your PL/I application:
DCL DSNTIAR ENTRY OPTIONS (ASM,INTER,RETCODE);
An example of calling DSNTIAR from an application appears in the DB2 sample assembler program DSN8BP3, contained in the library DSN8810.SDSNSAMP. See Appendix B, Sample applications, on page 1013 for instructions on how to access and print the source code for the sample program.
CICS

If your CICS application requires CICS storage handling, you must use the subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC (eib, commarea, sqlca, msg, lrecl);
DSNTIAC has extra parameters, which you must use for calls to routines that use CICS commands.

eib
  EXEC interface block

commarea
  communication area

For more information on these parameters, see the appropriate application programming guide for CICS. The remaining parameter descriptions are the same as those for DSNTIAR. Both DSNTIAC and DSNTIAR format the SQLCA in the same way.

You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you must also define them in the CSD. For an example of CSD entry generation statements for use with DSNTIAC, see job DSNTEJ5A.

The assembler source code for DSNTIAC and job DSNTEJ5A, which assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
See Appendix D of DB2 SQL Reference for information on the fields in the REXX SQLCA.
CONNECT 'subsystem-ID' | REXX-variable
Note: CALL SQLDBS ATTACH TO ssid is equivalent to ADDRESS DSNREXX CONNECT ssid.
EXECSQL Executes SQL statements in REXX procedures. The syntax of EXECSQL is:
"SQL-statement" REXX-variable
Notes:
1. CALL SQLEXEC is equivalent to EXECSQL.
2. EXECSQL can be enclosed in single or double quotation marks.
See Embedding SQL statements in a REXX procedure on page 252 for more information.

DISCONNECT Disconnects the REXX procedure from a DB2 subsystem. You should execute DISCONNECT to release resources that are held by DB2. The syntax of DISCONNECT is:

DISCONNECT
These application programming interfaces are available through the DSNREXX host command environment. To make DSNREXX available to the application, invoke the RXSUBCOM function. The syntax is:
RXSUBCOM ( ADD|DELETE , DSNREXX , DSNREXX )
The ADD function adds DSNREXX to the REXX host command environment table. The DELETE function deletes DSNREXX from the REXX host command environment table. Figure 117 shows an example of REXX code that makes DSNREXX available to an application.
'SUBCOM DSNREXX'                               /* HOST CMD ENV AVAILABLE?   */
IF RC THEN                                     /* IF NOT, MAKE IT AVAILABLE */
  S_RC = RXSUBCOM('ADD','DSNREXX','DSNREXX')   /* ADD HOST CMD ENVIRONMENT  */
ADDRESS DSNREXX                                /* SEND ALL COMMANDS OTHER   */
                                               /* THAN REXX INSTRUCTIONS TO */
                                               /* DSNREXX                   */
                                               /* CALL CONNECT, EXECSQL, AND*/
                                               /* DISCONNECT INTERFACES     */
 .
 .
 .
S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')  /* WHEN DONE WITH            */
                                               /* DSNREXX, REMOVE IT.       */

Figure 117. Making DSNREXX available to an application
You cannot execute a SELECT, INSERT, UPDATE, or DELETE statement that contains host variables. Instead, you must execute PREPARE on the statement, with parameter markers substituted for the host variables, and then use the host variables in an EXECUTE, OPEN, or FETCH statement. See Using REXX host variables and data types on page 254 for more information. An SQL statement follows rules that apply to REXX commands. The SQL statement can optionally end with a semicolon and can be enclosed in single or double quotation marks, as in the following example:
'EXECSQL COMMIT;'
Comments: You cannot include REXX comments (/* ... */) or SQL comments (--) within SQL statements. However, you can include REXX comments anywhere else in the procedure.

Continuation for SQL statements: SQL statements that span lines follow REXX rules for statement continuation. You can break the statement into several strings, each of which fits on a line, and separate the strings with commas or with concatenation operators followed by commas. For example, either of the following statements is valid:
EXECSQL , "UPDATE DSN8810.DEPT" , "SET MGRNO = 000010" , "WHERE DEPTNO = D11" "EXECSQL " || , " UPDATE DSN8810.DEPT " || , " SET MGRNO = 000010" || , " WHERE DEPTNO = D11"
Including code: The EXECSQL INCLUDE statement is not valid for REXX. You therefore cannot include externally defined SQL statements in a procedure.
Margins: Like REXX commands, SQL statements can begin and end anywhere on a line.

Names: You can use any valid REXX name that does not end with a period as a host variable. However, host variable names should not begin with SQL, RDI, DSN, RXSQL, or QRW. Variable names can be at most 64 bytes.

Nulls: A REXX null value and an SQL null value are different. The REXX language has a null string (a string of length 0) and a null clause (a clause that contains only blanks and comments). The SQL null value is a special value that is distinct from all nonnull values and denotes the absence of a value. Assigning a REXX null value to a DB2 column does not make the column value null.

Statement labels: You can precede an SQL statement with a label, in the same way that you label REXX commands.

Handling errors and warnings: DB2 does not support the SQL WHENEVER statement in a REXX procedure. To handle SQL errors and warnings, use the following methods:
v To test for SQL errors or warnings, test the SQLCODE or SQLSTATE value and the SQLWARN.n values after each EXECSQL call. This method does not detect errors in the REXX interface to DB2.
v To test for SQL errors or warnings or errors or warnings from the REXX interface to DB2, test the REXX RC variable after each EXECSQL call. Table 22 lists the values of the RC variable. You can also use the REXX SIGNAL ON ERROR and SIGNAL ON FAILURE keyword instructions to detect negative values of the RC variable and transfer control to an error routine.
Table 22. REXX return codes after SQL statements

  Return code   Meaning
  0             No SQL warning or error occurred.
  +1            An SQL warning occurred.
  -1            An SQL error occurred.
  -3            The first token after ADDRESS DSNREXX is in error. For a description of the
                tokens allowed, see Accessing the DB2 REXX Language Support application
                programming interfaces on page 250.
s1 to s100 Prepared statement names for DECLARE STATEMENT, PREPARE, DESCRIBE, and EXECUTE statements. Use only the predefined names for cursors and statements. When you associate a cursor name with a statement name in a DECLARE CURSOR statement, the cursor name and the statement must have the same number. For example, if you declare cursor c1, you need to declare it for statement s1:
EXECSQL DECLARE C1 CURSOR FOR S1
Table 23. SQL input data types and REXX data formats (continued)

  SQL data type      SQLTYPE for
  assigned by DB2    data type     REXX input data format
  DECIMAL(p,s)       484/485       One of the following formats:
                                   v A string of numerics that contains a decimal point but no exponent
                                     identifier. p represents the precision and s represents the scale
                                     of the decimal number that the string represents. The first
                                     character can be a plus (+) or minus (-) sign.
                                   v A string of numerics that does not contain a decimal point or an
                                     exponent identifier. The first character can be a plus (+) or
                                     minus (-) sign. The number that is represented is less than
                                     -2147483647 or greater than 2147483647.
  FLOAT              480/481       A string that represents a number in scientific notation. The string
                                   consists of a series of numerics followed by an exponent identifier
                                   (an E or e followed by an optional plus (+) or minus (-) sign and a
                                   series of numerics). The string can begin with a plus (+) or minus
                                   (-) sign.
  VARCHAR(n)         448/449       One of the following formats:
                                   v A string of length n, enclosed in single or double quotation
                                     marks.
                                   v The character X or x, followed by a string enclosed in single or
                                     double quotation marks. The string within the quotation marks has
                                     a length of 2*n bytes and is the hexadecimal representation of a
                                     string of n characters.
                                   v A string of length n that does not have a numeric or graphic
                                     format, and does not satisfy either of the previous conditions.
  VARGRAPHIC(n)      464/465       One of the following formats:
                                   v The character G, g, N, or n, followed by a string enclosed in
                                     single or double quotation marks. The string within the quotation
                                     marks begins with a shift-out character (X'0E') and ends with a
                                     shift-in character (X'0F'). Between the shift-out character and
                                     shift-in character are n double-byte characters.
                                   v The characters GX, Gx, gX, or gx, followed by a string enclosed
                                     in single or double quotation marks. The string within the
                                     quotation marks has a length of 4*n bytes and is the hexadecimal
                                     representation of a string of n double-byte characters.
For example, when DB2 executes the following statements to update the MIDINIT column of the EMP table, DB2 must determine a data type for HVMIDINIT:
SQLSTMT="UPDATE EMP" , "SET MIDINIT = ?" , "WHERE EMPNO = 000200" "EXECSQL PREPARE S100 FROM :SQLSTMT" HVMIDINIT=H "EXECSQL EXECUTE S100 USING" , ":HVMIDINIT"
Because the data that is assigned to HVMIDINIT has a format that fits a character data type, DB2 REXX Language Support assigns a VARCHAR type to the input data.
Enclosing the string in apostrophes is not adequate because REXX removes the apostrophes when it assigns a literal to a variable. For example, suppose that you want to pass the value in host variable stringvar to DB2. The value that you want to pass is the string '100'. The first thing that you need to do is to assign the string to the host variable. You might write a REXX command like this:
stringvar = '100'
After the command executes, stringvar contains the characters 100 (without the apostrophes). DB2 REXX Language Support then passes the numeric value 100 to DB2, which is not what you intended. However, suppose that you write the command like this:
stringvar = ""100""
In this case, REXX assigns the string '100' to stringvar, including the single quotation marks. DB2 REXX Language Support then passes the string '100' to DB2, which is the desired result.
INSQLDA.1.SQLLEN = 1
INSQLDA.1.SQLDATA = 'H'
INSQLDA.1.SQLIND = 0
SQLSTMT="UPDATE EMP" ,
        "SET MIDINIT = ?" ,
        "WHERE EMPNO = '000200'"
"EXECSQL PREPARE S100 FROM :SQLSTMT"
"EXECSQL EXECUTE S100 USING DESCRIPTOR :INSQLDA"
Example: Specifying DECIMAL with precision and scale: Suppose you want to tell DB2 that the data is of type DECIMAL with precision and nonzero scale. You need to set up an SQLDA that contains a description of a DECIMAL column:
INSQLDA.SQLD = 1                          /* SQLDA contains one variable */
INSQLDA.1.SQLTYPE = 484                   /* Type of variable is DECIMAL */
INSQLDA.1.SQLLEN.SQLPRECISION = 18        /* Precision of variable is 18 */
INSQLDA.1.SQLLEN.SQLSCALE = 8             /* Scale of variable is 8      */
INSQLDA.1.SQLDATA = 9876543210.87654321   /* Value in variable           */
Table 24. SQL output data types and REXX data formats

  SQL data type         REXX output data format
  SMALLINT              A string of numerics that does not contain leading zeroes, a decimal point,
  INTEGER               or an exponent identifier. If the string represents a negative number, it
                        begins with a minus (-) sign. The numeric value is between -2147483647 and
                        2147483647, inclusive.
  DECIMAL(p,s)          A string of numerics with one of the following formats:
                        v Contains a decimal point but not an exponent identifier. The string is
                          padded with zeroes to match the scale of the corresponding table column.
                          If the value represents a negative number, it begins with a minus (-)
                          sign.
                        v Does not contain a decimal point or an exponent identifier. The numeric
                          value is less than -2147483647 or greater than 2147483647. If the value
                          is negative, it begins with a minus (-) sign.
  FLOAT(n), REAL,       A string that represents a number in scientific notation. The string
  DOUBLE                consists of a numeric, a decimal point, a series of numerics, and an
                        exponent identifier. The exponent identifier is an E followed by a minus
                        (-) sign and a series of numerics if the number is between -1 and 1.
                        Otherwise, the exponent identifier is an E followed by a series of
                        numerics. If the string represents a negative number, it begins with a
                        minus (-) sign.
  CHAR(n),              A character string of length n bytes. The string is not enclosed in single
  VARCHAR(n)            or double quotation marks.
  GRAPHIC(n),           A string of length 2*n bytes. Each pair of bytes represents a double-byte
  VARGRAPHIC(n)         character. This string does not contain a leading G, is not enclosed in
                        quotation marks, and does not contain shift-out or shift-in characters.
Because you cannot use the SELECT INTO statement in a REXX procedure, to retrieve data from a DB2 table you must prepare a SELECT statement, open a cursor for the prepared statement, and then fetch rows into host variables or an SQLDA using the cursor. The following example demonstrates how you can retrieve data from a DB2 table using an SQLDA:
SQLSTMT= ,
  "SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME," ,
  "       WORKDEPT, PHONENO, HIREDATE, JOB," ,
  "       EDLEVEL, SEX, BIRTHDATE, SALARY," ,
  "       BONUS, COMM" ,
  "  FROM EMP"
"EXECSQL DECLARE C1 CURSOR FOR S1"
"EXECSQL PREPARE S1 INTO :OUTSQLDA FROM :SQLSTMT"
"EXECSQL OPEN C1"
Do Until(SQLCODE ¬= 0)
  "EXECSQL FETCH C1 USING DESCRIPTOR :OUTSQLDA"
  If SQLCODE = 0 Then Do
    Line = ''
    Do I = 1 To OUTSQLDA.SQLD
      Line = Line OUTSQLDA.I.SQLDATA
    End I
    Say Line
  End
End
other languages. When you want to pass a null value to a DB2 column, in addition to putting a negative value in an indicator variable, you also need to put a valid value in the corresponding host variable. For example, to set a value of WORKDEPT in table EMP to null, use statements like these:
SQLSTMT="UPDATE EMP" , "SET WORKDEPT = ?" HVWORKDEPT=000 INDWORKDEPT=-1 "EXECSQL PREPARE S100 FROM :SQLSTMT" "EXECSQL EXECUTE S100 USING :HVWORKDEPT :INDWORKDEPT"
After you retrieve data from a column that can contain null values, you should always check the indicator variable that corresponds to the output host variable for that column. If the indicator variable value is negative, the retrieved value is null, so you can disregard the value in the host variable. In the following program, the phone number for employee Haas is selected into variable HVPhone. After the SELECT statement executes, if no phone number for employee Haas is found, indicator variable INDPhone contains -1.
'SUBCOM DSNREXX'
IF RC THEN ,
  S_RC = RXSUBCOM('ADD','DSNREXX','DSNREXX')
ADDRESS DSNREXX
'CONNECT' 'DSN'
SQLSTMT = ,
  "SELECT PHONENO FROM DSN8810.EMP WHERE LASTNAME='HAAS'"
"EXECSQL DECLARE C1 CURSOR FOR S1"
"EXECSQL PREPARE S1 FROM :SQLSTMT"
Say "SQLCODE from PREPARE is "SQLCODE
"EXECSQL OPEN C1"
Say "SQLCODE from OPEN is "SQLCODE
"EXECSQL FETCH C1 INTO :HVPhone :INDPhone"
Say "SQLCODE from FETCH is "SQLCODE
If INDPhone < 0 Then ,
  Say 'Phone number for Haas is null.'
"EXECSQL CLOSE C1"
Say "SQLCODE from CLOSE is "SQLCODE
S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
To change the isolation level for SQL statements in a REXX procedure, execute the SET CURRENT PACKAGESET statement to select the package with the isolation level you need. For example, to change the isolation level to cursor stability, execute this SQL statement:
"EXECSQL SET CURRENT PACKAGESET=DSNREXCS"
Using check constraints makes your programming task easier, because you do not need to enforce those constraints within application programs or with a validation routine. Define check constraints on one or more columns in a table when that table is created or altered.
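For example, a check constraint can be included in the CREATE TABLE statement or added later with ALTER TABLE. The following sketch uses hypothetical table, column, and constraint names; it is only an illustration of the statement forms.

  CREATE TABLE MYEMP
    (EMPNO    CHAR(6) NOT NULL,
     SALARY   DECIMAL(9,2),
     COMM     DECIMAL(9,2),
     CONSTRAINT PAY_CK CHECK (SALARY >= 0 AND COMM >= 0));

  ALTER TABLE MYEMP
    ADD CONSTRAINT BONUS_CK CHECK (SALARY <= 999999.99);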
A check constraint is not checked for consistency with other types of constraints. For example, a column in a dependent table can have a referential constraint with a delete rule of SET NULL. You can also define a check constraint that prohibits nulls in the column. As a result, an attempt to delete a parent row fails, because setting the dependent row to null violates the check constraint. Similarly, a check constraint is not checked for consistency with a validation routine, which is applied to a table before a check constraint. If the routine requires a column to be greater than or equal to 10 and a check constraint requires the same column to be less than 10, table inserts are not possible. Plans and packages do not need to be rebound after check constraints are defined on or removed from a table.
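As an example of the first case, the following pair of definitions is accepted by DB2, but the SET NULL delete rule can never be applied successfully because the check constraint prohibits nulls in the foreign key column. The table, column, and constraint names are hypothetical, and the sketch assumes that a parent table PARENT_TBL with an appropriate primary key already exists.

  CREATE TABLE CHILD_TBL
    (ID        INTEGER NOT NULL,
     PARENT_ID CHAR(3),
     FOREIGN KEY (PARENT_ID) REFERENCES PARENT_TBL ON DELETE SET NULL,
     CONSTRAINT PARENT_CK CHECK (PARENT_ID IS NOT NULL));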
v The LOAD utility is run with CONSTRAINTS NO, and check constraints are defined on the table.
v CHECK DATA is run on a table that contains violations of check constraints.
v A point-in-time RECOVER introduces violations of check constraints.
[Diagram: referential relationships among the sample tables. DEPT, EMP, PROJ, PROJACT, EMPPROJACT, and ACT are related by referential constraints with delete rules of CASCADE, SET NULL, and RESTRICT.]
When a table refers to an entity for which there is a master list, it should identify an occurrence of the entity that actually appears in the master list; otherwise, either the reference is invalid or the master list is incomplete. Referential constraints enforce the relationship between a table and a master list.
In some cases, using a timestamp as part of the key can be helpful, for example when a table does not have a natural unique key or if arrival sequence is the key.

Primary keys for some of the sample tables are:

  Table              Key Column
  Employee table     EMPNO
  Department table   DEPTNO
  Project table      PROJNO
Table 25 shows part of the project table which has the primary key column, PROJNO.
Table 25. Part of the project table with the primary key column, PROJNO

  PROJNO   PROJNAME                DEPTNO
  MA2100   WELD LINE AUTOMATION    D01
  MA2110   W L PROGRAMMING         D11
Table 26 shows part of the project activity table, which has a primary key that contains more than one column. The primary key is a composite key, which consists of the PROJNO, ACTNO, and ACSTDATE columns.
Table 26. Part of the project activities table with a composite primary key

  PROJNO   ACTNO   ACSTAFF   ACSTDATE     ACENDATE
  AD3100   10      .50       1982-01-01   1982-07-01
  AD3110   10      1.00      1982-01-01   1983-01-01
  AD3111   60      .50       1982-03-15   1982-04-15
A table can have no more than one primary key. A primary key obeys the same restrictions as do index keys:
v The key can include no more than 64 columns.
v No column can be named twice.
v The sum of the column length attributes cannot be greater than 2000.

You define a list of columns as the primary key of a table with the PRIMARY KEY clause in the CREATE TABLE statement. To add a primary key to an existing table, use the PRIMARY KEY clause in an ALTER TABLE statement. In this case, a unique index must already exist.
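For example, the following statements show both forms; the table and column names are simplified stand-ins for your own definitions, and the ALTER TABLE form assumes that a unique index already exists on the key column.

  CREATE TABLE MYDEPT
    (DEPTNO   CHAR(3)     NOT NULL,
     DEPTNAME VARCHAR(36) NOT NULL,
     PRIMARY KEY (DEPTNO));

  ALTER TABLE MYPROJ
    ADD PRIMARY KEY (PROJNO);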
Incomplete definition
If a table is created with a primary key, its primary index is the first unique index created on its primary key columns, with the same order of columns as the primary key columns. The columns of the primary index can be in either ascending or descending order. The table has an incomplete definition until you create an index on the parent key. This incomplete definition status is recorded as a P in the TABLESTATUS column of SYSIBM.SYSTABLES. Use of a table with an incomplete definition is severely restricted: you can drop the table, create the primary index, and drop or create other indexes; you cannot load the table, insert data, retrieve data, update data, delete data, or create foreign keys that reference the primary key. Because of these restrictions, plan to create the primary index soon after creating the table. For example, to create the primary index for the project activity table, issue:
CREATE UNIQUE INDEX XPROJAC1 ON DSN8810.PROJACT (PROJNO, ACTNO, ACSTDATE);
Creating the primary index resets the incomplete definition status and its associated restrictions. But if you drop the primary index, the table reverts to incomplete definition status; to reset the status, you must create the primary index or alter the table to drop the primary key. If the primary key is added later with ALTER TABLE, a unique index on the key columns must already exist. If more than one unique index is on those columns, DB2 chooses one arbitrarily to be the primary index.
v A view that can be updated that is defined on a table with a primary key should include all columns of the key. Although this is necessary only if the view is used for inserts, the unique identification of rows can be useful if the view is used for updates, deletes, or selects.
v You can drop a primary key later with SQL if your database or application design changes.
The name is used in error messages, queries to the catalog, and DROP FOREIGN KEY statements. Hence, you might want to choose one if you are experimenting with your database design and have more than one foreign key beginning with the same column (otherwise DB2 generates the name).
You can create an index on the columns of a foreign key in the same way you create one on any other set of columns. Most often it is not a unique index. If you do create a unique index on a foreign key, it introduces an additional constraint on the values of the columns.

To let an index on the foreign key be used on the dependent table for a delete operation on a parent table, the columns of the index on the foreign key must be identical to and in the same order as the columns in the foreign key.

A foreign key can also be the primary key; then the primary index is also a unique index on the foreign key. In that case, every row of the parent table has at most one dependent row. The dependent table might be used to hold information that pertains to only a few of the occurrences of the entity described by the parent table. For example, a dependent of the employee table might contain information that applies only to employees working in a different country.

The primary key can share columns of the foreign key if the first n columns of the foreign key are the same as the primary key's columns. Again, the primary index serves as an index on the foreign key. In the sample project activity table, the primary index (on PROJNO, ACTNO, ACSTDATE) serves as an index on the foreign key on PROJNO. It does not serve as an index on the foreign key on ACTNO, because ACTNO is not the first column of the index.
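As a sketch of creating such an index on the sample project table's foreign key on DEPTNO (the index name here is illustrative), you might use a statement like this:

CREATE INDEX XPROJ2
  ON DSN8810.PROJ (DEPTNO);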
Figure: A valid cycle and an invalid cycle of referential constraints among TABLE1, TABLE2, and TABLE3, built from delete rules of CASCADE and SET NULL.
Alternatively, a delete operation on a self-referencing table must involve the same table, and the delete rule there must be CASCADE or NO ACTION. Recommendation: Avoid creating a cycle in which all the delete rules are RESTRICT and none of the foreign keys allows nulls. If you do this, no row of any of the tables can ever be deleted.
Refer to Part 3 (Volume 1) of DB2 Administration Guide for more information about multilevel security with row-level granularity.
v If you define the column as GENERATED ALWAYS (which is the default), DB2 always generates a unique value for the column. You cannot insert data into that column. In this case, DB2 does not require an index to guarantee unique values. For more information, see Inserting data into a ROWID column on page 30.
The following code uses the SELECT from INSERT statement to retrieve the value of the ROWID column from a new row that is inserted into the EMPLOYEE table. This value is then used to reference that row for the update of the SALARY column.
EXEC SQL BEGIN DECLARE SECTION;
  SQL TYPE IS ROWID hv_emp_rowid;
  short hv_dept, hv_empno;
  char hv_name[30];
  decimal(7,2) hv_salary;
EXEC SQL END DECLARE SECTION;
...
EXEC SQL SELECT EMP_ROWID INTO :hv_emp_rowid
  FROM FINAL TABLE (INSERT INTO EMPLOYEE
    VALUES (DEFAULT, :hv_empno, :hv_name, :hv_salary, :hv_dept));
EXEC SQL UPDATE EMPLOYEE SET SALARY = SALARY + 1200
  WHERE EMP_ROWID = :hv_emp_rowid;
EXEC SQL COMMIT;
For DB2 to be able to use direct row access for the update operation, the SELECT from INSERT statement and the UPDATE statement must execute within the same unit of work. If these statements execute in different units of work, the ROWID value for the inserted row might change due to a REORG of the table space before the update operation. For more information about predicates and direct row access, see Is direct row access possible? (PRIMARY_ACCESSTYPE = D) on page 803.
You can use identity columns for primary keys that are typically unique sequential numbers, for example, order numbers or employee numbers. By doing so, you can avoid the concurrency problems that can result when an application generates its own unique counter outside the database.

Recommendation: Set the values of the foreign keys in the dependent tables after loading the parent table. If you use an identity column as a parent key in a referential integrity structure, loading data into that structure could be quite complicated. The values for the identity column are not known until the table is loaded because the column is defined as GENERATED ALWAYS.

You might have gaps in identity column values for the following reasons:
v If other applications are inserting values into the same identity column
v If DB2 terminates abnormally before it assigns all the cached values
v If your application rolls back a transaction that inserts identity values
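The definition of table T1 is not included in this excerpt. A definition consistent with the results described below (a CHARCOL1 column plus an identity column that starts at -1, increments by 1, and cycles between MINVALUE -3 and MAXVALUE 3) would look something like this:

CREATE TABLE T1
  (CHARCOL1  CHAR(1),
   IDENTCOL1 SMALLINT GENERATED ALWAYS AS IDENTITY
     (START WITH -1,
      INCREMENT BY 1,
      CYCLE,
      MINVALUE -3,
      MAXVALUE 3));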
Now suppose that you execute the following INSERT statement eight times:
INSERT INTO T1 (CHARCOL1) VALUES ('A');
When DB2 generates values for IDENTCOL1, it starts with -1 and increments by 1 until it reaches the MAXVALUE of 3 on the fifth INSERT. To generate the value for the sixth INSERT, DB2 cycles back to MINVALUE, which is -3. T1 looks like this after the eight INSERTs are executed:
CHARCOL1  IDENTCOL1
========  =========
A         -1
A          0
A          1
A          2
A          3
A         -3
A         -2
A         -1
The value of IDENTCOL1 for the eighth INSERT repeats the value of IDENTCOL1 for the first INSERT.
When you insert a new employee into the EMPLOYEE table, to retrieve the value for the EMPNO column, you can use the following SELECT from INSERT statement:
EXEC SQL SELECT EMPNO INTO :hv_empno
  FROM FINAL TABLE (INSERT INTO EMPLOYEE (NAME, SALARY, WORKDEPT)
    VALUES ('New Employee', 75000.00, 11));
The SELECT statement returns the DB2-generated identity value for the EMPNO column in the host variable :hv_empno. You can then use the value in :hv_empno to update the MGRNO column in the DEPARTMENT table with the new employee as the department manager:
EXEC SQL UPDATE DEPARTMENT SET MGRNO = :hv_empno WHERE DEPTNO = 11;
Example: Using IDENTITY_VAL_LOCAL: The following INSERT and UPDATE statements are equivalent to the INSERT and UPDATE statements of the previous example:
INSERT INTO EMPLOYEE (NAME, SALARY, WORKDEPT)
  VALUES ('New Employee', 75000.00, 11);
UPDATE DEPARTMENT SET MGRNO = IDENTITY_VAL_LOCAL()
  WHERE DEPTNO = 11;
The INSERT statement and the IDENTITY_VAL_LOCAL function must be at the same processing level.
The values that DB2 generates for a sequence depend on how the sequence is created. The START WITH parameter determines the first value that DB2 generates. The values advance by the INCREMENT BY parameter in ascending or descending order. The MINVALUE and MAXVALUE parameters determine the minimum and maximum values that DB2 generates. The CYCLE or NO CYCLE parameter determines whether DB2 wraps values when it has generated all values between the START WITH value and MAXVALUE if the values are ascending, or between the START WITH value and MINVALUE if the values are descending.
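For example, a minimal sketch of a descending sequence (the sequence name is hypothetical) that wraps back to MAXVALUE after it generates MINVALUE might look like this:

CREATE SEQUENCE COUNTDOWN_SEQ AS SMALLINT
  START WITH 10
  INCREMENT BY -1
  MINVALUE 1
  MAXVALUE 10
  CYCLE;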
You create a sequence named ORDER_SEQ to use as key values for both the ORDERS and ORDER_ITEMS tables:
CREATE SEQUENCE ORDER_SEQ AS INTEGER
  START WITH 1
  INCREMENT BY 1
  NO MAXVALUE
  NO CYCLE
  CACHE 20;
You can then use the same sequence number as a primary key value for the ORDERS table and as part of the primary key value for the ORDER_ITEMS table:
INSERT INTO ORDERS (ORDERNO, CUSTNO)
  VALUES (NEXT VALUE FOR ORDER_SEQ, 12345);
INSERT INTO ORDER_ITEMS (ORDERNO, PARTNO, QUANTITY)
  VALUES (PREVIOUS VALUE FOR ORDER_SEQ, 987654, 2);
The NEXT VALUE expression in the first INSERT statement generates a sequence number value for the sequence object ORDER_SEQ. The PREVIOUS VALUE expression in the second INSERT statement retrieves that same value because it was the sequence number most recently generated for that sequence object within the current application process.
1  CREATE TRIGGER REORDER
2  AFTER
3  UPDATE OF ON_HAND, MAX_STOCKED
4  ON PARTS
5  REFERENCING NEW AS N_ROW
6  FOR EACH ROW MODE DB2SQL
7  WHEN (N_ROW.ON_HAND < 0.10 * N_ROW.MAX_STOCKED)
8  BEGIN ATOMIC
     CALL ISSUE_SHIP_REQUEST(N_ROW.MAX_STOCKED - N_ROW.ON_HAND, N_ROW.PARTNO);
   END

Figure 121. Example of a trigger
The parts of this trigger are:
1  Trigger name (REORDER)
2  Trigger activation time (AFTER)
3  Triggering event (UPDATE)
4  Subject table name (PARTS)
5  New transition variable correlation name (N_ROW)
6  Granularity (FOR EACH ROW)
7  Trigger condition (WHEN...)
8  Trigger body (BEGIN ATOMIC...END;)
When you execute this CREATE TRIGGER statement, DB2 creates a trigger package called REORDER and associates the trigger package with table PARTS. DB2 records the timestamp when it creates the trigger. If you define other triggers on the PARTS table, DB2 uses this timestamp to determine which trigger to activate first. The trigger is now ready to use. After DB2 updates columns ON_HAND or MAX_STOCKED in any row of table PARTS, trigger REORDER is activated. The trigger calls a stored procedure called ISSUE_SHIP_REQUEST if, after a row is updated, the quantity of parts on hand is less than 10% of the maximum quantity stocked. In the trigger condition, the qualifier N_ROW represents a value in a modified row after the triggering event. When you no longer want to use trigger REORDER, you can delete the trigger by executing the statement:
DROP TRIGGER REORDER;
Executing this statement drops trigger REORDER and its associated trigger package named REORDER. If you drop table PARTS, DB2 also drops trigger REORDER and its trigger package.
Parts of a trigger
This section gives you the information you need to code each of the trigger parts:
v Trigger name
v Subject table
v Trigger activation time
v Triggering event
v Granularity on page 280
v Transition variables on page 281
v Transition tables on page 282
v Triggered action on page 283
Trigger name
Use an ordinary identifier to name your trigger. You can use a qualifier or let DB2 determine the qualifier. When DB2 creates a trigger package for the trigger, it uses the qualifier for the collection ID of the trigger package. DB2 uses these rules to determine the qualifier:
v If you use static SQL to execute the CREATE TRIGGER statement, DB2 uses the authorization ID in the bind option QUALIFIER for the plan or package that contains the CREATE TRIGGER statement. If the bind command does not include the QUALIFIER option, DB2 uses the owner of the package or plan.
v If you use dynamic SQL to execute the CREATE TRIGGER statement, DB2 uses the authorization ID in special register CURRENT SQLID.
Subject table
When you perform an insert, update, or delete operation on this table, the trigger is activated. You must name a local table in the CREATE TRIGGER statement. You cannot define a trigger on a catalog table or on a view.
Triggering event
Every trigger is associated with an event. A trigger is activated when the triggering event occurs in the subject table. The triggering event is one of the following SQL operations:
v INSERT
v UPDATE
v DELETE
A triggering event can also be an update or delete operation that occurs as the result of a referential constraint with ON DELETE SET NULL or ON DELETE CASCADE.
Triggers are not activated as the result of updates made to tables by DB2 utilities, with the exception of the LOAD utility when it is specified with the RESUME YES and SHRLEVEL CHANGE options. See DB2 Utility Guide and Reference for more information about the LOAD utility. When the triggering event for a trigger is an update operation, the trigger is called an update trigger. Similarly, triggers for insert operations are called insert triggers, and triggers for delete operations are called delete triggers. The SQL statement that performs the triggering SQL operation is called the triggering SQL statement. Each triggering event is associated with one subject table and one SQL operation. The following trigger is defined with an insert triggering event:
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END
If the triggering SQL operation is an update operation, the event can be associated with specific columns of the subject table. In this case, the trigger is activated only if the update operation updates any of the specified columns. The following trigger, PAYROLL1, which invokes a user-defined function named PAYROLL_LOG, is activated only if an update operation is performed on the SALARY or BONUS column of table PAYROLL:
CREATE TRIGGER PAYROLL1
  AFTER UPDATE OF SALARY, BONUS ON PAYROLL
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    VALUES(PAYROLL_LOG(USER, 'UPDATE', CURRENT TIME, CURRENT DATE));
  END
Granularity
The triggering SQL statement might modify multiple rows in the table. The granularity of the trigger determines whether the trigger is activated only once for the triggering SQL statement or once for every row that the SQL statement modifies. The granularity values are:
v FOR EACH ROW
The trigger is activated once for each row that DB2 modifies in the subject table. If the triggering SQL statement modifies no rows, the trigger is not activated. However, if the triggering SQL statement updates a value in a row to the same value, the trigger is activated. For example, if an UPDATE trigger is defined on table COMPANY_STATS, the following SQL statement will activate the trigger.
UPDATE COMPANY_STATS SET NBEMP = NBEMP;
v FOR EACH STATEMENT
The trigger is activated once when the triggering SQL statement executes. The trigger is activated even if the triggering SQL statement modifies no rows.
Triggers with a granularity of FOR EACH ROW are known as row triggers. Triggers with a granularity of FOR EACH STATEMENT are known as statement triggers. Statement triggers can only be after triggers. The following statement is an example of a row trigger:
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END
Trigger NEW_HIRE is activated once for every row inserted into the employee table.
Transition variables
When you code a row trigger, you might need to refer to the values of columns in each updated row of the subject table. To do this, specify transition variables in the REFERENCING clause of your CREATE TRIGGER statement. The two types of transition variables are:
v Old transition variables, specified with the OLD transition-variable clause, capture the values of columns before the triggering SQL statement updates them. You can define old transition variables for update and delete triggers.
v New transition variables, specified with the NEW transition-variable clause, capture the values of columns after the triggering SQL statement updates them. You can define new transition variables for update and insert triggers.
The following example uses transition variables and invocations of the IDENTITY_VAL_LOCAL function to access values that are assigned to identity columns. Suppose that you have created tables T and S, with the following definitions:
CREATE TABLE T
 (ID SMALLINT GENERATED BY DEFAULT AS IDENTITY (START WITH 100),
  C2 SMALLINT,
  C3 SMALLINT,
  C4 SMALLINT);

CREATE TABLE S
 (ID SMALLINT GENERATED ALWAYS AS IDENTITY,
  C1 SMALLINT);
Define a before insert trigger on T that uses the IDENTITY_VAL_LOCAL built-in function to retrieve the current value of identity column ID, and uses transition variables to update the other columns of T with the identity column value.
CREATE TRIGGER TR1
  NO CASCADE BEFORE INSERT ON T
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    SET N.C3 = N.ID;
    SET N.C4 = IDENTITY_VAL_LOCAL();
    SET N.ID = N.C2 * 10;
    SET N.C2 = IDENTITY_VAL_LOCAL();
  END
Suppose that you execute an SQL statement that inserts a row into S with a value of 5 for column C1; DB2 generates the value 1 for identity column ID. Next, suppose that you execute the following SQL statement, which activates trigger TR1:
INSERT INTO T (C2) VALUES (IDENTITY_VAL_LOCAL());
This insert statement, and the subsequent activation of trigger TR1, have the following results:
v The INSERT statement obtains the most recent value that was assigned to an identity column (1), and inserts that value into column C2 of table T. 1 is the value that DB2 inserted into identity column ID of table S.
v When the INSERT statement executes, DB2 inserts the value 100 into identity column ID of table T.
v The first statement in the body of trigger TR1 inserts the value of transition variable N.ID (100) into column C3. N.ID is the value that identity column ID contains after the INSERT statement executes.
v The second statement in the body of trigger TR1 inserts the null value into column C4. By definition, the result of the IDENTITY_VAL_LOCAL function in the triggered action of a before insert trigger is the null value.
v The third statement in the body of trigger TR1 inserts 10 times the value of transition variable N.C2 (10*1) into identity column ID of table T. N.C2 is the value that column C2 contains after the INSERT is executed.
v The fourth statement in the body of trigger TR1 inserts the null value into column C2. By definition, the result of the IDENTITY_VAL_LOCAL function in the triggered action of a before insert trigger is the null value.
Transition tables
If you want to refer to the entire set of rows that a triggering SQL statement modifies, rather than to individual rows, use a transition table. Like transition variables, transition tables can appear in the REFERENCING clause of a CREATE TRIGGER statement. Transition tables are valid for both row triggers and statement triggers. The two types of transition tables are:
v Old transition tables, specified with the OLD TABLE transition-table-name clause, capture the values of columns before the triggering SQL statement updates them. You can define old transition tables for update and delete triggers.
v New transition tables, specified with the NEW TABLE transition-table-name clause, capture the values of columns after the triggering SQL statement updates them. You can define new transition tables for update and insert triggers.
The scope of old and new transition table names is the trigger body. If another table exists that has the same name as a transition table, any unqualified reference to that name in the trigger body points to the transition table. To reference the other table in the trigger body, you must use the fully qualified table name. The following example uses a new transition table to capture the set of rows that are inserted into the INVOICE table:
CREATE TRIGGER LRG_ORDR
  AFTER INSERT ON INVOICE
  REFERENCING NEW TABLE AS N_TABLE
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    SELECT LARGE_ORDER_ALERT(CUST_NO, TOTAL_PRICE, DELIVERY_DATE)
      FROM N_TABLE WHERE TOTAL_PRICE > 10000;
  END
The SELECT statement in LRG_ORDR causes user-defined function LARGE_ORDER_ALERT to execute for each row in transition table N_TABLE that satisfies the WHERE clause (TOTAL_PRICE > 10000).
Triggered action
When a trigger is activated, a triggered action occurs. Every trigger has one triggered action, which consists of a trigger condition and a trigger body.
Trigger condition
If you want the triggered action to occur only when certain conditions are true, code a trigger condition. A trigger condition is similar to a predicate in a SELECT, except that the trigger condition begins with WHEN, rather than WHERE. If you do not include a trigger condition in your triggered action, the trigger body executes every time the trigger is activated. For a row trigger, DB2 evaluates the trigger condition once for each modified row of the subject table. For a statement trigger, DB2 evaluates the trigger condition once for each execution of the triggering SQL statement. If the trigger condition of a before trigger has a fullselect, the fullselect cannot reference the subject table. The following example shows a trigger condition that causes the trigger body to execute only when the number of ordered items is greater than the number of available items:
CREATE TRIGGER CK_AVAIL
  NO CASCADE BEFORE INSERT ON ORDERS
  REFERENCING NEW AS NEW_ORDER
  FOR EACH ROW MODE DB2SQL
  WHEN (NEW_ORDER.QUANTITY >
         (SELECT ON_HAND FROM PARTS
          WHERE NEW_ORDER.PARTNO = PARTS.PARTNO))
  BEGIN ATOMIC
    VALUES(ORDER_ERROR(NEW_ORDER.PARTNO, NEW_ORDER.QUANTITY));
  END
Trigger body
In the trigger body, you code the SQL statements that you want to execute whenever the trigger condition is true. If the trigger body consists of more than one statement, it must begin with BEGIN ATOMIC and end with END. You cannot include host variables or parameter markers in your trigger body. If the trigger body contains a WHERE clause that references transition variables, the comparison operator cannot be LIKE. The statements you can use in a trigger body depend on the activation time of the trigger. Table 27 summarizes which SQL statements you can use in which types of triggers.
Table 27. Valid SQL statements for triggers and trigger activation times

SQL statement             Valid before activation time   Valid after activation time
fullselect                Yes                            Yes
CALL                      Yes                            Yes
SIGNAL SQLSTATE           Yes                            Yes
VALUES                    Yes                            Yes
SET transition-variable   Yes                            No
INSERT                    No                             Yes
DELETE (searched)         No                             Yes
Table 27. Valid SQL statements for triggers and trigger activation times (continued)

SQL statement             Valid before activation time   Valid after activation time
UPDATE (searched)         No                             Yes
The following list provides more detailed information about SQL statements that are valid in triggers:
v fullselect, CALL, and VALUES
Use a fullselect or the VALUES statement in a trigger body to conditionally or unconditionally invoke a user-defined function. Use the CALL statement to invoke a stored procedure. See Invoking stored procedures and user-defined functions from triggers on page 285 for more information on invoking user-defined functions and stored procedures from triggers.
A fullselect in the trigger body of a before trigger cannot reference the subject table.
v SIGNAL SQLSTATE
Use the SIGNAL SQLSTATE statement in the trigger body to report an error condition and back out any changes that are made by the trigger, as well as actions that result from referential constraints on the subject table. When DB2 executes the SIGNAL SQLSTATE statement, it returns an SQLCA to the application with SQLCODE -438. The SQLCA also includes the following values, which you supply in the SIGNAL SQLSTATE statement:
- A 5-character value that DB2 uses as the SQLSTATE
- An error message that DB2 places in the SQLERRMC field
In the following example, the SIGNAL SQLSTATE statement causes DB2 to return an SQLCA with SQLSTATE 75001 and terminate the salary update operation if an employee's salary increase is over 20%:
CREATE TRIGGER SAL_ADJ
  BEFORE UPDATE OF SALARY ON EMP
  REFERENCING OLD AS OLD_EMP
              NEW AS NEW_EMP
  FOR EACH ROW MODE DB2SQL
  WHEN (NEW_EMP.SALARY > (OLD_EMP.SALARY * 1.20))
  BEGIN ATOMIC
    SIGNAL SQLSTATE '75001' ('Invalid Salary Increase - Exceeds 20%');
  END
v SET transition-variable
Because before triggers operate on rows of a table before those rows are modified, you cannot perform operations in the body of a before trigger that directly modify the subject table. You can, however, use the SET transition-variable statement to modify the values in a row before those values go into the table. For example, this trigger uses a new transition variable to fill in today's date for the new employee's hire date:
CREATE TRIGGER HIREDATE
  NO CASCADE BEFORE INSERT ON EMP
  REFERENCING NEW AS NEW_VAR
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    SET NEW_VAR.HIRE_DATE = CURRENT_DATE;
  END
Because you can include INSERT, DELETE (searched), and UPDATE (searched) statements in your trigger body, execution of the trigger body might cause activation of other triggers. See Trigger cascading on page 286 for more information.
If any SQL statement in the trigger body fails during trigger execution, DB2 rolls back all changes that are made by the triggering SQL statement and the triggered SQL statements. However, if the trigger body executes actions that are outside of DB2's control or are not under the same commit coordination as the DB2 subsystem in which the trigger executes, DB2 cannot undo those actions. Examples of external actions that are not under DB2's control are:
v Performing updates that are not under RRS commit control
v Sending an electronic mail message
If the trigger executes external actions that are under the same commit coordination as the DB2 subsystem under which the trigger executes, and an error occurs during trigger execution, DB2 places the application process that issued the triggering statement in a must-rollback state. The application must then execute a rollback operation to roll back those external actions. Examples of external actions that are under the same commit coordination as the triggering SQL operation are:
v Executing a distributed update operation
v From a user-defined function or stored procedure, executing an external action that affects an external resource manager that is under RRS commit control.
Use the VALUES statement to execute a function unconditionally; that is, once for each execution of a statement trigger or once for each row in a row trigger. In this
example, user-defined function PAYROLL_LOG executes every time an update operation occurs that activates trigger PAYROLL1:
CREATE TRIGGER PAYROLL1
  AFTER UPDATE ON PAYROLL
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    VALUES(PAYROLL_LOG(USER, 'UPDATE', CURRENT TIME, CURRENT DATE));
  END
To invoke a stored procedure from a trigger, use a CALL statement. The parameters of this stored procedure call must be literals, transition variables, table locators, or expressions.
Trigger cascading
An SQL operation that a trigger performs might modify the subject table or other tables with triggers, so DB2 also activates those triggers. A trigger that is activated as the result of another trigger can be activated at the same level as the original trigger or at a different level. Two triggers, A and B, are activated at different levels if trigger B is activated after trigger A is activated and completes before trigger A completes. If trigger B is activated after trigger A is activated and completes after trigger A completes, then the triggers are at the same level.
For example, in these cases, trigger A and trigger B are activated at the same level:
v Table X has two triggers that are defined on it, A and B. A is a before trigger and B is an after trigger. An update to table X causes both trigger A and trigger B to activate.
v Trigger A updates table X, which has a referential constraint with table Y, which has trigger B defined on it. The referential constraint causes table Y to be updated, which activates trigger B.
In these cases, trigger A and trigger B are activated at different levels:
v Trigger A is defined on table X, and trigger B is defined on table Y. Trigger B is an update trigger. An update to table X activates trigger A, which contains an UPDATE statement on table Y in its trigger body. This UPDATE statement activates trigger B.
v Trigger A calls a stored procedure. The stored procedure contains an INSERT statement for table X, which has insert trigger B defined on it. When the INSERT statement on table X executes, trigger B is activated.
When triggers are activated at different levels, it is called trigger cascading. Trigger cascading can occur only for after triggers because DB2 does not support cascading of before triggers.
To prevent the possibility of endless trigger cascading, DB2 supports only 16 levels of cascading of triggers, stored procedures, and user-defined functions. If a trigger, user-defined function, or stored procedure at the 17th level is activated, DB2 returns SQLCODE -724 and backs out all SQL changes in the 16 levels of cascading. However, as with any other SQL error that occurs during trigger execution, if any action occurs that is outside the control of DB2, that action is not backed out.
You can write a monitor program that issues IFI READS requests to collect DB2 trace information about the levels of cascading of triggers, user-defined functions, and stored procedures in your programs. See Appendixes (Volume 2) of DB2 Administration Guide for information on how to write a monitor program.
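The definitions of triggers NEWHIRE1 and NEWHIRE2 are not part of this excerpt. Definitions consistent with the discussion that follows would be two after insert triggers on EMP, created in this order; the column NBHIRED in the second body is purely illustrative:

CREATE TRIGGER NEWHIRE1
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END

CREATE TRIGGER NEWHIRE2
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBHIRED = NBHIRED + 1;
  END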
When an insert operation occurs on table EMP, DB2 activates NEWHIRE1 first because NEWHIRE1 was created first. Now suppose that someone drops and recreates NEWHIRE1. NEWHIRE1 now has a later timestamp than NEWHIRE2, so the next time an insert operation occurs on EMP, NEWHIRE2 is activated before NEWHIRE1. If two row triggers are defined for the same action, the trigger that was created earlier is activated first for all affected rows. Then the second trigger is activated for all affected rows. In the previous example, suppose that an INSERT statement with a fullselect inserts 10 rows into table EMP. NEWHIRE1 is activated for all 10 rows, then NEWHIRE2 is activated for all 10 rows.
Each after row trigger executes the triggered action once for each row in M1. If M1 is empty, the triggered action does not execute. Each after statement trigger executes the triggered action once for each execution of S1, even if M1 is empty.
If any triggered actions contain SQL insert, update, or delete operations, DB2 repeats steps 1 through 5 for each operation. If an error occurs when the triggered action executes, or if a triggered action is at the 17th level of trigger cascading, DB2 rolls back all changes that are made in step 5 and all previous steps.
For example, table DEPT is a parent table of EMP, with these conditions:
v The DEPTNO column of DEPT is the primary key.
v The WORKDEPT column of EMP is the foreign key.
v The constraint is ON DELETE SET NULL.
Suppose the following trigger is defined on EMP:
CREATE TRIGGER EMPRAISE
  AFTER UPDATE ON EMP
  REFERENCING NEW TABLE AS NEWEMPS
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    VALUES(CHECKEMP(TABLE NEWEMPS));
  END
Also suppose that an SQL statement deletes the row with department number E21 from DEPT. Because of the constraint, DB2 finds the rows in EMP with a WORKDEPT value of E21 and sets WORKDEPT in those rows to null. This is equivalent to an update operation on EMP, which has update trigger EMPRAISE. Therefore, because EMPRAISE is an after trigger, EMPRAISE is activated after the constraint action sets WORKDEPT values to null.
Interactions between triggers and tables that have multilevel security with row-level granularity
If a subject table has a security label column, the column in the transition table or transition variable that corresponds to the security label column in the subject table does not inherit the security label attribute. This means that the multilevel security check with row-level granularity is not enforced for the transition table or the transition variable. If you add a security label column to a subject table using the ALTER TABLE statement, the rules are the same as when you add any column to a subject table because the column in the transition table or the transition variable that corresponds to the security label column does not inherit the security label attribute.
If the ID you are using does not have write-down privilege and you execute an INSERT or UPDATE statement, the security label value of your ID is assigned to the security label column for the rows that you are inserting or updating.
When a BEFORE trigger is activated, the value of the transition variable that corresponds to the security label column is the security label of the ID if either of the following conditions is true:
v The user does not have write-down privilege
v The value for the security label column is not specified
If the user does not have write-down privilege, and the trigger changes the transition variable that corresponds to the security label column, the value of the security label column is changed back to the security label value of the user before the row is written to the page. Refer to Part 3 (Volume 1) of DB2 Administration Guide for a discussion about multilevel security with row-level granularity.
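The definition of trigger TR1 that the next example relies on is not included in this excerpt. Based on the behavior described below (an update of T1 causes the row with value 2 to be deleted from T2), a consistent definition would look something like this sketch:

CREATE TRIGGER TR1
  AFTER UPDATE ON T1
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    DELETE FROM T2 WHERE B1 = 2;
  END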
Now suppose that an application executes the following statements to perform a positioned update operation:
EXEC SQL BEGIN DECLARE SECTION;
  long hv1;
EXEC SQL END DECLARE SECTION;
...
EXEC SQL DECLARE C1 CURSOR FOR
  SELECT A1 FROM T1
  WHERE A1 IN (SELECT B1 FROM T2)
  FOR UPDATE OF A1;
...
EXEC SQL OPEN C1;
...
while(SQLCODE>=0 && SQLCODE!=100)
{
  EXEC SQL FETCH C1 INTO :hv1;
  EXEC SQL UPDATE T1 SET A1=5 WHERE CURRENT OF C1;
}
When DB2 executes the FETCH statement that positions cursor C1 for the first time, DB2 evaluates the subselect, SELECT B1 FROM T2, to produce a result table that contains the two values of column B1 of T2:
1
2
When DB2 executes the positioned UPDATE statement for the first time, trigger TR1 is activated. When the body of trigger TR1 executes, the row with value 2 is deleted from T2. However, because SELECT B1 FROM T2 is evaluated only once, when the FETCH statement is executed again, DB2 finds the second row of T1, even though the second row of T2 was deleted. The FETCH statement positions the cursor to the second row of T1, and the second row of T1 is updated. The update operation causes the trigger to be activated again, which causes DB2 to attempt to delete the second row of T2, even though that row was already deleted. To avoid processing of the second row after it should have been deleted, use a correlated subquery in the cursor declaration:
DECLARE C1 CURSOR FOR
  SELECT A1 FROM T1 X
  WHERE EXISTS (SELECT B1 FROM T2 WHERE X.A1 = B1)
  FOR UPDATE OF A1;
In this case, the subquery, SELECT B1 FROM T2 WHERE X.A1 = B1, is evaluated for each FETCH statement. The first time that the FETCH statement executes, it positions the cursor to the first row of T1. The positioned UPDATE operation activates the trigger, which deletes the second row of T2. Therefore, when the FETCH statement executes again, no row is selected, so no update operation or triggered action occurs. Example: Effect of row processing order on a triggered action: The following example shows how the order of processing rows can change the outcome of an after row trigger. Suppose that tables T1, T2, and T3 look like this:
Table T1   Table T2   Table T3
A1         B1         C1
==         ==         ==
1          (empty)    (empty)
2
The contents of tables T2 and T3 after the UPDATE statement executes depend on the order in which DB2 updates the rows of T1. If DB2 updates the first row of T1 first, after the UPDATE statement and the trigger execute for the first time, the values in the three tables are:
Table T1   Table T2   Table T3
A1         B1         C1
==         ==         ==
2          2          2
2
After the second row of T1 is updated, the values in the three tables are:
Table T1   Table T2   Table T3
A1         B1         C1
==         ==         ==
2          2          2
3          3          2
                      3
However, if DB2 updates the second row of T1 first, after the UPDATE statement and the trigger execute for the first time, the values in the three tables are:
Table T1   Table T2   Table T3
A1         B1         C1
==         ==         ==
1          3          3
3
After the first row of T1 is updated, the values in the three tables are:
Table T1   Table T2   Table T3
A1         B1         C1
==         ==         ==
2          3          3
3          2          3
                      2
Introduction to LOBs
Working with LOBs involves defining the LOBs to DB2, moving the LOB data into DB2 tables, then using SQL operations to manipulate the data. This chapter concentrates on manipulating LOB data using SQL statements. For information on defining LOBs to DB2, see Chapter 5 of DB2 SQL Reference. For information on how DB2 utilities manipulate LOB data, see Part 2 of DB2 Utility Guide and Reference.
These are the basic steps for defining LOBs and moving the data into DB2:
1. Define a column of the appropriate LOB type and optionally a row identifier (ROWID) column in a DB2 table. Define only one ROWID column, even if there are multiple LOB columns in the table. If you do not create a ROWID column before you define a LOB column, DB2 creates a hidden ROWID column and appends it as the last column of the table. For information about what hidden ROWID columns are, see the description on page 298.
The LOB column holds information about the LOB, not the LOB data itself. The table that contains the LOB information is called the base table. DB2 uses the ROWID column to locate your LOB data. You can define the LOB column (and optionally the ROWID column) in a CREATE TABLE or ALTER TABLE statement. You can add both a LOB column and a ROWID column to an existing table by using two ALTER TABLE statements: add the ROWID column with the first ALTER TABLE statement and the LOB column with the second. If you add a LOB column first, DB2 generates a hidden ROWID column.
If you add a ROWID column after you add a LOB column, the table has two ROWID columns: the implicitly-created, hidden, column and the explicitly-created column. In this case, DB2 ensures that the values of the two ROWID columns are always identical.
2. Create a table space and table to hold the LOB data.
The table space and table are called a LOB table space and an auxiliary table. If your base table is nonpartitioned, you must create one LOB table space and one auxiliary table for each LOB column. If your base table is partitioned, for each LOB column, you must create one LOB table space and one auxiliary table for each partition. For example, if your base table has three partitions, you must create three LOB table spaces and three auxiliary tables for each LOB column. Create these objects using the CREATE LOB TABLESPACE and CREATE AUXILIARY TABLE statements.
3. Create an index on the auxiliary table. Each auxiliary table must have exactly one index. Use CREATE INDEX for this task.
4. Put the LOB data into DB2. If the total length of a LOB column and the base table row is less than 32 KB, you can use the LOAD utility to put the data in DB2. Otherwise, you must use INSERT or UPDATE statements. Even though the data is stored in the auxiliary table, the LOAD utility statement or INSERT statement specifies the base table. Using INSERT can be difficult because your application needs enough storage to hold the entire value that goes into the LOB column.
Hidden ROWID column: If you do not create a ROWID column before you define a LOB column, DB2 creates a hidden ROWID column for you. A hidden ROWID column is not visible in the results of SELECT * statements, including those in DESCRIBE and CREATE VIEW statements. However, it is visible to all statements that refer to the column directly. DB2 assigns the GENERATED ALWAYS attribute and the name DB2_GENERATED_ROWID_FOR_LOBSnn to a hidden ROWID column. DB2 appends the identifier nn only if the column name already exists in the table. If so, DB2 appends 00 and increments by 1 until the name is unique within the row.
Example: Adding a CLOB column: Suppose that you want to add a resume for each employee to the employee table. Employee resumes are no more than 5 MB in size. The employee resumes contain single-byte characters, so you can define the resumes to DB2 as CLOBs. You therefore need to add a column of data type CLOB with a length of 5 MB to the employee table. If you want to define a ROWID column explicitly, you must define it before you define the CLOB column. Execute an ALTER TABLE statement to add the ROWID column, and then execute another ALTER TABLE statement to add the CLOB column. Use statements like this:
ALTER TABLE EMP
  ADD ROW_ID ROWID NOT NULL GENERATED ALWAYS;
COMMIT;
ALTER TABLE EMP
  ADD EMP_RESUME CLOB(5M);
COMMIT;
Next, you need to define a LOB table space and an auxiliary table to hold the employee resumes. You also need to define an index on the auxiliary table. You must define the LOB table space in the same database as the associated base table. You can use statements like this:
CREATE LOB TABLESPACE RESUMETS
  IN DSN8D81A
  LOG NO;
COMMIT;
CREATE AUXILIARY TABLE EMP_RESUME_TAB
  IN DSN8D81A.RESUMETS
  STORES DSN8810.EMP
  COLUMN EMP_RESUME;
CREATE UNIQUE INDEX XEMP_RESUME
  ON EMP_RESUME_TAB;
COMMIT;
If the value of bind option SQLRULES is STD, or if special register CURRENT RULES has been set in the program and has the value STD, DB2 creates the LOB table space, auxiliary table, and auxiliary index for you when you execute the ALTER statement to add the LOB column. Now that your DB2 objects for the LOB data are defined, you can load your employee resumes into DB2. To do this in an SQL application, you can define a host variable to hold the resume, copy the resume data from a file into the host variable, and then execute an UPDATE statement to copy the data into DB2. Although the data goes into the auxiliary table, your UPDATE statement specifies the name of the base table. The C language declaration of the host variable might be:
SQL TYPE is CLOB (5M) resumedata;
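The UPDATE statement itself is not shown in this excerpt; given the base table, column, and host variables described here, it would look something like the following (the exact form in the original example may differ):

EXEC SQL
  UPDATE EMP
  SET EMP_RESUME = :resumedata
  WHERE EMPNO = :employeenum;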
In this example, employeenum is a host variable that identifies the employee who is associated with a resume. After your LOB data is in DB2, you can write SQL applications to manipulate the data. You can use most SQL statements with LOBs. For example, you can use statements like these to extract information about an employee's department from the resume:
EXEC SQL BEGIN DECLARE SECTION;
  char employeenum[6];
  long deptInfoBeginLoc;
  long deptInfoEndLoc;
  SQL TYPE IS CLOB_LOCATOR resume;
  SQL TYPE IS CLOB_LOCATOR deptBuffer;
EXEC SQL END DECLARE SECTION;
...
EXEC SQL DECLARE C1 CURSOR FOR
  SELECT EMPNO, EMP_RESUME FROM EMP;
...
EXEC SQL FETCH C1 INTO :employeenum, :resume;
...
EXEC SQL SET :deptInfoBeginLoc =
  POSSTR(:resume, 'Department Information');
EXEC SQL SET :deptInfoEndLoc =
  POSSTR(:resume, 'Education');
EXEC SQL SET :deptBuffer =
  SUBSTR(:resume, :deptInfoBeginLoc,
         :deptInfoEndLoc - :deptInfoBeginLoc);
These statements use host variables of data type large object locator (LOB locator). LOB locators let you manipulate LOB data without moving the LOB data into host variables. By using LOB locators, you need much smaller amounts of memory for your programs. LOB locators are discussed in Using LOB locators to save storage on page 305.
Sample LOB applications: Table 28 lists the sample programs that DB2 provides to assist you in writing applications to manipulate LOB data. All programs reside in data set DSN810.SDSNSAMP.
Table 28. LOB samples shipped with DB2

Member that contains source code   Language   Function
DSNTEJ7                            JCL        Demonstrates how to create a table with LOB columns, an auxiliary table, and an auxiliary index. Also demonstrates how to load LOB data that is 32 KB or less into a LOB table space.
DSN8DLPL                           C          Demonstrates the use of LOB locators and UPDATE statements to move binary data into a column of type BLOB.
DSN8DLRV                           C          Demonstrates how to use a locator to manipulate data of type CLOB.
DSNTEP2                            PL/I       Demonstrates how to allocate an SQLDA for rows that include LOB data and use that SQLDA to describe an input statement and fetch data from LOB columns.
For instructions on how to prepare and run the sample LOB applications, see Part 2 of DB2 Installation Guide.
LOB host variables that are referenced only by an SQL statement that uses a DESCRIPTOR should use the same form as declared by the precompiler. In this form, the LOB host-variable-array consists of a 31-bit length, followed by the data, followed by another 31-bit length, followed by the data, and so on. The 31-bit length must be fullword aligned.
Example: Suppose that you want to allocate a LOB array of 10 elements, each with a length of 5 bytes. You need to allocate the following bytes for each element, for a total of 120 bytes:
v 4 bytes for the 31-bit integer
v 5 bytes for the data
v 3 bytes to force fullword alignment
The following examples show you how to declare LOB host variables in each supported language. In each table, the left column contains the declaration that you code in your application program. The right column contains the declaration that DB2 generates.
Declarations of LOB host variables in assembler: Table 29 shows assembler language declarations for some typical LOB types.
Table 29. Examples of assembler LOB variable declarations

You declare this variable:
  clob_var SQL TYPE IS CLOB(40000K)
DB2 generates this variable:
  clob_var        DS  0FL4
  clob_var_length DS  FL4
  clob_var_data   DS  CL65535   (Note 1)
                  ORG clob_var_data+(40960000-65535)

You declare this variable:
  dbclob_var SQL TYPE IS DBCLOB(4000K)
DB2 generates this variable:
  dbclob_var        DS  0FL4
  dbclob_var_length DS  FL4
  dbclob_var_data   DS  GL65534   (Note 2)
                    ORG dbclob_var_data+(8192000-65534)

You declare this variable:
  blob_var SQL TYPE IS BLOB(1M)
DB2 generates this variable:
  blob_var        DS  0FL4
  blob_var_length DS  FL4
  blob_var_data   DS  CL65535   (Note 1)
                  ORG blob_var_data+(1048576-65535)

You declare this variable:
  clob_loc SQL TYPE IS CLOB_LOCATOR
DB2 generates this variable:
  clob_loc DS FL4

You declare this variable:
  dbclob_loc SQL TYPE IS DBCLOB_LOCATOR
DB2 generates this variable:
  dbclob_loc DS FL4

You declare this variable:
  blob_loc SQL TYPE IS BLOB_LOCATOR
DB2 generates this variable:
  blob_loc DS FL4

Notes:
1. Because assembler language allows character declarations of no more than 65535 bytes, DB2 separates the host language declarations for BLOB and CLOB host variables that are longer than 65535 bytes into two parts.
2. Because assembler language allows graphic declarations of no more than 65534 bytes, DB2 separates the host language declarations for DBCLOB host variables that are longer than 65534 bytes into two parts.
Declarations of LOB host variables in C: The following table shows C and C++ language declarations for some typical LOB types.
Table 30. Examples of C language variable declarations

You declare this variable:
  SQL TYPE IS BLOB (1M) blob_var;
DB2 generates this variable:
  struct {
    unsigned long length;
    char data[1048576];
  } blob_var;

You declare this variable:
  SQL TYPE IS CLOB (400K) clob_var;
DB2 generates this variable:
  struct {
    unsigned long length;
    char data[409600];
  } clob_var;

You declare this variable:
  SQL TYPE IS DBCLOB (4000K) dbclob_var;
DB2 generates this variable:
  struct {
    unsigned long length;
    sqldbchar data[4096000];
  } dbclob_var;

You declare this variable:
  SQL TYPE IS BLOB_LOCATOR blob_loc;
DB2 generates this variable:
  unsigned long blob_loc;

You declare this variable:
  SQL TYPE IS CLOB_LOCATOR clob_loc;
DB2 generates this variable:
  unsigned long clob_loc;

You declare this variable:
  SQL TYPE IS DBCLOB_LOCATOR dbclob_loc;
DB2 generates this variable:
  unsigned long dbclob_loc;
Declarations of LOB host variables in COBOL: The declarations that are generated for COBOL depend on whether you use the DB2 precompiler or the DB2 coprocessor. The following table shows COBOL declarations that the DB2 precompiler generates for some typical LOB types. The declarations that the DB2 coprocessor generates might be different.
Table 31. Examples of COBOL variable declarations by the DB2 precompiler

You declare this variable:
  01 BLOB-VAR USAGE IS SQL TYPE IS BLOB(1M).
DB2 precompiler generates this variable:
  01 BLOB-VAR.
     02 BLOB-VAR-LENGTH PIC 9(9) COMP.
     02 BLOB-VAR-DATA.
        49 FILLER PIC X(32767).   (Note 1)
        49 FILLER PIC X(32767).
           ... Repeat 30 times ...
        49 FILLER PIC X(1048576-32*32767).

You declare this variable:
  01 CLOB-VAR USAGE IS SQL TYPE IS CLOB(40000K).
DB2 precompiler generates this variable:
  01 CLOB-VAR.
     02 CLOB-VAR-LENGTH PIC 9(9) COMP.
     02 CLOB-VAR-DATA.
        49 FILLER PIC X(32767).   (Note 1)
        49 FILLER PIC X(32767).
           ... Repeat 1248 times ...
        49 FILLER PIC X(40960000-1250*32767).
Table 31. Examples of COBOL variable declarations by the DB2 precompiler (continued)

You declare this variable:
  01 DBCLOB-VAR USAGE IS SQL TYPE IS DBCLOB(4000K).
DB2 precompiler generates this variable:
  01 DBCLOB-VAR.
     02 DBCLOB-VAR-LENGTH PIC 9(9) COMP.
     02 DBCLOB-VAR-DATA.
        49 FILLER PIC G(32767) USAGE DISPLAY-1.   (Note 2)
        49 FILLER PIC G(32767) USAGE DISPLAY-1.
           ... Repeat 123 times ...
        49 FILLER PIC G(20480000-125*32767) USAGE DISPLAY-1.

You declare this variable:
  01 BLOB-LOC USAGE IS SQL TYPE IS BLOB-LOCATOR.
DB2 precompiler generates this variable:
  01 BLOB-LOC PIC S9(9) USAGE IS BINARY.

You declare this variable:
  01 CLOB-LOC USAGE IS SQL TYPE IS CLOB-LOCATOR.
DB2 precompiler generates this variable:
  01 CLOB-LOC PIC S9(9) USAGE IS BINARY.

You declare this variable:
  01 DBCLOB-LOC USAGE IS SQL TYPE IS DBCLOB-LOCATOR.
DB2 precompiler generates this variable:
  01 DBCLOB-LOC PIC S9(9) USAGE IS BINARY.

Notes:
1. Because the COBOL language allows character declarations of no more than 32767 bytes, for BLOB or CLOB host variables that are greater than 32767 bytes in length, DB2 creates multiple host language declarations of 32767 or fewer bytes.
2. Because the COBOL language allows graphic declarations of no more than 32767 double-byte characters, for DBCLOB host variables that are greater than 32767 double-byte characters in length, DB2 creates multiple host language declarations of 32767 or fewer double-byte characters.
Declarations of LOB host variables in Fortran: Table 32 shows Fortran declarations for some typical LOB types.
Table 32. Examples of Fortran variable declarations

You declare this variable:
  SQL TYPE IS BLOB(1M) blob_var
DB2 generates this variable:
  CHARACTER blob_var(1048580)
  INTEGER*4 blob_var_LENGTH
  CHARACTER blob_var_DATA
  EQUIVALENCE( blob_var(1),
 +             blob_var_LENGTH )
  EQUIVALENCE( blob_var(5),
 +             blob_var_DATA )

You declare this variable:
  SQL TYPE IS CLOB(4000K) clob_var
DB2 generates this variable:
  CHARACTER clob_var(4096004)
  INTEGER*4 clob_var_length
  CHARACTER clob_var_data
  EQUIVALENCE( clob_var(1),
 +             clob_var_length )
  EQUIVALENCE( clob_var(5),
 +             clob_var_data )

You declare this variable:
  SQL TYPE IS BLOB_LOCATOR blob_loc
DB2 generates this variable:
  INTEGER*4 blob_loc

You declare this variable:
  SQL TYPE IS CLOB_LOCATOR clob_loc
DB2 generates this variable:
  INTEGER*4 clob_loc
Declarations of LOB host variables in PL/I: The declarations that are generated for PL/I depend on whether you use the DB2 precompiler or the DB2 coprocessor. The following table shows PL/I declarations that the DB2 precompiler generates
for some typical LOB types. The declarations that the DB2 coprocessor generates might be different.
Table 33. Examples of PL/I variable declarations by the DB2 precompiler

You declare this variable:
  DCL BLOB_VAR SQL TYPE IS BLOB (1M);
DB2 precompiler generates this variable:
  DCL 1 BLOB_VAR,
        2 BLOB_VAR_LENGTH FIXED BINARY(31),
        2 BLOB_VAR_DATA,   (Note 1)
          3 BLOB_VAR_DATA1(32) CHARACTER(32767),
          3 BLOB_VAR_DATA2 CHARACTER(1048576-32*32767);

You declare this variable:
  DCL CLOB_VAR SQL TYPE IS CLOB (40000K);
DB2 precompiler generates this variable:
  DCL 1 CLOB_VAR,
        2 CLOB_VAR_LENGTH FIXED BINARY(31),
        2 CLOB_VAR_DATA,   (Note 1)
          3 CLOB_VAR_DATA1(1250) CHARACTER(32767),
          3 CLOB_VAR_DATA2 CHARACTER(40960000-1250*32767);

You declare this variable:
  DCL DBCLOB_VAR SQL TYPE IS DBCLOB (4000K);
DB2 precompiler generates this variable:
  DCL 1 DBCLOB_VAR,
        2 DBCLOB_VAR_LENGTH FIXED BINARY(31),
        2 DBCLOB_VAR_DATA,   (Note 2)
          3 DBCLOB_VAR_DATA1(250) GRAPHIC(16383),
          3 DBCLOB_VAR_DATA2 GRAPHIC(4096000-250*16383);

You declare this variable:
  DCL blob_loc SQL TYPE IS BLOB_LOCATOR;
DB2 precompiler generates this variable:
  DCL blob_loc FIXED BINARY(31);

You declare this variable:
  DCL clob_loc SQL TYPE IS CLOB_LOCATOR;
DB2 precompiler generates this variable:
  DCL clob_loc FIXED BINARY(31);

You declare this variable:
  DCL dbclob_loc SQL TYPE IS DBCLOB_LOCATOR;
DB2 precompiler generates this variable:
  DCL dbclob_loc FIXED BINARY(31);

Notes:
1. For BLOB or CLOB host variables that are greater than 32767 bytes in length, DB2 creates PL/I host language declarations in the following way:
v If the length of the LOB is greater than 32767 bytes and evenly divisible by 32767, DB2 creates an array of 32767-byte strings. The dimension of the array is length/32767.
v If the length of the LOB is greater than 32767 bytes but not evenly divisible by 32767, DB2 creates two declarations: The first is an array of 32767-byte strings, where the dimension of the array, n, is length/32767. The second is a character string of length length-n*32767.
2. For DBCLOB host variables that are greater than 16383 double-byte characters in length, DB2 creates PL/I host language declarations in the following way:
v If the length of the LOB is greater than 16383 characters and evenly divisible by 16383, DB2 creates an array of 16383-character strings. The dimension of the array is length/16383.
v If the length of the LOB is greater than 16383 characters but not evenly divisible by 16383, DB2 creates two declarations: The first is an array of 16383-character strings, where the dimension of the array, m, is length/16383. The second is a graphic string of length length-m*16383.
LOB materialization
LOB materialization means that DB2 places a LOB value into contiguous storage. Because LOB values can be very large, DB2 avoids materializing LOB data until absolutely necessary. However, DB2 must materialize LOBs when your application program:
v Calls a user-defined function with a LOB as an argument
v Moves a LOB into or out of a stored procedure
v Assigns a LOB host variable to a LOB locator host variable
v Converts a LOB from one CCSID to another
The amount of storage that is used for LOB materialization depends on a number of factors including:
v The size of the LOBs
v The number of LOBs that need to be materialized in a statement
DB2 loads LOBs into virtual pools above the bar. If insufficient space is available for LOB materialization, your application receives SQLCODE -904. Although you cannot completely avoid LOB materialization, you can minimize it by using LOB locators, rather than LOB host variables in your application programs. See Using LOB locators to save storage for information on how to use LOB locators.
EXEC SQL INCLUDE SQLCA;

/**************************/
/* Declare host variables */                                      1
/**************************/
EXEC SQL BEGIN DECLARE SECTION;
  char userid[9];
  char passwd[19];
  long HV_START_DEPTINFO;
  long HV_START_EDUC;
  long HV_RETURN_CODE;
  SQL TYPE IS CLOB_LOCATOR HV_NEW_SECTION_LOCATOR;
  SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR1;
  SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR2;
  SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR3;
EXEC SQL END DECLARE SECTION;

/*************************************************/
/* Delete any instance of "A00130" from previous */
/* executions of this sample                     */
/*************************************************/
EXEC SQL DELETE FROM EMP_RESUME WHERE EMPNO = 'A00130';

/*************************************************/
/* Use a single row select to get the document   */         2
/*************************************************/
EXEC SQL SELECT RESUME INTO :HV_DOC_LOCATOR1
  FROM EMP_RESUME
  WHERE EMPNO = '000130'
  AND RESUME_FORMAT = 'ascii';

/*****************************************************/
/* Use the POSSTR function to locate the start of    */
/* sections "Department Information" and "Education" */     3
/*****************************************************/
EXEC SQL SET :HV_START_DEPTINFO =
  POSSTR(:HV_DOC_LOCATOR1, 'Department Information');
EXEC SQL SET :HV_START_EDUC =
  POSSTR(:HV_DOC_LOCATOR1, 'Education');

Figure 122. Example of deferring evaluation of LOB expressions (Part 1 of 2)
/*******************************************************/
/* Replace Department Information section with nothing */
/*******************************************************/
EXEC SQL SET :HV_DOC_LOCATOR2 =
  SUBSTR(:HV_DOC_LOCATOR1, 1, :HV_START_DEPTINFO - 1) ||
  SUBSTR(:HV_DOC_LOCATOR1, :HV_START_EDUC);

/*******************************************************/
/* Associate a new locator with the Department         */
/* Information section                                 */
/*******************************************************/
EXEC SQL SET :HV_NEW_SECTION_LOCATOR =
  SUBSTR(:HV_DOC_LOCATOR1, :HV_START_DEPTINFO,
         :HV_START_EDUC - :HV_START_DEPTINFO);

/*******************************************************/
/* Append the Department Information to the end        */
/* of the resume                                       */
/*******************************************************/
EXEC SQL SET :HV_DOC_LOCATOR3 =
  :HV_DOC_LOCATOR2 || :HV_NEW_SECTION_LOCATOR;

/*******************************************************/
/* Store the modified resume in the table. This is     */   4
/* where the LOB data really moves.                    */
/*******************************************************/
EXEC SQL INSERT INTO EMP_RESUME
  VALUES ('A00130', 'ascii', :HV_DOC_LOCATOR3, DEFAULT);

/*********************/
/* Free the locators */                                      5
/*********************/
EXEC SQL FREE LOCATOR :HV_DOC_LOCATOR1, :HV_DOC_LOCATOR2, :HV_DOC_LOCATOR3;

Figure 122. Example of deferring evaluation of LOB expressions (Part 2 of 2)
Notes:
1. Declare the LOB locators here.
2. This SELECT statement associates LOB locator HV_DOC_LOCATOR1 with the value of column RESUME for employee number 000130.
3. The next five SQL statements use LOB locators to manipulate the resume data without moving the data.
4. Evaluation of the LOB expressions in the previous statements has been deferred until execution of this INSERT statement.
5. Free all LOB locators to release them from their associated values.
By using these tables, you can obtain the same result as you would with a VALUES INTO or SET statement. Example: Suppose that the encoding scheme of the following statement is EBCDIC:
SET :unicode_hv = SUBSTR(:Unicode_lob_locator,X,Y);
DB2 must materialize the LOB that is specified by :Unicode_lob_locator and convert that entire LOB to EBCDIC before executing the statement. To avoid materialization and conversion, you can execute the following statement, which produces the same result but is processed by the Unicode encoding scheme of the table:
SELECT SUBSTR(:Unicode_lob_locator,X,Y) INTO :unicode_hv FROM SYSIBM.SYSDUMMYU;
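Putting the pieces together, the following embedded SQL fragment in C is a minimal sketch of this technique. The host variable declarations, the declared lengths, and the substring positions are illustrative assumptions; only the final SELECT from SYSIBM.SYSDUMMYU reflects the statement shown above.

EXEC SQL BEGIN DECLARE SECTION;
  SQL TYPE IS CLOB_LOCATOR Unicode_lob_locator; /* locator for the Unicode CLOB */
  SQL TYPE IS CLOB(200) unicode_hv;             /* receives the substring       */
EXEC SQL END DECLARE SECTION;

/* Evaluate the SUBSTR expression through SYSIBM.SYSDUMMYU, a Unicode  */
/* dummy table, so that the locator value is not materialized and      */
/* converted to the EBCDIC application encoding scheme.                */
EXEC SQL SELECT SUBSTR(:Unicode_lob_locator,1,200)
  INTO :unicode_hv
  FROM SYSIBM.SYSDUMMYU;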
v Writing and preparing the user-defined function
  This step is necessary only for an external user-defined function. The person who performs this step is called the user-defined function implementer.
v Defining the user-defined function to DB2
  The person who performs this step is called the user-defined function definer.
v Invoking the user-defined function from an SQL application
  The person who performs this step is called the user-defined function invoker.
The user-defined function definer executes this CREATE FUNCTION statement to register CALC_BONUS to DB2:
CREATE FUNCTION CALC_BONUS(DECIMAL(9,2),DECIMAL(9,2))
  RETURNS DECIMAL(9,2)
  EXTERNAL NAME 'CBONUS'
  PARAMETER STYLE SQL
  LANGUAGE COBOL;
The definer then grants execute authority on CALC_BONUS to all invokers. User-defined function invokers write and prepare application programs that invoke CALC_BONUS. An invoker might write a statement like this, which uses the user-defined function to update the BONUS field in the employee table:
UPDATE EMP SET BONUS = CALC_BONUS(SALARY,COMM);
Member that contains source code    Purpose
DSN8DUAD                            Converts the current date to a user-specified format
DSN8DUCD                            Converts a date from one format to another
DSN8DUAT                            Converts the current time to a user-specified format
DSN8DUCT                            Converts a time from one format to another
DSN8EUDN                            Returns the day of the week for a user-specified date
DSN8EUMN                            Returns the month for a user-specified date
DSN8DUCY                            Formats a floating-point number as a currency value
DSN8DUTI                            Returns the unqualified table name for a table, view, or alias
DSN8DUTI                            Returns the qualifier for a table, view, or alias
DSN8DUTI                            Returns the location for a table, view, or alias
DSN8DUWF                            Returns a table of weather information from an EBCDIC data set

Notes:
1. This version of ALTDATE has one input parameter, of type VARCHAR(13).
2. This version of ALTDATE has three input parameters, of type VARCHAR(17), VARCHAR(13), and VARCHAR(13).
3. This version of ALTTIME has one input parameter, of type VARCHAR(14).
4. This version of ALTTIME has three input parameters, of type VARCHAR(11), VARCHAR(14), and VARCHAR(14).
Member DSN8DUWC contains a client program that shows you how to invoke the WEATHER user-defined table function. Member DSNTEJ2U shows you how to define and prepare the sample user-defined functions and the client program.
Characteristic User-defined function name Input parameter types and encoding schemes Output parameter types and encoding schemes Specific name External name Language
No
Yes
Yes
Table 35. Characteristics of a user-defined function (continued) CREATE FUNCTION or ALTER FUNCTION option NO SQL CONTAINS SQL READS SQL DATA MODIFIES SQL DATA SOURCE PARAMETER STYLE SQL PARAMETER STYLE JAVA FENCED RETURNS NULL ON NULL INPUT CALLED ON NULL INPUT EXTERNAL ACTION NO EXTERNAL ACTION NO SCRATCHPAD SCRATCHPAD length NO FINAL CALL FINAL CALL ALLOW PARALLEL DISALLOW PARALLEL NO COLLID COLLID collection-id WLM ENVIRONMENT name WLM ENVIRONMENT name,* ASUTIME NO LIMIT ASUTIME LIMIT integer STAY RESIDENT NO STAY RESIDENT YES PROGRAM TYPE MAIN PROGRAM TYPE SUB SECURITY DB2 SECURITY USER SECURITY DEFINER RUN OPTIONS options NO DBINFO DBINFO CARDINALITY integer STATIC DISPATCH Valid in sourced function? No Valid in external function? Yes5 Valid in SQL function? Yes6
Yes No No No No No No No No No No No No No
No Yes Yes Yes Yes Yes Yes Yes5 Yes Yes Yes Yes Yes Yes
7
No No No Yes8 Yes No No No No No No No No No
Parameter style Address space for user-defined functions Call with null input External actions Scratchpad specification Call function after SQL processing Consider function for parallel processing Package collection WLM environment CPU time for a function invocation Load module stays in memory Program type Security
Run-time options Pass DB2 environment information Expected number of rows returned Function resolution is based on the declared parameter types SQL expression that evaluates to the value returned by the function Encoding scheme for all string parameters
No No No No
No No No Yes
RETURN expression
No
No
Yes
No
Yes
Yes
Table 35. Characteristics of a user-defined function (continued) CREATE FUNCTION or ALTER FUNCTION option PARAMETER VARCHAR NULTERM PARAMETER VARCHAR STRUCTURE9 Valid in sourced function? No Valid in external function? Yes Valid in SQL function? No
Characteristic
For functions that are defined as LANGUAGE C, the representation of VARCHAR parameters and, if applicable, the returned result.
STOP AFTER SYSTEM DEFAULT FAILURES STOP AFTER n FAILURES CONTINUE AFTER FAILURE
No
Yes
No
Notes:
1. RETURNS TABLE and CARDINALITY are valid only for user-defined table functions. For a single query, you can override the CARDINALITY value by specifying a CARDINALITY clause for the invocation of a user-defined table function in the SELECT statement. For additional information, see Special techniques to influence access path selection on page 774.
2. An SQL user-defined function can return only one parameter.
3. LANGUAGE SQL is not valid for an external user-defined function.
4. Only LANGUAGE SQL is valid for an SQL user-defined function.
5. MODIFIES SQL DATA and ALLOW PARALLEL are not valid for user-defined table functions.
6. MODIFIES SQL DATA and NO SQL are not valid for SQL user-defined functions.
7. PARAMETER STYLE JAVA is valid only with LANGUAGE JAVA. PARAMETER STYLE SQL is valid only with LANGUAGE values other than LANGUAGE JAVA.
8. RETURNS NULL ON NULL INPUT is not valid for an SQL user-defined function.
9. The PARAMETER VARCHAR clause can be specified in CREATE FUNCTION statements only.
For a complete explanation of the parameters in a CREATE FUNCTION or ALTER FUNCTION statement, see Chapter 5 of DB2 SQL Reference.
The output from the user-defined function is of type float, but users require integer output for their SQL statements. The user-defined function is written in C and contains no SQL statements. The function is defined to stop when the number of abnormal terminations is equal to 3.
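A CREATE FUNCTION statement that is consistent with this description might look like the following sketch. The function is assumed to be the FINDSTRING function that a later example extends to BLOB input; the CLOB input type, the specific name FINDSTRINCLOB, and the external load module name 'FINDSTR' are assumptions used only for illustration, and the remaining options mirror the BLOB version that is shown later.

CREATE FUNCTION FINDSTRING (CLOB(500K), VARCHAR(200))
  RETURNS INTEGER
  CAST FROM FLOAT
  SPECIFIC FINDSTRINCLOB
  EXTERNAL NAME 'FINDSTR'
  LANGUAGE C
  PARAMETER STYLE SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION
  FENCED
  STOP AFTER 3 FAILURES;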
Example: Definition for an external user-defined scalar function that overloads an operator: A programmer has written a user-defined function that overloads the built-in SQL division operator (/). That is, this user-defined function is invoked when an application program executes a statement like either of the following:
UPDATE TABLE1 SET INTCOL1=INTCOL2/INTCOL3;
UPDATE TABLE1 SET INTCOL1="/"(INTCOL2,INTCOL3);
The user-defined function takes two integer values as input. The output from the user-defined function is of type integer. The user-defined function is in the MATH schema, is written in assembler, and contains no SQL statements. This CREATE FUNCTION statement defines the user-defined function:
CREATE FUNCTION MATH."/" (INT, INT)
  RETURNS INTEGER
  SPECIFIC DIVIDE
  EXTERNAL NAME 'DIVIDE'
  LANGUAGE ASSEMBLE
  PARAMETER STYLE SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION
  FENCED;
Suppose that you want the FINDSTRING user-defined function to work on BLOB data types, as well as CLOB types. You can define another instance of the user-defined function that specifies a BLOB type as input:
CREATE FUNCTION FINDSTRING (BLOB(500K), VARCHAR(200))
  RETURNS INTEGER
  CAST FROM FLOAT
  SPECIFIC FINDSTRINBLOB
  EXTERNAL NAME 'FNDBLOB'
  LANGUAGE C
  PARAMETER STYLE SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION
  FENCED
  STOP AFTER 3 FAILURES;
Each instance of FINDSTRING uses a different application program to implement the user-defined function. Example: Definition for a sourced user-defined function: Suppose you need a user-defined function that finds a string in a value with a distinct type of BOAT. BOAT is based on a BLOB data type. User-defined function FINDSTRING has already been defined. FINDSTRING takes a BLOB data type and performs the required function. The specific name for FINDSTRING is FINDSTRINBLOB. You can therefore define a sourced user-defined function based on FINDSTRING to do the string search on values of type BOAT. This CREATE FUNCTION statement defines the sourced user-defined function:
CREATE FUNCTION FINDSTRING (BOAT, VARCHAR(200))
  RETURNS INTEGER
  SPECIFIC FINDSTRINBOAT
  SOURCE SPECIFIC FINDSTRINBLOB;
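An invoker can then apply the sourced function directly to a column of the distinct type. The following query is hypothetical: BOATS is assumed to be a table with a column DESIGN of type BOAT, and the function is assumed to return a positive value when the search string is found:

SELECT BOAT_ID
  FROM BOATS
  WHERE FINDSTRING(DESIGN, 'KEEL') > 0;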
Example: Definition for an SQL user-defined function: You can define an SQL user-defined function for the tangent of a value by using the existing built-in SIN and COS functions:
CREATE FUNCTION TAN (X DOUBLE)
  RETURNS DOUBLE
  LANGUAGE SQL
  CONTAINS SQL
  NO EXTERNAL ACTION
  DETERMINISTIC
  RETURN SIN(X)/COS(X);
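After the function is created, it can be invoked wherever a built-in scalar function can appear. In this illustrative query, the function is assumed to have been created in a schema named MYSCHEMA; qualifying the invocation ensures that this SQL function, rather than a built-in function of the same name, is chosen:

SELECT MYSCHEMA.TAN(0.5)
  FROM SYSIBM.SYSDUMMY1;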
Example: Definition for an external user-defined table function: An application programmer develops a user-defined function that receives two values and returns a table. The two input values are: v A character string of maximum length 30 that describes a subject v A character string of maximum length 255 that contains text to search for The user-defined function scans documents on the subject for the search string and returns a list of documents that match the search criteria, with an abstract for each document. The list is in the form of a two-column table. The first column is a character column of length 16 that contains document IDs. The second column is a varying-character column of maximum length 5000 that contains document abstracts. The user-defined function is written in COBOL, uses SQL only to perform queries, always produces the same output for given input, and should not execute as a parallel task. The program is reentrant, and successive invocations of the user-defined function share information. You expect an invocation of the user-defined function to return about 20 rows. The following CREATE FUNCTION statement defines the user-defined function:
CREATE FUNCTION DOCMATCH (VARCHAR(30), VARCHAR(255))
  RETURNS TABLE (DOC_ID CHAR(16), DOC_ABSTRACT VARCHAR(5000))
  EXTERNAL NAME 'DOCMTCH'
  LANGUAGE COBOL
  PARAMETER STYLE SQL
  READS SQL DATA
  DETERMINISTIC
  NO EXTERNAL ACTION
  FENCED
  SCRATCHPAD
  FINAL CALL
  DISALLOW PARALLEL
  CARDINALITY 20;
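An invoker references a user-defined table function in the FROM clause with the TABLE keyword and a correlation name. In this illustrative query, the subject and search string are assumptions; the column names come from the RETURNS TABLE clause above:

SELECT T.DOC_ID, T.DOC_ABSTRACT
  FROM TABLE(DOCMATCH('MATHEMATICS', 'ZORN''S LEMMA')) AS T;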
v DRDA access using three-part names or aliases for three-part names
v DRDA access using CONNECT or SET CONNECTION statements
The user-defined function and the application that calls it can access the same remote site if both use the same protocol.
You can write an external user-defined function in assembler, C, C++, COBOL, PL/I, or Java. User-defined functions that are written in COBOL can include object-oriented extensions, just as other DB2 COBOL programs can. User-defined functions that are written in Java follow coding guidelines and restrictions specific to Java. For information about writing Java user-defined functions, see DB2 Application Programming Guide and Reference for Java.
The following sections include additional information that you need when you write a user-defined function:
v Restrictions on user-defined function programs
v Coding your user-defined function as a main program or as a subprogram
v Parallelism considerations on page 320
v Passing parameter values to and from a user-defined function on page 321
v Examples of receiving parameters in a user-defined function on page 333
v Using special registers in a user-defined function on page 342
v Using a scratchpad in a user-defined function on page 344
v Accessing transition tables in a user-defined function or stored procedure on page 345
that the external user-defined function uses. When a main program ends, Language Environment closes files and releases dynamically allocated storage. If you code your user-defined function as a subprogram and manage the storage and files yourself, you can get better performance. The user-defined function should always free any allocated storage before it exits. To keep data between invocations of the user-defined function, use a scratchpad. You must code a user-defined table function that accesses external resources as a subprogram. Also ensure that the definer specifies the EXTERNAL ACTION parameter in the CREATE FUNCTION or ALTER FUNCTION statement. Program variables for a subprogram persist between invocations of the user-defined function, and use of the EXTERNAL ACTION parameter ensures that the user-defined function stays in the same address space from one invocation to another.
Parallelism considerations
If the definer specifies the parameter ALLOW PARALLEL in the definition of a user-defined scalar function, and the invoking SQL statement runs in parallel, the function can run under a parallel task. DB2 executes a separate instance of the user-defined function for each parallel task. When you write your function program, you need to understand how the following parameter values interact with ALLOW PARALLEL so that you can avoid unexpected results:
v SCRATCHPAD
  When an SQL statement invokes a user-defined function that is defined with the ALLOW PARALLEL parameter, DB2 allocates one scratchpad for each parallel task of each reference to the function. This can lead to unpredictable or incorrect results. For example, suppose that the user-defined function uses the scratchpad to count the number of times it is invoked. If a scratchpad is allocated for each parallel task, this count is the number of invocations done by the parallel task and not for the entire SQL statement, which is not the desired result.
v FINAL CALL
  If a user-defined function performs an external action, such as sending a note, for each final call to the function, one note is sent for each parallel task instead of once for the function invocation.
v EXTERNAL ACTION
  Some user-defined functions with external actions can receive incorrect results if the function is executed by parallel tasks. For example, if the function sends a note for each initial call to the function, one note is sent for each parallel task instead of once for the function invocation.
v NOT DETERMINISTIC
  A user-defined function that is not deterministic can generate incorrect results if it is run under a parallel task. For example, suppose that you execute the following query under parallel tasks:
SELECT * FROM T1 WHERE C1 = COUNTER();
COUNTER is a user-defined function that increments a variable in the scratchpad every time it is invoked. COUNTER is nondeterministic because the same input does not always produce the same output. Table T1 contains one column, C1, that has the following values:
1 2 3 4 5 6 7 8 9 10
When the query is executed with no parallelism, DB2 invokes COUNTER once for each row of table T1, and there is one scratchpad for COUNTER, which DB2 initializes the first time that COUNTER executes. COUNTER returns 1 the first time it executes, 2 the second time, and so on. The result table for the query has the following values:
1 2 3 4 5 6 7 8 9 10
Now suppose that the query is run with parallelism, and DB2 creates three parallel tasks. DB2 executes the predicate WHERE C1 = COUNTER() for each parallel task. This means that each parallel task invokes its own instance of the user-defined function and has its own scratchpad. DB2 initializes the scratchpad to zero on the first call to the user-defined function for each parallel task.
If parallel task 1 processes rows 1 to 3, parallel task 2 processes rows 4 to 6, and parallel task 3 processes rows 7 to 10, the following results occur:
When parallel task 1 executes, C1 has values 1, 2, and 3, and COUNTER returns values 1, 2, and 3, so the query returns values 1, 2, and 3.
When parallel task 2 executes, C1 has values 4, 5, and 6, but COUNTER returns values 1, 2, and 3, so the query returns no rows.
When parallel task 3 executes, C1 has values 7, 8, 9, and 10, but COUNTER returns values 1, 2, 3, and 4, so the query returns no rows.
Thus, instead of returning the 10 rows that you might expect from the query, DB2 returns only 3 rows.
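The following C fragment is a minimal sketch of what an implementation of COUNTER might look like, assuming that it is defined with PARAMETER STYLE SQL, SCRATCHPAD, NOT DETERMINISTIC, and neither FINAL CALL nor DBINFO. The parameter names and the structure that maps the scratchpad (a 4-byte length followed by a data area) are assumptions.

#pragma runopts(plist(os))

/* Local view of the scratchpad: DB2 sets the length in the first     */
/* 4 bytes and clears the data area to binary zeros on the first call.*/
struct spad {
  long len;                          /* scratchpad length, set by DB2  */
  long count;                        /* invocation count kept here     */
};

void counter(long *result,           /* RETURNS INTEGER                */
             short *result_ind,      /* indicator for the result       */
             char sqlstate[6],       /* SQLSTATE returned to DB2       */
             char fname[138],        /* qualified function name        */
             char specname[129],     /* specific function name         */
             char msgtext[71],       /* diagnostic message area        */
             struct spad *scratchpad)/* scratchpad passed by DB2       */
{
  scratchpad->count = scratchpad->count + 1;  /* count this invocation */
  *result = scratchpad->count;                /* return the count      */
  *result_ind = 0;                            /* result is not null    */
}

Because each parallel task gets its own scratchpad, each task's copy of count starts over at zero, which produces exactly the behavior described above.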
Input parameter values: DB2 obtains the input parameters from the invoker's parameter list, and your user-defined function receives those parameters according to the rules of the host language in which the user-defined function is written. The number of input parameters is the same as the number of parameters in the user-defined function invocation. If one of the parameters in the function invocation is an expression, DB2 evaluates the expression and assigns the result of the expression to the parameter.
For all data types except LOBs, ROWIDs, locators, and VARCHAR (with C language), see the tables listed in Table 36 for the host data types that are compatible with the data types in the user-defined function definition.
Table 36. Listing of tables of compatible data types

Language     Compatible data types table
Assembler    Table 12 on page 153
C            Table 14 on page 177
COBOL        Table 17 on page 211
PL/I         Table 21 on page 243
For LOBs, ROWIDs, and locators, see Table 37 for the assembler data types that are compatible with the data types in the user-defined function definition.
Table 37. Compatible assembler language declarations for LOBs, ROWIDs, and locators

SQL data type in definition and corresponding assembler declaration:

TABLE LOCATOR, BLOB LOCATOR, CLOB LOCATOR, DBCLOB LOCATOR
    DS FL4

BLOB(n)
    If n <= 65535:
    var         DS 0FL4
    var_length  DS FL4
    var_data    DS CLn
    If n > 65535:
    var         DS 0FL4
    var_length  DS FL4
    var_data    DS CL65535
                ORG var_data+(n-65535)

CLOB(n)
    If n <= 65535:
    var         DS 0FL4
    var_length  DS FL4
    var_data    DS CLn
    If n > 65535:
    var         DS 0FL4
    var_length  DS FL4
    var_data    DS CL65535
                ORG var_data+(n-65535)

DBCLOB(n)
    If m (=2*n) <= 65534:
    var         DS 0FL4
    var_length  DS FL4
    var_data    DS CLm
    If m > 65534:
    var         DS 0FL4
    var_length  DS FL4
    var_data    DS CL65534
                ORG var_data+(m-65534)

ROWID
    DS HL2,CL40
For LOBs, ROWIDs, VARCHARs, and locators, see Table 38 for the C data types that are compatible with the data types in the user-defined function definition.

Table 38. Compatible C language declarations for LOBs, ROWIDs, VARCHARs, and locators

SQL data type in definition (see note 1) and corresponding C declaration:

TABLE LOCATOR, BLOB LOCATOR, CLOB LOCATOR, DBCLOB LOCATOR
    unsigned long

BLOB(n)
    struct {unsigned long length;
            char data[n];
           } var;

CLOB(n)
    struct {unsigned long length;
            char var_data[n];
           } var;

DBCLOB(n)
    struct {unsigned long length;
            sqldbchar data[n];
           } var;

ROWID
    struct {short int length;
            char data[40];
           } var;

VARCHAR(n) (see note 2)
    If PARAMETER VARCHAR NULTERM is specified or implied:
        char data[n+1];
    If PARAMETER VARCHAR STRUCTURE is specified:
        struct {short len;
                char data[n];
               } var;
Notes:
1. The SQLUDF file, which is in data set DSN810.SDSNC.H, includes the typedef sqldbchar. Using sqldbchar lets you manipulate DBCS and Unicode UTF-16 data in the same format in which it is stored in DB2. sqldbchar also makes applications easier to port to other DB2 platforms.
2. This row does not apply to VARCHAR(n) FOR BIT DATA. BIT DATA is always passed in a structured representation.
For LOBs, ROWIDs, and locators, see Table 39 for the COBOL data types that are compatible with the data types in the user-defined function definition.
Table 39. Compatible COBOL declarations for LOBs, ROWIDs, and locators SQL data type in definition TABLE LOCATOR BLOB LOCATOR CLOB LOCATOR DBCLOB LOCATOR COBOL declaration 01 var PIC S9(9) USAGE IS BINARY.
Table 39. Compatible COBOL declarations for LOBs, ROWIDs, and locators (continued) SQL data type in definition BLOB(n) COBOL declaration If n <= 32767: 01 var. 49 var-LENGTH PIC 9(9) USAGE COMP. 49 var-DATA PIC X(n). If length > 32767: 01 var. var-LENGTH PIC S9(9) USAGE COMP. 02 var-DATA. 49 FILLER PIC X(32767). 49 FILLER PIC X(32767). . . . 49 FILLER PIC X(mod(n,32767)). 02 CLOB(n) If n <= 32767: 01 var. 49 var-LENGTH PIC 9(9) USAGE COMP. 49 var-DATA PIC X(n). If length > 32767: 01 var. var-LENGTH PIC S9(9) USAGE COMP. 02 var-DATA. 49 FILLER PIC X(32767). 49 FILLER PIC X(32767). . . . 49 FILLER PIC X(mod(n,32767)). 02 DBCLOB(n) If n <= 32767: 01 var. 49 var-LENGTH PIC 9(9) USAGE COMP. 49 var-DATA PIC G(n) USAGE DISPLAY-1. If length > 32767: 01 var. var-LENGTH PIC S9(9) USAGE COMP. 02 var-DATA. 49 FILLER PIC G(32767) USAGE DISPLAY-1. 49 FILLER PIC G(32767). USAGE DISPLAY-1. . . . 49 FILLER PIC G(mod(n,32767)) USAGE DISPLAY-1. 02
Table 39. Compatible COBOL declarations for LOBs, ROWIDs, and locators (continued) SQL data type in definition ROWID COBOL declaration 01 var. 49 var-LEN PIC 9(4) USAGE COMP. 49 var-DATA PIC X(40).
For LOBs, ROWIDs, and locators, see Table 40 for the PL/I data types that are compatible with the data types in the user-defined function definition.
Table 40. Compatible PL/I declarations for LOBs, ROWIDs, and locators SQL data type in definition TABLE LOCATOR BLOB LOCATOR CLOB LOCATOR DBCLOB LOCATOR BLOB(n) PL/I BIN FIXED(31)
If n <= 32767: 01 var, 03 var_LENGTH BIN FIXED(31), 03 var_DATA CHAR(n); If n > 32767: 01 var, 02 var_LENGTH BIN FIXED(31), 02 var_DATA, 03 var_DATA1(n) CHAR(32767), 03 var_DATA2 CHAR(mod(n,32767));
CLOB(n)
If n <= 32767: 01 var, 03 var_LENGTH BIN FIXED(31), 03 var_DATA CHAR(n); If n > 32767: 01 var, 02 var_LENGTH BIN FIXED(31), 02 var_DATA, 03 var_DATA1(n) CHAR(32767), 03 var_DATA2 CHAR(mod(n,32767));
Table 40. Compatible PL/I declarations for LOBs, ROWIDs, and locators (continued) SQL data type in definition DBCLOB(n) PL/I If n <= 16383: 01 var, 03 var_LENGTH BIN FIXED(31), 03 var_DATA GRAPHIC(n); If n > 16383: 01 var, 02 var_LENGTH BIN FIXED(31), 02 var_DATA, 03 var_DATA1(n) GRAPHIC(16383), 03 var_DATA2 GRAPHIC(mod(n,16383)); ROWID CHAR(40) VAR;
Result parameters: Set these values in your user-defined function before exiting. For a user-defined scalar function, you return one result parameter. For a user-defined table function, you return the same number of parameters as columns in the RETURNS TABLE clause of the CREATE FUNCTION statement. DB2 allocates a buffer for each result parameter value and passes the buffer address to the user-defined function. Your user-defined function places each result parameter value in its buffer. You must ensure that the length of the value you place in each output buffer does not exceed the buffer length. Use the SQL data type and length in the CREATE FUNCTION statement to determine the buffer length.

See Passing parameter values to and from a user-defined function on page 321 to determine the host data type to use for each result parameter value. If the CREATE FUNCTION statement contains a CAST FROM clause, use a data type that corresponds to the SQL data type in the CAST FROM clause. Otherwise, use a data type that corresponds to the SQL data type in the RETURNS or RETURNS TABLE clause.

To improve performance for user-defined table functions that return many columns, you can pass values for a subset of columns to the invoker. For example, a user-defined table function might be defined to return 100 columns, but the invoker needs values for only two columns. Use the DBINFO parameter to indicate to DB2 the columns for which you will return values. Then return values for only those columns. See the explanation of DBINFO on page 331 for information about how to indicate the columns of interest.

Input parameter indicators: These are SMALLINT values, which DB2 sets before it passes control to the user-defined function. You use the indicators to determine whether the corresponding input parameters are null. The number and order of the indicators are the same as the number and order of the input parameters. On entry to the user-defined function, each indicator contains one of these values:
0          The input parameter value is not null.
negative   The input parameter value is null.
Code the user-defined function to check all indicators for null values unless the user-defined function is defined with RETURNS NULL ON NULL INPUT. A user-defined function defined with RETURNS NULL ON NULL INPUT executes only if all input parameters are not null.

Result indicators: These are SMALLINT values, which you must set before the user-defined function ends to indicate to the invoking program whether each result parameter value is null. A user-defined scalar function has one result indicator. A user-defined table function has the same number of result indicators as the number of result parameters. The order of the result indicators is the same as the order of the result parameters. Set each result indicator to one of these values:
0 or positive   The result parameter is not null.
negative        The result parameter is null.
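As an illustration of indicator handling, the following C sketch assumes a scalar function that adds two integers, is written as a subprogram with PARAMETER STYLE SQL, and is defined with CALLED ON NULL INPUT. The parameter names are assumptions.

/* Sketch: check the input parameter indicators and set the result    */
/* indicator before returning.                                        */
void addints(long *parm1, long *parm2, long *result,
             short *ind1, short *ind2, short *indr,
             char sqlstate[6], char fname[138],
             char specname[129], char msgtext[71])
{
  if (*ind1 < 0 || *ind2 < 0) {
    *indr = -1;                     /* an input is null; return null   */
  } else {
    *result = *parm1 + *parm2;      /* compute the result              */
    *indr = 0;                      /* result is not null              */
  }
}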
SQLSTATE value: This CHAR(5) value represents the SQLSTATE that is passed in to the program from the database manager. The initial value is set to 00000. Although the SQLSTATE is usually not set by the program, it can be set as the result SQLSTATE that is used to return an error or a warning. Returned values that start with anything other than 00, 01, or 02 are error conditions. Refer to DB2 Codes for more information about the valid SQLSTATE values that a program may generate.

User-defined function name: DB2 sets this value in the parameter list before the user-defined function executes. This value is VARCHAR(257): 128 bytes for the schema name, 1 byte for a period, and 128 bytes for the user-defined function name. If you use the same code to implement multiple versions of a user-defined function, you can use this parameter to determine which version of the function the invoker wants to execute.

Specific name: DB2 sets this value in the parameter list before the user-defined function executes. This value is VARCHAR(128) and is either the specific name from the CREATE FUNCTION statement or a specific name that DB2 generated. If you use the same code to implement multiple versions of a user-defined function, you can use this parameter to determine which version of the function the invoker wants to execute.
Diagnostic message: Your user-defined function can set this CHAR or VARCHAR value to a character string of up to 70 bytes before exiting. Use this area to pass descriptive information about an error or warning to the invoker. DB2 allocates a buffer for this area and passes you the buffer address in the parameter list. At least the first 17 bytes of the value you put in the buffer appear in the SQLERRMC field of the SQLCA that is returned to the invoker. The exact number of bytes depends on the number of other tokens in SQLERRMC. Do not use X'FF' in your diagnostic message. DB2 uses this value to delimit tokens.

Scratchpad: If the definer specified SCRATCHPAD in the CREATE FUNCTION statement, DB2 allocates a buffer for the scratchpad area and passes its address to the user-defined function. Before the user-defined function is invoked for the first time in an SQL statement, DB2 sets the length of the scratchpad in the first 4 bytes of the buffer and then sets the scratchpad area to X'00'. DB2 does not reinitialize the scratchpad between invocations of a correlated subquery.
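For example, a function can report a problem to its invoker through the SQLSTATE and diagnostic message areas described above. The following C sketch uses an SQLSTATE in the external routine exception class (38) and a short message token; the buffer names and the particular SQLSTATE value are assumptions.

#include <string.h>

/* Sketch: return an error SQLSTATE and a diagnostic message token.   */
void report_error(char udf_sqlstate[6], char udf_msgtext[71])
{
  strcpy(udf_sqlstate, "38901");                    /* class 38 error  */
  strcpy(udf_msgtext, "INPUT VALUE OUT OF RANGE");  /* 70 bytes max    */
}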
You must ensure that your user-defined function does not write more bytes to the scratchpad than the scratchpad length.

Call type: For a user-defined scalar function, if the definer specified FINAL CALL in the CREATE FUNCTION statement, DB2 passes this parameter to the user-defined function. For a user-defined table function, DB2 always passes this parameter to the user-defined function.

On entry to a user-defined scalar function, the call type parameter has one of the following values:
-1   This is the first call to the user-defined function for the SQL statement. For a first call, all input parameters are passed to the user-defined function. In addition, the scratchpad, if allocated, is set to binary zeros.
0    This is a normal call. For a normal call, all the input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it.
1    This is a final call. For a final call, no input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it. This type of final call occurs when the invoking application explicitly closes a cursor. When a value of 1 is passed to a user-defined function, the user-defined function can execute SQL statements.
255  This is a final call. For a final call, no input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it. This type of final call occurs when the invoking application executes a COMMIT or ROLLBACK statement, or when the invoking application abnormally terminates. When a value of 255 is passed to the user-defined function, the user-defined function cannot execute any SQL statements, except for CLOSE CURSOR. If the user-defined function executes any close cursor statements during this type of final call, the user-defined function should tolerate SQLCODE -501 because DB2 might have already closed cursors before the final call.

During the first call, your user-defined scalar function should acquire any system resources it needs. During the final call, the user-defined scalar function should release any resources it acquired during the first call. The user-defined scalar function should return a result value only during normal calls. DB2 ignores any results that are returned during a final call. However, the user-defined scalar function can set the SQLSTATE and diagnostic message area during the final call.

If an invoking SQL statement contains more than one user-defined scalar function, and one of those user-defined functions returns an error SQLSTATE, DB2 invokes all of the user-defined functions for a final call, and the invoking SQL statement receives the SQLSTATE of the first user-defined function with an error.

On entry to a user-defined table function, the call type parameter has one of the following values:
-2   This is the first call to the user-defined function for the SQL statement. A first call occurs only if the FINAL CALL keyword is specified in the user-defined function definition. For a first call, all input parameters are passed to the user-defined function. In addition, the scratchpad, if allocated, is set to binary zeros.
-1   This is the open call to the user-defined function by an SQL statement. If FINAL CALL is not specified in the user-defined function definition, all input parameters are passed to the user-defined function, and the scratchpad, if allocated, is set to binary zeros during the open call. If FINAL CALL is specified for the user-defined function, DB2 does not modify the scratchpad.
0    This is a fetch call to the user-defined function by an SQL statement. For a fetch call, all input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it.
1    This is a close call. For a close call, no input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it.
2    This is a final call. This type of final call occurs only if FINAL CALL is specified in the user-defined function definition. For a final call, no input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it. This type of final call occurs when the invoking application executes a CLOSE CURSOR statement.
255  This is a final call. For a final call, no input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it. This type of final call occurs when the invoking application executes a COMMIT or ROLLBACK statement, or when the invoking application abnormally terminates. When a value of 255 is passed to the user-defined function, the user-defined function cannot execute any SQL statements, except for CLOSE CURSOR. If the user-defined function executes any close cursor statements during this type of final call, the user-defined function should tolerate SQLCODE -501 because DB2 might have already closed cursors before the final call.

If a user-defined table function is defined with FINAL CALL, the user-defined function should allocate any resources it needs during the first call and release those resources during the final call that sets a value of 2. If a user-defined table function is defined with NO FINAL CALL, the user-defined function should allocate any resources it needs during the open call and release those resources during the close call.

During a fetch call, the user-defined table function should return a row. If the user-defined function has no more rows to return, it should set the SQLSTATE to 02000.

During the close call, a user-defined table function can set the SQLSTATE and diagnostic message area.

If a user-defined table function is invoked from a subquery, the user-defined table function receives a CLOSE call for each invocation of the subquery within the higher level query, and a subsequent OPEN call for the next invocation of the subquery within the higher level query.
DBINFO: If the definer specified DBINFO in the CREATE FUNCTION statement, DB2 passes the DBINFO structure to the user-defined function. DBINFO contains information about the environment of the user-defined function caller. It contains the following fields, in the order shown: Location name length An unsigned 2-byte integer field. It contains the length of the location name in the next field. Location name A 128-byte character field. It contains the name of the location to which the invoker is currently connected. Authorization ID length An unsigned 2-byte integer field. It contains the length of the authorization ID in the next field. Authorization ID A 128-byte character field. It contains the authorization ID of the application from which the user-defined function is invoked, padded on the right with blanks. If this user-defined function is nested within other user-defined functions, this value is the authorization ID of the application that invoked the highest-level user-defined function. | | | Subsystem code page A 48-byte structure that consists of 10 integer fields and an eight-byte reserved area. These fields provide information about the CCSIDs of the subsystem from which the user-defined function is invoked. Table qualifier length An unsigned 2-byte integer field. It contains the length of the table qualifier in the next field. If the table name field is not used, this field contains 0. Table qualifier A 128-byte character field. It contains the qualifier of the table that is specified in the table name field. Table name length An unsigned 2-byte integer field. It contains the length of the table name in the next field. If the table name field is not used, this field contains 0. Table name A 128-byte character field. This field contains the name of the table that the UPDATE or INSERT modifies if the reference to the user-defined function in the invoking SQL statement is in one of the following places: v The right side of a SET clause in an UPDATE statement v In the VALUES list of an INSERT statement Otherwise, this field is blank. Column name length An unsigned 2-byte integer field. It contains the length of the column name in the next field. If no column name is passed to the user-defined function, this field contains 0. Column name A 128-byte character field. This field contains the name of the column that the UPDATE or INSERT modifies if the reference to the user-defined function in the invoking SQL statement is in one of the following places: v The right side of a SET clause in an UPDATE statement v In the VALUES list of an INSERT statement
Otherwise, this field is blank.

Product information
An 8-byte character field that identifies the product on which the user-defined function executes. This field has the form pppvvrrm, where:
v ppp is a 3-byte product code:
  ARI   DB2 Server for VSE & VM
  DSN   DB2 UDB for z/OS
  QSQ   DB2 UDB for iSeries
  SQL   DB2 UDB for Linux, UNIX, and Windows
v vv is a 2-digit version identifier.
v rr is a 2-digit release identifier.
v m is a 1-digit maintenance level identifier.

Reserved area
2 bytes.

Operating system
A 4-byte integer field. It identifies the operating system on which the program that invokes the user-defined function runs. The value is one of these:
0     Unknown
1     OS/2
3     Windows
4     AIX
5     Windows NT
6     HP-UX
7     Solaris
8     OS/390 or z/OS
13    Siemens Nixdorf
15    Windows 95
16    SCO UNIX
18    Linux
19    DYNIX/ptx
24    Linux for S/390
25    Linux for zSeries
26    Linux/IA64
27    Linux/PPC
28    Linux/PPC64
29    Linux/AMD64
400   iSeries
Number of entries in table function column list An unsigned 2-byte integer field.
Reserved area 26 bytes. Table function column list pointer If a table function is defined, this field is a pointer to an array that contains 1000 2-byte integers. DB2 dynamically allocates the array. If a table function is not defined, this pointer is null. Only the first n entries, where n is the value in the field entitled number of entries in table function column list, are of interest. n is greater than or equal to 0 and less than or equal to the number result columns defined for the user-defined function in the RETURNS TABLE clause of the CREATE FUNCTION statement. The values correspond to the numbers of the columns that the invoking statement needs from the table function. A value of 1 means the first defined result column, 2 means the second defined result column, and so on. The values can be in any order. If n is equal to 0, the first array element is 0. This is the case for a statement like the following one, where the invoking statement needs no column values.
SELECT COUNT(*) FROM TABLE(TF(...)) AS QQ
This array represents an opportunity for optimization. The user-defined function does not need to return all values for all the result columns of the table function. Instead, the user-defined function can return only those columns that are needed in the particular context, which you identify by number in the array. However, if this optimization complicates the user-defined function logic enough to cancel the performance benefit, you might choose to return every defined column. Unique application identifier This field is a pointer to a string that uniquely identifies the applications connection to DB2. The string is regenerated for each connection to DB2. The string is the LUWID, which consists of a fully-qualified LU network name followed by a period and an LUW instance number. The LU network name consists of a 1- to 8-character network ID, a period, and a 1- to 8-character network LU name. The LUW instance number consists of 12 hexadecimal characters that uniquely identify the unit of work. Reserved area 20 bytes. Examples of receiving parameters in a user-defined function has examples of declarations of passed parameters in each language. If you write your user-defined function in C or C++, you can use the declarations in member SQLUDF of DSN810.SDSNC.H for many of the passed parameters. To include SQLUDF, make these changes to your program: v Put this statement in your source code:
#include <sqludf.h>
v Include the DSN810.SDSNC.H data set in the SYSLIB concatenation for the compile step of your program preparation job. v Specify the NOMARGINS and NOSEQUENCE options in the compile step of your program preparation job.
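Returning to the table function column list that DBINFO provides, a user-defined table function can use that list to skip work for result columns that the invoking statement does not need. The following C sketch assumes that the number of entries and the array of 2-byte column numbers have already been copied out of the DBINFO structure; the parameter names are assumptions, and the corresponding field names in SQLUDF may differ.

#include <string.h>

/* Sketch: record which RETURNS TABLE columns the invoker needs.      */
static void mark_needed_columns(unsigned short numtfcol,
                                const short *tfcolumn,
                                short needed[],
                                int result_columns)
{
  int i;
  memset(needed, 0, result_columns * sizeof(short));
  if (numtfcol == 0) {
    return;                        /* invoker needs no column values   */
  }
  for (i = 0; i < numtfcol; i++) {
    needed[tfcolumn[i] - 1] = 1;   /* tfcolumn[i] is 1-based           */
  }
}

The function body can then fill a result buffer only when needed[column - 1] is 1 and return values for just those columns.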
These examples assume that the user-defined function is defined with the SCRATCHPAD, FINAL CALL, and DBINFO parameters. Assembler: Figure 125 shows the parameter conventions for a user-defined scalar function that is written as a main program that receives two parameters and returns one result. For an assembler language user-defined function that is a subprogram, the conventions are the same. In either case, you must include the CEEENTRY and CEEEXIT macros.
MYMAIN CEEENTRY AUTO=PROGSIZE,MAIN=YES,PLIST=OS USING PROGAREA,R13 L MVC L MVC L MVC LH LTR BM L MVC LH LTR BM . . . NULLIN L MVC L MVC R7,8(R1) 0(9,R7),RESULT R7,20(R1) 0(2,R7),=H0 GET ADDRESS OF AREA FOR RESULT MOVE A VALUE INTO RESULT AREA GET ADDRESS OF AREA FOR RESULT IND MOVE A VALUE INTO INDICATOR AREA R7,0(R1) PARM1(4),0(R7) R7,4(R1) PARM2(4),0(R7) R7,12(R1) F_IND1(2),0(R7) R7,F_IND1 R7,R7 NULLIN R7,16(R1) F_IND2(2),0(R7) R7,F_IND2 R7,R7 NULLIN GET POINTER TO PARM1 MOVE VALUE INTO LOCAL COPY GET POINTER TO PARM2 MOVE VALUE INTO LOCAL COPY GET POINTER TO INDICATOR 1 MOVE PARM1 INDICATOR TO LOCAL MOVE PARM1 INDICATOR INTO R7 CHECK IF IT IS NEGATIVE IF SO, PARM1 IS NULL GET POINTER TO INDICATOR 2 MOVE PARM2 INDICATOR TO LOCAL MOVE PARM2 INDICATOR INTO R7 CHECK IF IT IS NEGATIVE IF SO, PARM2 IS NULL OF PARM1 OF PARM2 STORAGE
STORAGE
. . . CEETERM RC=0 ******************************************************************* * VARIABLE DECLARATIONS AND EQUATES * ******************************************************************* R1 EQU 1 REGISTER 1 R7 EQU 7 REGISTER 7 PPA CEEPPA , CONSTANTS DESCRIBING THE CODE BLOCK LTORG , PLACE LITERAL POOL HERE PROGAREA DSECT ORG *+CEEDSASZ LEAVE SPACE FOR DSA FIXED PART PARM1 DS F PARAMETER 1 PARM2 DS F PARAMETER 2 RESULT DS CL9 RESULT F_IND1 DS H INDICATOR FOR PARAMETER 1 F_IND2 DS H INDICATOR FOR PARAMETER 2 F_INDR DS H INDICATOR FOR RESULT PROGSIZE EQU *-PROGAREA CEEDSA , CEECAA , END MYMAIN MAPPING OF THE DYNAMIC SAVE AREA MAPPING OF THE COMMON ANCHOR AREA
C or C++: For C or C++ user-defined functions, the conventions for passing parameters are different for main programs and subprograms. For subprograms, you pass the parameters directly. For main programs, you use the standard argc and argv variables to access the input and output parameters:
v The argv variable contains an array of pointers to the parameters that are passed to the user-defined function. All string parameters that are passed back to DB2 must be null terminated. argv[0] contains the address of the load module name for the user-defined function. argv[1] through argv[n] contain the addresses of parameters 1 through n. v The argc variable contains the number of parameters that are passed to the external user-defined function, including argv[0]. Figure 126 shows the parameter conventions for a user-defined scalar function that is written as a main program that receives two parameters and returns one result.
#include <stdlib.h> #include <stdio.h> main(argc,argv) int argc; char *argv[]; { /***************************************************/ /* Assume that the user-defined function invocation*/ /* included 2 input parameters in the parameter */ /* list. Also assume that the definition includes */ /* the SCRATCHPAD, FINAL CALL, and DBINFO options, */ /* so DB2 passes the scratchpad, calltype, and */ /* dbinfo parameters. */ /* The argv vector contains these entries: */ /* argv[0] 1 load module name */ /* argv[1-2] 2 input parms */ /* argv[3] 1 result parm */ /* argv[4-5] 2 null indicators */ /* argv[6] 1 result null indicator */ /* argv[7] 1 SQLSTATE variable */ /* argv[8] 1 qualified func name */ /* argv[9] 1 specific func name */ /* argv[10] 1 diagnostic string */ /* argv[11] 1 scratchpad */ /* argv[12] 1 call type */ /* argv[13] + 1 dbinfo */ /* -----*/ /* 14 for the argc variable */ /***************************************************/ if argc<>14 { . . . /**********************************************************/ /* This section would contain the code executed if the */ /* user-defined function is invoked with the wrong number */ /* of parameters. */ /**********************************************************/ } Figure 126. How a C or C++ user-defined function that is written as a main program receives parameters (Part 1 of 2)
/***************************************************/ /* Assume the first parameter is an integer. */ /* The following code shows how to copy the integer*/ /* parameter into the application storage. */ /***************************************************/ int parm1; parm1 = *(int *) argv[1]; /***************************************************/ /* Access the null indicator for the first */ /* parameter on the invoked user-defined function */ /* as follows: */ /***************************************************/ short int ind1; ind1 = *(short int *) argv[4]; /***************************************************/ /* Use the following expression to assign */ /* xxxxx to the SQLSTATE returned to caller on */ /* the SQL statement that contains the invoked */ /* user-defined function. */ /***************************************************/ strcpy(argv[7],"xxxxx/0"); /***************************************************/ /* Obtain the value of the qualified function */ /* name with this expression. */ /***************************************************/ char f_func[28]; strcpy(f_func,argv[8]); /***************************************************/ /* Obtain the value of the specific function */ /* name with this expression. */ /***************************************************/ char f_spec[19]; strcpy(f_spec,argv[9]); /***************************************************/ /* Use the following expression to assign */ /* yyyyyyyy to the diagnostic string returned */ /* in the SQLCA associated with the invoked */ /* user-defined function. */ /***************************************************/ strcpy(argv[10],"yyyyyyyy/0"); /***************************************************/ /* Use the following expression to assign the */ /* result of the function. */ /***************************************************/ char l_result[11]; strcpy(argv[3],l_result); . . . } Figure 126. How a C or C++ user-defined function that is written as a main program receives parameters (Part 2 of 2)
Figure 127 on page 337 shows the parameter conventions for a user-defined scalar function written as a C subprogram that receives two parameters and returns one result.
#pragma runopts(plist(os)) #include <stdlib.h> #include <stdio.h> #include <string.h> #include <sqludf.h> void myfunc(long *parm1, char parm2[11], char result[11], short *f_ind1, short *f_ind2, short *f_indr, char udf_sqlstate[6], char udf_fname[138], char udf_specname[129], char udf_msgtext[71], struct sqludf_scratchpad *udf_scratchpad, long *udf_call_type, struct sql_dbinfo *udf_dbinfo); { /***************************************************/ /* Declare local copies of parameters */ /***************************************************/ int l_p1; char l_p2[11]; short int l_ind1; short int l_ind2; char ludf_sqlstate[6]; /* SQLSTATE */ char ludf_fname[138]; /* function name */ char ludf_specname[129]; /* specific function name */ char ludf_msgtext[71] /* diagnostic message text*/ sqludf_scratchpad *ludf_scratchpad; /* scratchpad */ long *ludf_call_type; /* call type */ sqludf_dbinfo *ludf_dbinfo /* dbinfo */ /***************************************************/ /* Copy each of the parameters in the parameter */ /* list into a local variable to demonstrate */ /* how the parameters can be referenced. */ /***************************************************/ l_p1 = *parm1; strcpy(l_p2,parm2); l_ind1 = *f_ind1; l_ind1 = *f_ind2; strcpy(ludf_sqlstate,udf_sqlstate); strcpy(ludf_fname,udf_fname); strcpy(ludf_specname,udf_specname); l_udf_call_type = *udf_call_type; strcpy(ludf_msgtext,udf_msgtext); memcpy(&ludf_scratchpad,udf_scratchpad,sizeof(ludf_scratchpad)); memcpy(&ludf_dbinfo,udf_dbinfo,sizeof(ludf_dbinfo)); . . . } Figure 127. How a C language user-defined function that is written as a subprogram receives parameters
Figure 128 on page 338 shows the parameter conventions for a user-defined scalar function that is written as a C++ subprogram that receives two parameters and returns one result. This example demonstrates that you must use an extern C modifier to indicate that you want the C++ subprogram to receive parameters according to the C linkage convention. This modifier is necessary because the CEEPIPI CALL_SUB interface, which DB2 uses to call the user-defined function, passes parameters using the C linkage convention.
#pragma runopts(plist(os)) #include <stdlib.h> #include <stdio.h> #include <sqludf.h> extern "C" void myfunc(long *parm1, char parm2[11], char result[11], short *f_ind1, short *f_ind2, short *f_indr, char udf_sqlstate[6], char udf_fname[138], char udf_specname[129], char udf_msgtext[71], struct sqludf_scratchpad *udf_scratchpad, long *udf_call_type, struct sql_dbinfo *udf_dbinfo); { /***************************************************/ /* Define local copies of parameters. */ /***************************************************/ int l_p1; char l_p2[11]; short int l_ind1; short int l_ind2; char ludf_sqlstate[6]; /* SQLSTATE */ char ludf_fname[138]; /* function name */ char ludf_specname[129]; /* specific function name */ char ludf_msgtext[71] /* diagnostic message text*/ sqludf_scratchpad *ludf_scratchpad; /* scratchpad */ long *ludf_call_type; /* call type */ sqludf_dbinfo *ludf_dbinfo /* dbinfo */ /***************************************************/ /* Copy each of the parameters in the parameter */ /* list into a local variable to demonstrate */ /* how the parameters can be referenced. */ /***************************************************/ l_p1 = *parm1; strcpy(l_p2,parm2); l_ind1 = *f_ind1; l_ind1 = *f_ind2; strcpy(ludf_sqlstate,udf_sqlstate); strcpy(ludf_fname,udf_fname); strcpy(ludf_specname,udf_specname); l_udf_call_type = *udf_call_type; strcpy(ludf_msgtext,udf_msgtext); memcpy(&ludf_scratchpad,udf_scratchpad,sizeof(ludf_scratchpad)); memcpy(&ludf_dbinfo,udf_dbinfo,sizeof(ludf_dbinfo)); . . . } Figure 128. How a C++ user-defined function that is written as a subprogram receives parameters
COBOL: Figure 129 on page 339 shows the parameter conventions for a user-defined table function that is written as a main program that receives two parameters and returns two results. For a COBOL user-defined function that is a subprogram, the conventions are the same.
CBL APOST,RES,RENT IDENTIFICATION DIVISION. . . . DATA DIVISION. . . . LINKAGE SECTION. ********************************************************* * Declare each of the parameters * ********************************************************* 01 UDFPARM1 PIC S9(9) USAGE COMP. 01 UDFPARM2 PIC X(10). . . . ********************************************************* * Declare these variables for result parameters * ********************************************************* 01 UDFRESULT1 PIC X(10). 01 UDFRESULT2 PIC X(10). . . . ********************************************************* * Declare a null indicator for each parameter * ********************************************************* 01 UDF-IND1 PIC S9(4) USAGE COMP. 01 UDF-IND2 PIC S9(4) USAGE COMP. . . . ********************************************************* * Declare a null indicator for result parameter * ********************************************************* 01 UDF-RIND1 PIC S9(4) USAGE COMP. 01 UDF-RIND2 PIC S9(4) USAGE COMP. . . . ********************************************************* * Declare the SQLSTATE that can be set by the * * user-defined function * ********************************************************* 01 UDF-SQLSTATE PIC X(5). ********************************************************* * Declare the qualified function name * ********************************************************* 01 UDF-FUNC. 49 UDF-FUNC-LEN PIC 9(4) USAGE BINARY. 49 UDF-FUNC-TEXT PIC X(137). ********************************************************* * Declare the specific function name * ********************************************************* 01 UDF-SPEC. 49 UDF-SPEC-LEN PIC 9(4) USAGE BINARY. 49 UDF-SPEC-TEXT PIC X(128). ********************************************************* * Declare SQL diagnostic message token * ********************************************************* 01 UDF-DIAG. 49 UDF-DIAG-LEN PIC 9(4) USAGE BINARY. 49 UDF-DIAG-TEXT PIC X(70). Figure 129. How a COBOL user-defined function receives parameters (Part 1 of 3)
********************************************************* * Declare the scratchpad * ********************************************************* 01 UDF-SCRATCHPAD. 49 UDF-SPAD-LEN PIC 9(9) USAGE BINARY. 49 UDF-SPAD-TEXT PIC X(100). ********************************************************* * Declare the call type * ********************************************************* 01 UDF-CALL-TYPE PIC 9(9) USAGE BINARY. ********************************************************* * CONSTANTS FOR DB2-EBCODING-SCHEME. * ********************************************************* 77 SQLUDF-ASCII PIC 9(9) VALUE 1. 77 SQLUDF-EBCDIC PIC 9(9) VALUE 2. 77 SQLUDF-UNICODE PIC 9(9) VALUE 3. ********************************************************* * Structure used for DBINFO * ********************************************************* 01 SQLUDF-DBINFO. * location name length 05 DBNAMELEN PIC 9(4) USAGE BINARY. * location name 05 DBNAME PIC X(128). * authorization ID length 05 AUTHIDLEN PIC 9(4) USAGE BINARY. * authorization ID 05 AUTHID PIC X(128). * environment CCSID information 05 CODEPG PIC X(48). 05 CDPG-DB2 REDEFINES CODEPG. 10 DB2-CCSIDS OCCURS 3 TIMES. 15 DB2-SBCS PIC 9(9) USAGE BINARY. 15 DB2-DBCS PIC 9(9) USAGE BINARY. 15 DB2-MIXED PIC 9(9) USAGE BINARY. 10 ENCODING-SCHEME PIC 9(9) USAGE BINARY. 10 RESERVED PIC X(8). * other platform-specific deprecated CCSID structures not included here * schema name length 05 TBSCHEMALEN PIC 9(4) USAGE BINARY. * schema name 05 TBSCHEMA PIC X(128). * table name length 05 TBNAMELEN PIC 9(4) USAGE BINARY. * table name 05 TBNAME PIC X(128). * column name length 05 COLNAMELEN PIC 9(4) USAGE BINARY. * column name 05 COLNAME PIC X(128). * product information 05 VER-REL PIC X(8). * reserved for expansion 05 RESD0 PIC X(2). * platform type 05 PLATFORM PIC 9(9) USAGE BINARY. * number of entries in tfcolumn list array (tfcolumn, below) 05 NUMTFCOL PIC 9(4) USAGE BINARY. Figure 129. How a COBOL user-defined function receives parameters (Part 2 of 3)
* * * * *
reserved for expansion 05 RESD1 PIC X(26). tfcolumn will be allocated dynamically if TF is defined otherwise this will be a null pointer 05 TFCOLUMN USAGE IS POINTER. Application identifier 05 APPL-ID USAGE IS POINTER. reserved for expansion 05 RESD2 PIC X(20).
* PROCEDURE DIVISION USING UDFPARM1, UDFPARM2, UDFRESULT1, UDFRESULT2, UDF-IND1, UDF-IND2, UDF-RIND1, UDF-RIND2, UDF-SQLSTATE, UDF-FUNC, UDF-SPEC, UDF-DIAG, UDF-SCRATCHPAD, UDF-CALL-TYPE, SQLUDF-DBINFO. Figure 129. How a COBOL user-defined function receives parameters (Part 3 of 3)
PL/I: Figure 130 shows the parameter conventions for a user-defined scalar function that is written as a main program that receives two parameters and returns one result. For a PL/I user-defined function that is a subprogram, the conventions are the same.
*PROCESS SYSTEM(MVS); MYMAIN: PROC(UDF_PARM1, UDF_PARM2, UDF_RESULT, UDF_IND1, UDF_IND2, UDF_INDR, UDF_SQLSTATE, UDF_NAME, UDF_SPEC_NAME, UDF_DIAG_MSG, UDF_SCRATCHPAD, UDF_CALL_TYPE, UDF_DBINFO) OPTIONS(MAIN NOEXECOPS REENTRANT); DCL DCL DCL DCL DCL DCL DCL DCL DCL DCL DCL UDF_PARM1 BIN FIXED(31); /* first parameter UDF_PARM2 CHAR(10); /* second parameter UDF_RESULT CHAR(10); /* result parameter UDF_IND1 BIN FIXED(15); /* indicator for 1st parm UDF_IND2 BIN FIXED(15); /* indicator for 2nd parm UDF_INDR BIN FIXED(15); /* indicator for result UDF_SQLSTATE CHAR(5); /* SQLSTATE returned to DB2 UDF_NAME CHAR(137) VARYING; /* Qualified function name UDF_SPEC_NAME CHAR(128) VARYING; /* Specific function name UDF_DIAG_MSG CHAR(70) VARYING; /* Diagnostic string 01 UDF_SCRATCHPAD /* Scratchpad 03 UDF_SPAD_LEN BIN FIXED(31), 03 UDF_SPAD_TEXT CHAR(100); DCL UDF_CALL_TYPE BIN FIXED(31); /* Call Type DCL DBINFO PTR; /* CONSTANTS FOR DB2_ENCODING_SCHEME */ DCL SQLUDF_ASCII BIN FIXED(15) INIT(1); DCL SQLUDF_EBCDIC BIN FIXED(15) INIT(2); DCL SQLUDF_MIXED BIN FIXED(15) INIT(3); */ */ */ */ */ */ */ */ */ */ */ */
DCL 01 UDF_DBINFO BASED(DBINFO),              /* Dbinfo                 */
      03 UDF_DBINFO_LLEN     BIN FIXED(15),   /* location length        */
      03 UDF_DBINFO_LOC      CHAR(128),       /* location name          */
      03 UDF_DBINFO_ALEN     BIN FIXED(15),   /* auth ID length         */
      03 UDF_DBINFO_AUTH     CHAR(128),       /* authorization ID       */
      03 UDF_DBINFO_CDPG,                     /* environment CCSID info */
         05 DB2_CCSIDS(3),
            07 R1            BIN FIXED(15),   /* Reserved               */
            07 DB2_SBCS      BIN FIXED(15),   /* SBCS CCSID             */
            07 R2            BIN FIXED(15),   /* Reserved               */
            07 DB2_DBCS      BIN FIXED(15),   /* DBCS CCSID             */
            07 R3            BIN FIXED(15),   /* Reserved               */
            07 DB2_MIXED     BIN FIXED(15),   /* MIXED CCSID            */
         05 DB2_ENCODING_SCHEME BIN FIXED(31),
         05 DB2_CCSID_RESERVED  CHAR(8),
      03 UDF_DBINFO_SLEN     BIN FIXED(15),   /* schema length          */
      03 UDF_DBINFO_SCHEMA   CHAR(128),       /* schema name            */
      03 UDF_DBINFO_TLEN     BIN FIXED(15),   /* table length           */
      03 UDF_DBINFO_TABLE    CHAR(128),       /* table name             */
      03 UDF_DBINFO_CLEN     BIN FIXED(15),   /* column length          */
      03 UDF_DBINFO_COLUMN   CHAR(128),       /* column name            */
      03 UDF_DBINFO_RELVER   CHAR(8),         /* DB2 release level      */
      03 UDF_DBINFO_RESERV0  CHAR(2),         /* reserved               */
      03 UDF_DBINFO_PLATFORM BIN FIXED(31),   /* database platform      */
      03 UDF_DBINFO_NUMTFCOL BIN FIXED(15),   /* # of TF columns used   */
      03 UDF_DBINFO_RESERV1  CHAR(26),        /* reserved               */
      03 UDF_DBINFO_TFCOLUMN PTR,             /* -> TFcolumn list       */
      03 UDF_DBINFO_APPLID   PTR,             /* -> application id      */
      03 UDF_DBINFO_RESERV2  CHAR(20);        /* reserved               */
   .
   .
   .
Figure 130. How a PL/I user-defined function receives parameters (Part 2 of 2)
Table 41. Characteristics of special registers in a user-defined function (continued)

Special register | Initial value when INHERIT SPECIAL REGISTERS option is specified | Initial value when DEFAULT SPECIAL REGISTERS option is specified | Function can use SET to modify?
CURRENT DATE | New value for each SQL statement in the user-defined function package (note 2) | New value for each SQL statement in the user-defined function package (note 2) | Not applicable (note 5)
CURRENT DEGREE | Inherited from invoking application (note 3) | The value of field CURRENT DEGREE on installation panel DSNTIP8 | Yes
CURRENT LOCALE LC_CTYPE | Inherited from invoking application | The value of field LOCALE LC_CTYPE on installation panel DSNTIPF | Yes
CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION | Inherited from invoking application | The value of field CURRENT MAINT TYPES on installation panel DSNTIP8 | Yes
CURRENT MEMBER | New value for each SET host-variable=CURRENT MEMBER statement | New value for each SET host-variable=CURRENT MEMBER statement | No
CURRENT OPTIMIZATION HINT | The value of bind option OPTHINT for the user-defined function package or inherited from invoking application (note 6) | The value of bind option OPTHINT for the user-defined function package | Yes
CURRENT PACKAGESET | Inherited from invoking application (note 4) | Inherited from invoking application (note 4) | Yes
CURRENT PACKAGE PATH | Inherited from invoking application (note 9) | Inherited from invoking application (note 9) | Yes
CURRENT PATH | The value of bind option PATH for the user-defined function package or inherited from invoking application (note 6) | The value of bind option PATH for the user-defined function package | Yes
CURRENT PRECISION | Inherited from invoking application | The value of field DECIMAL ARITHMETIC on installation panel DSNTIP4 | Yes
CURRENT REFRESH AGE | Inherited from invoking application | The value of field CURRENT REFRESH AGE on installation panel DSNTIP8 | Yes
CURRENT RULES | Inherited from invoking application | The value of bind option SQLRULES for the user-defined function package | Yes
CURRENT SCHEMA | Inherited from invoking application | The value of CURRENT SQLID when the user-defined function is entered | Yes
CURRENT SERVER | Inherited from invoking application | Inherited from invoking application | No
CURRENT SQLID | The primary authorization ID of the application process or inherited from invoking application (note 7) | The primary authorization ID of the application process | Yes (note 8)
CURRENT TIME | New value for each SQL statement in the user-defined function package (note 2) | New value for each SQL statement in the user-defined function package (note 2) | Not applicable (note 5)
CURRENT TIMESTAMP | New value for each SQL statement in the user-defined function package (note 2) | New value for each SQL statement in the user-defined function package (note 2) | Not applicable (note 5)
CURRENT TIMEZONE | Inherited from invoking application | Inherited from invoking application | Not applicable (note 5)
ENCRYPTION PASSWORD | Inherited from invoking application | A string of 0 length | Yes
USER | Primary authorization ID of the application process | Primary authorization ID of the application process | Not applicable (note 5)
Notes:
1. If the ENCODING bind option is not specified, the initial value is the value that was specified in field APPLICATION ENCODING of installation panel DSNTIPF.
2. If the user-defined function is invoked within the scope of a trigger, DB2 uses the timestamp for the triggering SQL statement as the timestamp for all SQL statements in the function package.
3. DB2 allows parallelism at only one level of a nested SQL statement. If you set the value of the CURRENT DEGREE special register to ANY, and parallelism is disabled, DB2 ignores the CURRENT DEGREE value.
4. If the user-defined function definer specifies a value for COLLID in the CREATE FUNCTION statement, DB2 sets CURRENT PACKAGESET to the value of COLLID.
5. Not applicable because no SET statement exists for the special register.
6. If a program within the scope of the invoking application issues a SET statement for the special register before the user-defined function is invoked, the special register inherits the value from the SET statement. Otherwise, the special register contains the value that is set by the bind option for the user-defined function package.
7. If a program within the scope of the invoking application issues a SET CURRENT SQLID statement before the user-defined function is invoked, the special register inherits the value from the SET statement. Otherwise, CURRENT SQLID contains the authorization ID of the application process.
8. If the user-defined function package uses a value other than RUN for the DYNAMICRULES bind option, the SET CURRENT SQLID statement can be executed, but it does not affect the authorization ID that is used for the dynamic SQL statements in the user-defined function package. The DYNAMICRULES value determines the authorization ID that is used for dynamic SQL statements. See Using DYNAMICRULES to specify behavior of dynamic SQL statements on page 502 for more information about DYNAMICRULES values and authorization IDs.
9. If the user-defined function definer specifies a value for COLLID in the CREATE FUNCTION statement, DB2 sets CURRENT PACKAGE PATH to an empty string.
Figure 131 demonstrates how to enter information in a scratchpad for a user-defined function defined like this:
CREATE FUNCTION COUNTER()
  RETURNS INT
  SCRATCHPAD
  FENCED
  NOT DETERMINISTIC
  NO SQL
  NO EXTERNAL ACTION
  LANGUAGE C
  PARAMETER STYLE SQL
  EXTERNAL NAME UDFCTR;
The scratchpad length is not specified, so the scratchpad has the default length of 100 bytes, plus 4 bytes for the length field. The user-defined function increments an integer value and stores it in the scratchpad on each execution.
#pragma linkage(ctr,fetchable)
#include <stdlib.h>
#include <stdio.h>
/* Structure scr defines the passed scratchpad for function ctr */
struct scr {
  long len;
  long countr;
  char not_used[96];
};
/***************************************************************/
/* Function ctr: Increments a counter and reports the value    */
/*               from the scratchpad.                          */
/*                                                             */
/* Input: None                                                 */
/* Output: INTEGER out     the value from the scratchpad       */
/***************************************************************/
void ctr(
  long *out,               /* Output answer (counter)          */
  short *outnull,          /* Output null indicator            */
  char *sqlstate,          /* SQLSTATE                         */
  char *funcname,          /* Function name                    */
  char *specname,          /* Specific function name           */
  char *mesgtext,          /* Message text insert              */
  struct scr *scratchptr)  /* Scratchpad                       */
{
  *out = ++scratchptr->countr;   /* Increment counter and      */
                                 /* copy to output variable    */
  *outnull = 0;                  /* Set output null indicator  */
  return;
}  /* end of user-defined function ctr */
Figure 131. Example of coding a scratchpad in a user-defined function
The five basic steps to accessing transition tables in a user-defined function are:
1. Declare input parameters to receive table locators. You must define each parameter that receives a table locator as an unsigned 4-byte integer.
2. Declare table locators. You can declare table locators in assembler, C, C++, COBOL, PL/I, and in an SQL procedure compound statement. The syntax for declaring table locators in C, C++, COBOL, and PL/I is described in Chapter 9, Embedding SQL statements in host languages, on page 143. The syntax for declaring table locators in an SQL procedure is described in Chapter 6 of DB2 SQL Reference.
3. Declare a cursor to access the rows in each transition table.
4. Assign the input parameter values to the table locators.
5. Access rows from the transition tables using the cursors that are declared for the transition tables.
The following examples show how a user-defined function that is written in C, C++, COBOL, or PL/I accesses a transition table for a trigger. The transition table, NEWEMPS, contains modified rows of the employee sample table. The trigger is defined like this:
CREATE TRIGGER EMPRAISE
  AFTER UPDATE ON EMP
  REFERENCING NEW TABLE AS NEWEMPS
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    VALUES (CHECKEMP(TABLE NEWEMPS));
  END;
Assembler: Figure 132 on page 347 shows how an assembler program accesses rows of transition table NEWEMPS.
CHECKEMP CSECT
         SAVE  (14,12)              ANY SAVE SEQUENCE
         LR    R12,R15              CODE ADDRESSABILITY
         USING CHECKEMP,R12         TELL THE ASSEMBLER
         LR    R7,R1                SAVE THE PARM POINTER
         USING PARMAREA,R7          SET ADDRESSABILITY FOR PARMS
         USING SQLDSECT,R8          ESTABLISH ADDRESSIBILITY TO SQLDSECT
         L     R6,PROGSIZE          GET SPACE FOR USER PROGRAM
         GETMAIN R,LV=(6)           GET STORAGE FOR PROGRAM VARIABLES
         LR    R10,R1               POINT TO THE ACQUIRED STORAGE
         LR    R2,R10               POINT TO THE FIELD
         LR    R3,R6                GET ITS LENGTH
         SR    R4,R4                CLEAR THE INPUT ADDRESS
         SR    R5,R5                CLEAR THE INPUT LENGTH
         MVCL  R2,R4                CLEAR OUT THE FIELD
         ST    R13,FOUR(R10)        CHAIN THE SAVEAREA PTRS
         ST    R10,EIGHT(R13)       CHAIN SAVEAREA FORWARD
         LR    R13,R10              POINT TO THE SAVEAREA
         USING PROGAREA,R13         SET ADDRESSABILITY
         ST    R6,GETLENTH          SAVE THE LENGTH OF THE GETMAIN
         .
         .
         .
************************************************************
* Declare table locator host variable TRIGTBL              *
************************************************************
TRIGTBL  SQL TYPE IS TABLE LIKE EMP AS LOCATOR
************************************************************
* Declare a cursor to retrieve rows from the transition    *
* table                                                    *
************************************************************
         EXEC SQL DECLARE C1 CURSOR FOR                                X
               SELECT LASTNAME FROM TABLE(:TRIGTBL LIKE EMP)           X
               WHERE SALARY > 100000
************************************************************
* Copy table locator for trigger transition table          *
************************************************************
         L     R2,TABLOC            GET ADDRESS OF LOCATOR
         L     R2,0(0,R2)           GET LOCATOR VALUE
         ST    R2,TRIGTBL
         EXEC SQL OPEN C1
         EXEC SQL FETCH C1 INTO :NAME
         .
         .
         .
         EXEC SQL CLOSE C1
Figure 132. How an assembler user-defined function accesses a transition table (Part 1 of 2)

(Part 2 of the figure defines the program's working storage: this routine's save area and the GETMAIN length for this area; the declarations themselves are not reproduced here.)
Figure 132. How an assembler user-defined function accesses a transition table (Part 2 of 2)
C or C++: Figure 133 shows how a C or C++ program accesses rows of transition table NEWEMPS.
int CHECK_EMP(int trig_tbl_id)
{
   .
   .
   .
   /**********************************************************/
   /* Declare table locator host variable trig_tbl_id        */
   /**********************************************************/
   EXEC SQL BEGIN DECLARE SECTION;
   SQL TYPE IS TABLE LIKE EMP AS LOCATOR trig_tbl_id;
   char name[25];
   EXEC SQL END DECLARE SECTION;
   .
   .
   .
   /**********************************************************/
   /* Declare a cursor to retrieve rows from the transition  */
   /* table                                                  */
   /**********************************************************/
   EXEC SQL DECLARE C1 CURSOR FOR
      SELECT NAME FROM TABLE(:trig_tbl_id LIKE EMPLOYEE)
      WHERE SALARY > 100000;
   /**********************************************************/
   /* Fetch a row from transition table                      */
   /**********************************************************/
   EXEC SQL OPEN C1;
   EXEC SQL FETCH C1 INTO :name;
   .
   .
   .
   EXEC SQL CLOSE C1;
   .
   .
   .
}
Figure 133. How a C or C++ user-defined function accesses a transition table
COBOL: Figure 134 on page 349 shows how a COBOL program accesses rows of transition table NEWEMPS.
IDENTIFICATION DIVISION.
PROGRAM-ID. CHECKEMP.
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 NAME PIC X(24).
    .
    .
    .
LINKAGE SECTION.
*********************************************************
* Declare table locator host variable TRIG-TBL-ID       *
*********************************************************
01 TRIG-TBL-ID SQL TYPE IS TABLE LIKE EMP AS LOCATOR.
    .
    .
    .
PROCEDURE DIVISION USING TRIG-TBL-ID.
    .
    .
    .
*********************************************************
* Declare cursor to retrieve rows from transition table *
*********************************************************
    EXEC SQL DECLARE C1 CURSOR FOR
      SELECT NAME FROM TABLE(:TRIG-TBL-ID LIKE EMP)
      WHERE SALARY > 100000
    END-EXEC.
*********************************************************
* Fetch a row from transition table                     *
*********************************************************
    EXEC SQL OPEN C1 END-EXEC.
    EXEC SQL FETCH C1 INTO :NAME END-EXEC.
    .
    .
    .
    EXEC SQL CLOSE C1 END-EXEC.
    .
    .
    .
PROG-END.
    GOBACK.
Figure 134. How a COBOL user-defined function accesses a transition table
PL/I: Figure 135 on page 350 shows how a PL/I program accesses rows of transition table NEWEMPS.
CHECK_EMP: PROC(TRIG_TBL_ID)
           RETURNS(BIN FIXED(31))
           OPTIONS(MAIN NOEXECOPS REENTRANT);
  /****************************************************/
  /* Declare table locator host variable TRIG_TBL_ID  */
  /****************************************************/
  DECLARE TRIG_TBL_ID SQL TYPE IS TABLE LIKE EMP AS LOCATOR;
  DECLARE NAME CHAR(24);
  .
  .
  .
  /****************************************************/
  /* Declare a cursor to retrieve rows from the       */
  /* transition table                                 */
  /****************************************************/
  EXEC SQL DECLARE C1 CURSOR FOR
    SELECT NAME FROM TABLE(:TRIG_TBL_ID LIKE EMP)
    WHERE SALARY > 100000;
  /****************************************************/
  /* Retrieve rows from the transition table          */
  /****************************************************/
  EXEC SQL OPEN C1;
  EXEC SQL FETCH C1 INTO :NAME;
  .
  .
  .
  EXEC SQL CLOSE C1;
  .
  .
  .
END CHECK_EMP;
Figure 135. How a PL/I user-defined function accesses a transition table
Causes all programs contained in the external user-defined function to execute with AMODE(31) and RMODE(ANY). The definer can list these options as values of the RUN OPTIONS parameter of CREATE FUNCTION, or the system administrator can establish these options as defaults during Language Environment installation. For example, the RUN OPTIONS parameter could contain:
H(,,ANY),STAC(,,ANY,),STO(,,,4K),BE(4K,,),LIBS(4K,,),ALL31(ON)
v Ask the system administrator to set the NUMTCB parameter for WLM-established stored procedures address spaces to a value greater than 1. This lets more than one TCB run in an address space. Be aware that setting NUMTCB to a value greater than 1 also reduces your level of application program isolation. For example, a bad pointer in one application can overwrite memory that is allocated by another application.
Using the Debug Tool interactively: To test a user-defined function interactively using the Debug Tool, you must have the Debug Tool installed on the z/OS system where the user-defined function runs. To debug your user-defined function using the Debug Tool, do the following:
1. Compile the user-defined function with the TEST option. This places information in the program that the Debug Tool uses.
2. Invoke the Debug Tool. One way to do that is to specify the Language Environment run-time TEST option. The TEST option controls when and how the Debug Tool is invoked. The most convenient place to specify run-time options is with the RUN OPTIONS parameter of CREATE FUNCTION or ALTER FUNCTION. See Components of a user-defined function definition on page 314 for more information about the RUN OPTIONS parameter. For example, suppose that you code this option:
TEST(ALL,*,PROMPT,JBJONES%SESSNA:)
The parameter values cause the following things to happen:
ALL
   The Debug Tool gains control when an attention interrupt, abend, or program or Language Environment condition of Severity 1 and above occurs.
*
   Debug commands will be entered from the terminal.
PROMPT
   The Debug Tool is invoked immediately after Language Environment initialization.
JBJONES%SESSNA:
   The Debug Tool initiates a session on a workstation identified to APPC as JBJONES with a session ID of SESSNA.
3. If you want to save the output from your debugging session, issue a command that names a log file. For example, the following command starts logging to a file on the workstation called dbgtool.log:
SET LOG ON FILE dbgtool.log;
This should be the first command that you enter from the terminal or include in your commands file.

Using the Debug Tool in batch mode: To test your user-defined function in batch mode, you must have the Debug Tool installed on the z/OS system where the user-defined function runs. To debug your user-defined function in batch mode using the Debug Tool, do the following:
1. If you plan to use the Language Environment run-time TEST option to invoke the Debug Tool, compile the user-defined function with the TEST option. This places information in the program that the Debug Tool uses during a debugging session.
2. Allocate a log data set to receive the output from the Debug Tool. Put a DD statement for the log data set in the startup procedure for the stored procedures address space.
3. Enter commands in a data set that you want the Debug Tool to execute. Put a DD statement for that data set in the startup procedure for the stored procedures address space. To define the data set that contains the commands to the Debug Tool, specify its data set name or DD name in the TEST run-time option. For example, this option tells the Debug Tool to look for the commands in the data set that is associated with DD name TESTDD:
TEST(ALL,TESTDD,PROMPT,*)
In the commands data set, the first command should be a SET LOG command that directs output from your debugging session to the log data set that you defined in step 2. For example, if you defined a log data set with DD name INSPLOG in the start-up procedure for the stored procedures address space, the first command should be:
SET LOG ON FILE INSPLOG;
4. Invoke the Debug Tool. The following are two possible methods for invoking the Debug Tool:
   v Specify the Language Environment run-time TEST option. The most convenient place to do that is in the RUN OPTIONS parameter of CREATE FUNCTION or ALTER FUNCTION.
   v Put CEETEST calls in the user-defined function source code. If you use this approach for an existing user-defined function, you must compile, link-edit, and bind the user-defined function again. Then you must issue the STOP FUNCTION SPECIFIC and START FUNCTION SPECIFIC commands to reload the user-defined function.
   You can combine the Language Environment run-time TEST option with CEETEST calls. For example, you might want to use TEST to name the commands data set but use CEETEST calls to control when the Debug Tool takes control.
For more information about the Debug Tool, see Debug Tool User's Guide and Reference.

Route debugging messages to SYSPRINT: You can include simple print statements in your user-defined function code that you route to SYSPRINT. Then use System Display and Search Facility (SDSF) to examine the SYSPRINT contents while the WLM-established stored procedure address space is running. You can serialize I/O by running the WLM-established stored procedure address space with NUMTCB=1.

Driver applications: You can write a small driver application that calls the user-defined function as a subprogram and passes the parameter list for the user-defined function. You can then test and debug the user-defined function as a normal DB2 application under TSO, using TSO TEST and other commonly used debugging tools.

Using SQL INSERT statements: You can use SQL to insert debugging information into a DB2 table. This allows other machines in the network (such as a workstation) to easily access the data in the table by using DRDA access. DB2 discards the debugging information if the application executes the ROLLBACK statement. To prevent the loss of the debugging data, code the calling application so that it retrieves the diagnostic data before executing the ROLLBACK statement.
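For example, the following is a minimal sketch. It assumes a debugging table that you create yourself and a user-defined function that is defined with MODIFIES SQL DATA; the table, column, and function names are illustrative only:

CREATE TABLE UDFDEBUG
  (LOG_TIME TIMESTAMP,
   UDF_NAME VARCHAR(128),
   MSG      VARCHAR(250));

-- Issued from inside the user-defined function:
INSERT INTO UDFDEBUG
  VALUES (CURRENT TIMESTAMP, 'CVRTNUM', 'Entered function; input parameter validated');

A workstation application can then read the table through DRDA access, for example with SELECT * FROM UDFDEBUG ORDER BY LOG_TIME.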
Use the syntax shown in Figure 137 on page 356 when you invoke a table function:
Figure 137. Syntax for invoking a table function (syntax diagram not reproduced). The general form is TABLE(function-name(...)) AS correlation-name (column-name, ...).
See Chapter 2 of DB2 SQL Reference for more information about the syntax of user-defined function invocation.
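For example, an invocation of a hypothetical table function FINDEMPS, which is assumed to take a department number and return employee rows, might look like this; the function name, host variable, and column names are illustrative only:

SELECT T.EMPNO, T.LASTNAME
  FROM TABLE(FINDEMPS(:deptno)) AS T(EMPNO, LASTNAME)
  WHERE T.LASTNAME LIKE 'S%';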
Then DB2 chooses function SCHEMA2.X. If two or more candidates fit the unqualified function invocation equally well because the function invocation contains parameter markers, DB2 issues an error. The remainder of this section discusses details of the function resolution process and gives suggestions on how you can ensure that DB2 picks the right function.
of a query that contains a function invocation, a built-in function is a candidate for function resolution only if the release dependency marker of the built-in function is the same as or lower than the release dependency marker of the plan or package that contains the function invocation. To determine whether a data type is promotable to another data type, see Table 42. The first column lists data types in function invocations. The second column lists data types to which the types in the first column can be promoted, in order from best fit to worst fit. For example, suppose that in this statement, the data type of A is SMALLINT:
SELECT USER1.ADDTWO(A) FROM TABLEA;
Two instances of USER1.ADDTWO are defined: one with an input parameter of type INTEGER and one with an input parameter of type DECIMAL. Both function instances are candidates for execution because the SMALLINT type is promotable to either INTEGER or DECIMAL. However, the instance with the INTEGER type is a better fit because INTEGER is higher in the list than DECIMAL.
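For illustration, the two candidate instances might have been created with statements like these; the external names and the DECIMAL precision and scale are assumptions, not part of the original example:

CREATE FUNCTION USER1.ADDTWO(INTEGER) RETURNS INTEGER
  EXTERNAL NAME 'ADDTWOI' LANGUAGE C PARAMETER STYLE SQL NO SQL;

CREATE FUNCTION USER1.ADDTWO(DECIMAL(15,2)) RETURNS DECIMAL(15,2)
  EXTERNAL NAME 'ADDTWOD' LANGUAGE C PARAMETER STYLE SQL NO SQL;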
Table 42. Promotion of data types

Data type in function invocation | Possible fits (in best-to-worst order)
SMALLINT | SMALLINT, INTEGER, DECIMAL, REAL, DOUBLE
INTEGER | INTEGER, DECIMAL, REAL, DOUBLE
DECIMAL | DECIMAL, REAL, DOUBLE
REAL | REAL, DOUBLE
DOUBLE | DOUBLE
CHAR or GRAPHIC | CHAR or GRAPHIC, VARCHAR or VARGRAPHIC, CLOB or DBCLOB
VARCHAR or VARGRAPHIC | VARCHAR or VARGRAPHIC, CLOB or DBCLOB
CLOB or DBCLOB | CLOB or DBCLOB
BLOB | BLOB
DATE | DATE
TIME | TIME
TIMESTAMP | TIMESTAMP
ROWID | ROWID
Distinct type | Distinct type with same name
Notes to Table 42:
1. This promotion also applies if the parameter type in the invocation is a LOB locator for a LOB with this data type.
2. The FLOAT type with a length of less than 22 is equivalent to REAL.
3. The FLOAT type with a length of greater than or equal to 22 is equivalent to DOUBLE.
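As a worked example of choosing between candidate definitions, suppose that a function named FUNC is invoked as follows; this invocation is an illustrative reconstruction, and the table name T1 is assumed:

SELECT FUNC(VCHARCOL, SMINTCOL, DECCOL) FROM T1;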
In user-defined function FUNC, VCHARCOL has data type VARCHAR, SMINTCOL has data type SMALLINT, and DECCOL has data type DECIMAL. Also suppose that two function instances with the following definitions meet the criteria in How DB2 chooses candidate functions on page 357 and are therefore candidates for execution.
Candidate 1:

  CREATE FUNCTION FUNC(VARCHAR(20),INTEGER,DOUBLE)
    RETURNS DECIMAL(9,2)
    EXTERNAL NAME FUNC1
    PARAMETER STYLE SQL
    LANGUAGE COBOL;

Candidate 2:

  CREATE FUNCTION FUNC(VARCHAR(20),REAL,DOUBLE)
    RETURNS DECIMAL(9,2)
    EXTERNAL NAME FUNC2
    PARAMETER STYLE SQL
    LANGUAGE COBOL;
DB2 compares the data type of the first parameter in the user-defined function invocation to the data types of the first parameters in the candidate functions. Because the first parameter in the invocation has data type VARCHAR, and both candidate functions also have data type VARCHAR, DB2 cannot determine the better candidate based on the first parameter. Therefore, DB2 compares the data types of the second parameters. The data type of the second parameter in the invocation is SMALLINT. INTEGER, which is the data type of candidate 1, is a better fit to SMALLINT than REAL, which is the data type of candidate 2. Therefore, candidate 1 is the DB2 choice for execution.
v Avoid defining user-defined function numeric parameters as SMALLINT or REAL. Use INTEGER or DOUBLE instead. An invocation of a user-defined function defined with parameters of type SMALLINT or REAL must use parameters of the same types. For example, if user-defined function FUNC is defined with a parameter of type SMALLINT, only an invocation with a parameter of type SMALLINT resolves correctly. An invocation like this does not resolve to FUNC because the constant 123 is of type INTEGER, not SMALLINT:
SELECT FUNC(123) FROM T1;
v Avoid defining user-defined function string parameters with fixed-length string types. If you define a parameter with a fixed-length string type (CHAR or GRAPHIC), you can invoke the user-defined function only with a fixed-length string parameter. However, if you define the parameter with a varying-length string type (VARCHAR or VARGRAPHIC), you can invoke the user-defined function with either a fixed-length string parameter or a varying-length string parameter. If you must define parameters for a user-defined function as CHAR, and you call the user-defined function from a C program or SQL procedure, you need to cast the corresponding parameter values in the user-defined function invocation to CHAR to ensure that DB2 invokes the correct function. For example, suppose that a C program calls user-defined function CVRTNUM, which takes one input parameter of type CHAR(6). Also suppose that you declare host variable empnumbr as char empnumbr[6]. When you invoke CVRTNUM, cast empnumbr to CHAR:
UPDATE EMP SET EMPNO=CVRTNUM(CHAR(:empnumbr)) WHERE EMPNO = :empnumbr;
Columns QUERYNO, QBLOCKNO, APPLNAME, PROGNAME, COLLID, and GROUP_MEMBER have the same meanings as in the PLAN_TABLE. See Chapter 27, Using EXPLAIN to improve SQL performance, on page 789 for explanations of those columns. The meanings of the other columns are:
EXPLAIN_TIME
   Timestamp when the EXPLAIN statement was executed.
SCHEMA_NAME
   Schema name of the function that is invoked in the explained statement.
FUNCTION_NAME
   Name of the function that is invoked in the explained statement.
SPEC_FUNC_NAME
   Specific name of the function that is invoked in the explained statement.
FUNCTION_TYPE
   The type of function that is invoked in the explained statement. Possible values are:
   SU   Scalar function
   TU   Table function
VIEW_CREATOR
   The creator of the view, if the function that is specified in the FUNCTION_NAME column is referenced in a view definition. Otherwise, this field is blank.
VIEW_NAME
   The name of the view, if the function that is specified in the FUNCTION_NAME column is referenced in a view definition. Otherwise, this field is blank.
PATH
   The value of the SQL path when DB2 resolved the function reference.
FUNCTION_TEXT
   The text of the function reference (the function name and parameters). If the function reference exceeds 1500 bytes, this column contains the first 1500 bytes. For a function specified in infix notation, FUNCTION_TEXT contains only the function name. For example, suppose a user-defined function named "/" is used in the function reference A/B. Then FUNCTION_TEXT contains only "/", not A/B.
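For example, after you create DSN_FUNCTION_TABLE under your authorization ID, you can populate it with an EXPLAIN statement and then query it. The query number and the explained statement below are illustrative only:

EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT USER1.ADDTWO(A) FROM TABLEA;

SELECT SCHEMA_NAME, FUNCTION_NAME, SPEC_FUNC_NAME, FUNCTION_TYPE, PATH
  FROM DSN_FUNCTION_TABLE
  WHERE QUERYNO = 100;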
Sourced user-defined function TAXFN2, which is sourced on TAXFN1, is defined like this:
CREATE FUNCTION TAXFN2(DEC(8,2)) RETURNS DEC(5,0) SOURCE TAXFN1;
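The statement whose processing is traced below might, for example, be an UPDATE of the following general form; the table name TB2 is an assumption, while SALESTAX2 and PRICE2 come from the surrounding discussion:

UPDATE TB2 SET SALESTAX2 = TAXFN2(PRICE2);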
Now suppose that PRICE2 has the DECIMAL(9,2) value 0001234.56. DB2 must first assign this value to the data type of the input parameter in the definition of TAXFN2, which is DECIMAL(8,2). The input parameter value then becomes 001234.56. Next, DB2 casts the parameter value to a source function parameter,
which is DECIMAL(6,0). The parameter value then becomes 001234. (When you cast a value, that value is truncated, rather than rounded.) Now, if TAXFN1 returns the DECIMAL(5,2) value 123.45, DB2 casts the value to DECIMAL(5,0), which is the result type for TAXFN2, and the value becomes 00123. This is the value that DB2 assigns to column SALESTAX2 in the UPDATE statement. Casting of parameter markers: You can use untyped parameter markers in a function invocation. However, DB2 cannot compare the data types of untyped parameter markers to the data types of candidate functions. Therefore, DB2 might find more than one function that qualifies for invocation. If this happens, an SQL error occurs. To ensure that DB2 picks the right function to execute, cast the parameter markers in your function invocation to the data types of the parameters in the function that you want to execute. For example, suppose that two versions of function FX exist. One version of FX is defined with a parameter of type of DECIMAL(9,2), and the other is defined with a parameter of type INTEGER. You want to invoke FX with a parameter marker, and you want DB2 to execute the version of FX that has a DECIMAL(9,2) parameter. You need to cast the parameter marker to a DECIMAL(9,2) type:
SELECT FX(CAST(? AS DECIMAL(9,2))) FROM T1;
Trigger TR1 is defined on table T3:

  CREATE TRIGGER TR1
    AFTER UPDATE ON T3
    FOR EACH STATEMENT MODE DB2SQL
    BEGIN ATOMIC
      CALL SP3(PARM1);
    END

Program P1 (nesting level 1) contains:
  SELECT UDF1(C1) FROM T1;
UDF1 (nesting level 2) contains:
  CALL SP2(C2);
SP2 (nesting level 3) contains:
  UPDATE T3 SET C3=1;
SP3 (nesting level 4) contains:
  SELECT UDF4(C4) FROM T4;
  .
  .
  .
SP16 (nesting level 16) cannot invoke stored procedures or user-defined functions

Figure 138. Nested SQL statements
DB2 has the following restrictions on nested SQL statements: v Restrictions for SELECT statements: When you execute a SELECT statement on a table, you cannot execute INSERT, UPDATE, or DELETE statements on the same table at a lower level of nesting. For example, suppose that you execute this SQL statement at level 1 of nesting:
SELECT UDF1(C1) FROM T1;
v Restrictions for INSERT, UPDATE, and DELETE statements: When you execute an INSERT, DELETE, or UPDATE statement on a table, you cannot access that table from a user-defined function or stored procedure that is at a lower level of nesting. For example, suppose that you execute this SQL statement at level 1 of nesting:
DELETE FROM T1 WHERE UDF3(T1.C1) = 3;
The preceding restrictions do not apply to SQL statements that are executed at a lower level of nesting as a result of an after trigger. For example, suppose an UPDATE statement at nesting level 1 activates an after update trigger, which calls a stored procedure. The stored procedure executes two SQL statements that reference the triggering table: one SELECT statement and one INSERT statement. In this situation, both the SELECT and the INSERT statements can be executed even though they are at nesting level 3. Although trigger activations count in the levels of SQL statement nesting, the previous restrictions on SQL statements do not apply to SQL statements that are executed in the trigger body. Example: Suppose that trigger TR1 is defined on table T1:
CREATE TRIGGER TR1 AFTER INSERT ON T1 FOR EACH STATEMENT MODE DB2SQL BEGIN ATOMIC UPDATE T1 SET C1=1; END
Now suppose that you execute this SQL statement at level 1 of nesting:
INSERT INTO T1 VALUES(...);
Although the UPDATE statement in the trigger body is at level 2 of nesting and modifies the same table that the triggering statement updates, DB2 can execute the INSERT statement successfully.
COUNTER is a user-defined function that increments a variable in the scratchpad each time it is invoked. DB2 invokes an instance of COUNTER in the predicate 3 times. Assume that COUNTER is invoked for row 1 first, for row 2 second, and for row 3 third. Then COUNTER returns 1 for row 1, 2 for row 2, and 3 for row 3. Therefore, row 2 satisfies the predicate WHERE COUNTER()=2, so DB2 evaluates the SELECT list for row 2. DB2 uses a different instance of COUNTER in the select list from the instance in the predicate. Because the instance of COUNTER in the select list is invoked only once, it returns a value of 1. Therefore, the result of the query is:
COUNTER()   C1   C2
---------   --   --
        1    2   c
This is not the result you might expect. The results can differ even more, depending on the order in which DB2 retrieves the rows from the table. Suppose that an ascending index is defined on column C2. Then DB2 retrieves row 3 first, row 1 second, and row 2 third. This means that
row 1 satisfies the predicate WHERE COUNTER()=2. The value of COUNTER in the select list is again 1, so the result of the query in this case is:
COUNTER()   C1   C2
---------   --   --
        1    1   b
Understand the interaction between scrollable cursors and nondeterministic user-defined functions or user-defined functions with external actions: When you use a scrollable cursor, you might retrieve the same row multiple times while the cursor is open. If the select list of the cursor's SELECT statement contains a user-defined function, that user-defined function is executed each time you retrieve a row. Therefore, if the user-defined function has an external action, and you retrieve the same row multiple times, the external action is executed multiple times for that row.

A similar situation occurs with scrollable cursors and nondeterministic functions. The result of a nondeterministic user-defined function can be different each time you execute the user-defined function. If the select list of a scrollable cursor contains a nondeterministic user-defined function, and you use that cursor to retrieve the same row multiple times, the results can differ each time you retrieve the row.

A nondeterministic user-defined function in the predicate of a scrollable cursor's SELECT statement does not change the result of the predicate while the cursor is open. DB2 evaluates a user-defined function in the predicate only once while the cursor is open.
For more information on LOB data, see Chapter 14, Programming for large objects, on page 297. After you define distinct types and columns of those types, you can use those data types in the same way you use built-in types. You can use the data types in assignments, comparisons, function invocations, and stored procedure calls. However, when you assign one column value to another or compare two column values, those values must be of the same distinct type. For example, you must assign a column value of type VIDEO to a column of type VIDEO, and you can compare a column value of type AUDIO only to a column of type AUDIO. When you assign a host variable value to a column with a distinct type, you can use any host data type that is compatible with the source data type of the distinct type. For example, to receive an AUDIO or VIDEO value, you can define a host variable like this:
SQL TYPE IS BLOB (1M) HVAV;
When you use a distinct type as an argument to a function, a version of that function that accepts that distinct type must exist. For example, if function SIZE takes a BLOB type as input, you cannot automatically use a value of type AUDIO as input. However, you can create a sourced user-defined function that takes the AUDIO type as input. For example:
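Assuming that the AUDIO distinct type is based on BLOB(1M) and that the existing SIZE function accepts BLOB(1M) and returns INTEGER, such a sourced function could be defined like this (a sketch, not the original statement):

CREATE FUNCTION SIZE(AUDIO)
  RETURNS INTEGER
  SOURCE SIZE(BLOB(1M));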
If a conversion function is defined that takes an input parameter of type US_DOLLAR as input, DB2 returns an error if you try to execute the function with an input parameter of type JAPANESE_YEN.
SELECT PRODUCT_ITEM FROM US_SALES WHERE TOTAL > US_DOLLAR(100000.00) AND MONTH = 7 AND YEAR = 2003;
The casting satisfies the requirement that the compared data types are identical. You cannot use host variables in statements that you prepare for dynamic execution. As explained in Using parameter markers with PREPARE and EXECUTE on page 606, you can substitute parameter markers for host variables when you prepare a statement, and then use host variables when you execute the statement. If you use a parameter marker in a predicate of a query, and the column to which you compare the value represented by the parameter marker is of a distinct type, you must cast the parameter marker to the distinct type, or cast the column to its source type. For example, suppose that distinct type CNUM is defined like this:
CREATE DISTINCT TYPE CNUM AS INTEGER;
In an application program, you prepare a SELECT statement that compares the CUST_NUM column to a parameter marker. Because CUST_NUM is of a distinct type, you must cast the distinct type to its source type:
SELECT FIRST_NAME, LAST_NAME, PHONE_NUM FROM CUSTOMER WHERE CAST(CUST_NUM AS INTEGER) = ?
Alternatively, you can cast the parameter marker to the distinct type:
SELECT FIRST_NAME, LAST_NAME, PHONE_NUM FROM CUSTOMER WHERE CUST_NUM = CAST (? AS CNUM)
You need to insert values from the TOTAL column in JAPAN_SALES into the TOTAL column of JAPAN_SALES_03. Because INSERT statements follow assignment rules, DB2 does not let you insert the values directly from one column to the other because the columns are of different distinct types. Suppose that a user-defined function called US_DOLLAR has been written that accepts values of type JAPANESE_YEN as input and returns values of type US_DOLLAR. You can then use this function to insert values into the JAPAN_SALES_03 table:
INSERT INTO JAPAN_SALES_03 SELECT PRODUCT_ITEM, US_DOLLAR(TOTAL) FROM JAPAN_SALES WHERE YEAR = 2003;
Because the result type of both US_DOLLAR functions is US_DOLLAR, you have satisfied the requirement that the distinct types of the combined columns are the same.
The HOUR function takes only the TIME or TIMESTAMP data type as an argument, so you need a sourced function that is based on the HOUR function that accepts the FLIGHT_TIME data type. You might declare a function like this:
CREATE FUNCTION HOUR(FLIGHT_TIME) RETURNS INTEGER SOURCE SYSIBM.HOUR(TIME);
Example: Casting function arguments to acceptable types: Another way you can invoke the HOUR function is to cast the argument of type FLIGHT_TIME to the TIME data type before you invoke the HOUR function. Suppose table FLIGHT_INFO contains column DEPARTURE_TIME, which has data type FLIGHT_TIME, and you want to use the HOUR function to extract the hour of departure from the departure time. You can cast DEPARTURE_TIME to the TIME data type, and then invoke the HOUR function:
SELECT HOUR(CAST(DEPARTURE_TIME AS TIME)) FROM FLIGHT_INFO;
Example: Using an infix operator with distinct type arguments: Suppose you want to add two values of type US_DOLLAR. Before you can do this, you must define a version of the + function that accepts values of type US_DOLLAR as operands:
CREATE FUNCTION "+"(US_DOLLAR,US_DOLLAR) RETURNS US_DOLLAR SOURCE SYSIBM."+"(DECIMAL(9,2),DECIMAL(9,2));
Because the US_DOLLAR type is based on the DECIMAL(9,2) type, the source function must be the version of + with arguments of type DECIMAL(9,2). Example: Casting constants and host variables to distinct types to invoke a user-defined function: Suppose function EURO_TO_US is defined like this:
CREATE FUNCTION EURO_TO_US(EURO) RETURNS US_DOLLAR EXTERNAL NAME CDNCVT PARAMETER STYLE SQL LANGUAGE C;
This means that EURO_TO_US accepts only the EURO type as input. Therefore, if you want to call EURO_TO_US with a constant or host variable argument, you must cast that argument to distinct type EURO:
SELECT * FROM US_SALES WHERE TOTAL = EURO_TO_US(EURO(:H1)); SELECT * FROM US_SALES WHERE TOTAL = EURO_TO_US(EURO(10000));
define it as a distinct type so that you can control the types of operations that are performed on the electronic mail. The distinct type is defined like this:
CREATE DISTINCT TYPE E_MAIL AS CLOB(5M);
You have also defined and written user-defined functions to search for and return the following information about an electronic mail document:
v Subject
v Sender
v Date sent
v Message content
v Indicator of whether the document contains a user-specified string
The user-defined function definitions look like this:
CREATE FUNCTION SUBJECT(E_MAIL) RETURNS VARCHAR(200)
  EXTERNAL NAME SUBJECT LANGUAGE C PARAMETER STYLE SQL
  NO SQL DETERMINISTIC NO EXTERNAL ACTION;

CREATE FUNCTION SENDER(E_MAIL) RETURNS VARCHAR(200)
  EXTERNAL NAME SENDER LANGUAGE C PARAMETER STYLE SQL
  NO SQL DETERMINISTIC NO EXTERNAL ACTION;

CREATE FUNCTION SENDING_DATE(E_MAIL) RETURNS DATE
  EXTERNAL NAME SENDDATE LANGUAGE C PARAMETER STYLE SQL
  NO SQL DETERMINISTIC NO EXTERNAL ACTION;

CREATE FUNCTION CONTENTS(E_MAIL) RETURNS CLOB(1M)
  EXTERNAL NAME CONTENTS LANGUAGE C PARAMETER STYLE SQL
  NO SQL DETERMINISTIC NO EXTERNAL ACTION;

CREATE FUNCTION CONTAINS(E_MAIL, VARCHAR(200)) RETURNS INTEGER
  EXTERNAL NAME CONTAINS LANGUAGE C PARAMETER STYLE SQL
  NO SQL DETERMINISTIC NO EXTERNAL ACTION;
The table that contains the electronic mail documents is defined like this:
CREATE TABLE DOCUMENTS
  (LAST_UPDATE_TIME TIMESTAMP,
   DOC_ROWID        ROWID NOT NULL GENERATED ALWAYS,
   A_DOCUMENT       E_MAIL);
Because the table contains a column with a source data type of CLOB, the table requires an associated LOB table space, auxiliary table, and index on the auxiliary table. Use statements like this to define the LOB table space, the auxiliary table, and the index:
CREATE LOB TABLESPACE DOCTSLOB
  LOG YES GBPCACHE SYSTEM;

CREATE AUX TABLE DOCAUX_TABLE IN DOCTSLOB
  STORES DOCUMENTS COLUMN A_DOCUMENT;

CREATE INDEX A_IX_DOC ON DOCAUX_TABLE;
To populate the document table, you write code that executes an INSERT statement to put the first part of a document in the table, and then executes multiple UPDATE statements to concatenate the remaining parts of the document. For example:
EXEC SQL BEGIN DECLARE SECTION;
  char hv_current_time[26];
  SQL TYPE IS CLOB (1M) hv_doc;
EXEC SQL END DECLARE SECTION;
/* Determine the current time and put this value  */
/* into host variable hv_current_time.            */
/* Read up to 1 MB of document data from a file   */
/* into host variable hv_doc.                     */
   .
   .
   .
/* Insert the time value and the first 1 MB of    */
/* document data into the table.                  */
EXEC SQL INSERT INTO DOCUMENTS
   VALUES(:hv_current_time, DEFAULT, E_MAIL(:hv_doc));
/* Although there is more document data in the    */
/* file, read up to 1 MB more of data, and then   */
/* use an UPDATE statement like this one to       */
/* concatenate the data in the host variable      */
/* to the existing data in the table.             */
EXEC SQL UPDATE DOCUMENTS
   SET A_DOCUMENT = A_DOCUMENT || E_MAIL(:hv_doc)
   WHERE LAST_UPDATE_TIME = :hv_current_time;
Now that the data is in the table, you can execute queries to learn more about the documents. For example, you can execute this query to determine which documents contain the word 'performance':
SELECT SENDER(A_DOCUMENT), SENDING_DATE(A_DOCUMENT), SUBJECT(A_DOCUMENT)
  FROM DOCUMENTS
  WHERE CONTAINS(A_DOCUMENT, 'performance') = 1;
Because the electronic mail documents can be very large, you might want to use LOB locators to manipulate the document data instead of fetching all of a document into a host variable. You can use a LOB locator on any distinct type that is defined on one of the LOB types. The following example shows how you can cast a LOB locator as a distinct type, and then use the result in a user-defined function that takes a distinct type as an argument:
EXEC SQL BEGIN DECLARE SECTION;
  long hv_len;
  char hv_subject[200];
  SQL TYPE IS CLOB_LOCATOR hv_email_locator;
EXEC SQL END DECLARE SECTION;
   .
   .
   .
/* Select a document into a CLOB locator.        */
EXEC SQL SELECT A_DOCUMENT, SUBJECT(A_DOCUMENT)
   INTO :hv_email_locator, :hv_subject
   FROM DOCUMENTS
   WHERE LAST_UPDATE_TIME = :hv_current_time;
   .
   .
   .
/* Extract the subject from the document. The    */
/* SUBJECT function takes an argument of type    */
/* E_MAIL, so cast the CLOB locator as E_MAIL.   */
EXEC SQL SET :hv_subject =
   SUBJECT(CAST(:hv_email_locator AS E_MAIL));
   .
   .
   .
Figure 140 on page 383 illustrates the program preparation process when you use the DB2 coprocessor. The process is similar to the process used with the DB2 precompiler, except that the DB2 coprocessor does not create modified source for your application program.
Planning to bind
Depending on how you design your DB2 application, you might bind all your DBRMs in one operation, creating only a single application plan. Alternatively, you might bind some or all of your DBRMs into separate packages in separate operations. After that, you must still bind the entire application as a single plan, listing the included packages or collections and binding any DBRMs that are not already bound into packages. Regardless of what the plan contains, you must bind a plan before the application can run.

Binding or rebinding a package or plan in use: Packages and plans are locked when you bind or run them. Packages that run under a plan are not locked until the plan uses them. If you run a plan and some packages in the package list never run, those packages are never locked. You cannot bind or rebind a package or a plan while it is running. However, you can bind a different version of a package that is running.

Options for binding and rebinding: Several of the options of BIND PACKAGE and BIND PLAN can affect your program design. For example, you can use a bind option to ensure that a package or plan can run only from a particular CICS connection or a particular IMS region; you do not need to enforce this in your code. Several other options are discussed at length in later chapters, particularly the ones that affect your program's use of locks, such as the ISOLATION option. Before you finish reading this chapter, you might want to review those options in Chapter 2 of DB2 Command Reference.

Preliminary steps: Before you bind, consider the following:
v Determine how you want to bind the DBRMs. You can bind them into packages or directly into plans, or you can use a combination of both methods.
v Develop a naming convention and strategy for the most effective and efficient use of your plans and packages.
v Determine when your application should acquire locks on the objects it uses: on all objects when the plan is first allocated, or on each object in turn when that object is first used. For a description of the consequences of these choices, see The ACQUIRE and RELEASE options on page 408.
Binding a plan that includes only a package list makes maintenance easier when the application changes significantly over time.
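For example, a hypothetical application might bind two DBRMs into packages in a collection and then bind a plan whose package list names that collection; the plan, collection, and DBRM names are illustrative only:

BIND PACKAGE(ACCTCOLL) MEMBER(ACCTPGM1) ACTION(REPLACE)
BIND PACKAGE(ACCTCOLL) MEMBER(ACCTPGM2) ACTION(REPLACE)
BIND PLAN(ACCTPLAN) PKLIST(ACCTCOLL.*) ACTION(REPLACE)

Because the package list names the collection with an asterisk, packages that are later added to collection ACCTCOLL become available to the plan without binding the plan again.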
Advantages of packages
You must decide how to use packages based on your application design and your operational objectives. The following are advantages of using packages:

Ease of maintenance: When you use packages, you do not need to bind the entire plan again when you change one SQL statement. You need to bind only the package that is associated with the changed SQL statement.

Incremental development of your program: Binding packages into package collections allows you to add packages to an existing application plan without having to bind the entire plan again. A collection is a group of associated packages. If you include a collection name in the package list when you bind a plan, any package in the collection becomes available to the plan. The collection can even be empty when you first bind the plan. Later, you can add packages to the collection, and drop or replace existing packages, without binding the plan again.

Versioning: Maintaining several versions of a plan without using packages requires a separate plan for each version, and therefore separate plan names and RUN commands. Isolating separate versions of a program into packages requires only one plan and helps to simplify program migration and fallback. For example, you can maintain separate development, test, and production levels of a program by binding each level of the program as a separate version of a package, all within a single plan.

Flexibility in using bind options: The options of BIND PLAN apply to all DBRMs that are bound directly to the plan. The options of BIND PACKAGE apply only to the single DBRM that is bound to that package. The package options need not all be the same as the plan options, and they need not be the same as the options for other packages that are used by the same plan.

Flexibility in using name qualifiers: You can use a bind option to name a qualifier for the unqualified object names in SQL statements in a plan or package. By using packages, you can use different qualifiers for SQL statements in different parts of your application. By rebinding, you can redirect your SQL statements, for example, from a test table to a production table.
CICS With packages, you probably do not need dynamic plan selection and its accompanying exit routine. A package that is listed within a plan is not accessed until it is executed. However, you can use dynamic plan selection and packages together, which can reduce the number of plans in an application and the effort to maintain the dynamic plan exit routine. See Using packages with dynamic plan selection on page 507 for information about using packages with dynamic plan selection.
Table 43. Changes requiring BIND or REBIND (continued)

Change made | Minimum action necessary
Add an index to a table | Issue REBIND for the package or plan to use the index.
Change bind options | Issue REBIND for the package or plan, or issue BIND with ACTION(REPLACE) if the option you want is not available on REBIND.
Change statements in host language and SQL statements | Precompile, compile, and link the application program. Issue BIND with ACTION(REPLACE) for the package or plan.
Dropping objects
If you drop an object that a package depends on, the package might become invalid for the following reasons: v If the package is not appended to any running plan, the package becomes invalid. v If the package is appended to a running plan, and the drop occurs within that plan, the package becomes invalid. However, if the package is appended to a running plan, and the drop occurs outside of that plan, the object is not dropped, and the package does not become invalid. In all cases, the plan does not become invalid until it has a DBRM that references the dropped object. If the package or plan becomes invalid, automatic rebind occurs the next time the package or plan is allocated.
Rebinding a package
Table 44 clarifies which packages are bound, depending on how you specify collection-id (coll-id), package-id (pkg-id), and version-id (ver-id) on the REBIND PACKAGE subcommand. For syntax and descriptions of this subcommand, see Part 3 of DB2 Command Reference. REBIND PACKAGE does not apply to packages for which you do not have the BIND privilege. An asterisk (*) used as an identifier for collections, packages, or versions does not apply to packages at remote sites.
Table 44. Behavior of REBIND PACKAGE specification. "All" means all collections, packages, or versions at the local DB2 server for which the authorization ID that issues the command has the BIND privilege.

Input                    Collections affected   Packages affected   Versions affected
*                        all                    all                 all
*.*.(*)                  all                    all                 all
*.*                      all                    all                 all
*.*.(ver-id)             all                    all                 ver-id
*.*.()                   all                    all                 empty string
coll-id.*                coll-id                all                 all
coll-id.*.(*)            coll-id                all                 all
coll-id.*.(ver-id)       coll-id                all                 ver-id
coll-id.*.()             coll-id                all                 empty string
Table 44. Behavior of REBIND PACKAGE specification (continued). "All" means all collections, packages, or versions at the local DB2 server for which the authorization ID that issues the command has the BIND privilege.

Input                    Collections affected   Packages affected   Versions affected
coll-id.pkg-id.(*)       coll-id                pkg-id              all
coll-id.pkg-id           coll-id                pkg-id              empty string
coll-id.pkg-id.()        coll-id                pkg-id              empty string
coll-id.pkg-id.(ver-id)  coll-id                pkg-id              ver-id
*.pkg-id.(*)             all                    pkg-id              all
*.pkg-id                 all                    pkg-id              empty string
*.pkg-id.()              all                    pkg-id              empty string
*.pkg-id.(ver-id)        all                    pkg-id              ver-id
Example: The following example shows the options for rebinding a package at the remote location. The location name is SNTERSA. The collection is GROUP1, the package ID is PROGA, and the version ID is V1. The connection types shown in the REBIND subcommand replace connection types that are specified on the original BIND subcommand. For information about the REBIND subcommand options, see DB2 Command Reference.
REBIND PACKAGE(SNTERSA.GROUP1.PROGA.(V1)) ENABLE(CICS,REMOTE)
You can use the asterisk on the REBIND subcommand for local packages, but not for packages at remote sites. Any of the following commands rebinds all versions of all packages in all collections, at the local DB2 system, for which you have the BIND privilege.
REBIND PACKAGE (*)
REBIND PACKAGE (*.*)
REBIND PACKAGE (*.*.(*))
Either of the following commands rebinds all versions of all packages in the local collection LEDGER for which you have the BIND privilege.
REBIND PACKAGE (LEDGER.*)
REBIND PACKAGE (LEDGER.*.(*))
Either of the following commands rebinds the empty string version of the package DEBIT in all collections, at the local DB2 system, for which you have the BIND privilege.
REBIND PACKAGE (*.DEBIT)
REBIND PACKAGE (*.DEBIT.())
Rebinding a plan
When you rebind a plan, use the PKLIST keyword to replace any previously specified package list. Omit the PKLIST keyword to use the previous package list for rebinding. Use the NOPKLIST keyword to delete any package list that was specified when the plan was previously bound.
Example: Rebinds PLANA and changes the package list:
REBIND PLAN(PLANA) PKLIST(GROUP1.*) MEMBER(ABC)
Example: Rebinds the plan and drops the entire package list:
REBIND PLAN(PLANA) NOPKLIST
Automatic rebinding
Automatic rebind might occur if an authorized user invokes a plan or package when the attributes of the data on which the plan or package depends change, or if the environment in which the package executes changes. Whether the automatic rebind occurs depends on the value of the field AUTO BIND on installation panel DSNTIPO. The options used for an automatic rebind are the options used during the most recent bind process.
In most cases, DB2 marks a plan or package that needs to be automatically rebound as invalid. A few common situations in which DB2 marks a plan or package as invalid are:
v When a package is dropped
v When a plan depends on the execute privilege of a package that is dropped
v When a table, index, or view on which the plan or package depends is dropped
v When the authorization of the owner to access any of those objects is revoked
v When the authorization to execute a stored procedure is revoked from a plan or package owner, and the plan or package uses the CALL literal form of the CALL statement to call the stored procedure
v When a table on which the plan or package depends is altered to add a TIME, TIMESTAMP, or DATE column
v When a table is altered to add a self-referencing constraint or a constraint with a delete rule of SET NULL or CASCADE
v When the limit key value of a partitioned index on which the plan or package depends is altered
v When the definition of an index on which the plan or package depends is altered from NOT PADDED to PADDED
v When the AUDIT attribute of a table on which the plan or package depends is altered
v When the length attribute of a CHAR, VARCHAR, GRAPHIC, or VARGRAPHIC column in a table on which the plan or package depends is altered
v When the data type, precision, or scale of a column in a table on which the plan or package depends is altered
v When a plan or package depends on a view that DB2 cannot regenerate after a column in the underlying table is altered
v When a created temporary table on which the plan or package depends is altered to add a column
v When a user-defined function on which the plan or package depends is altered
Whether a plan or package is valid is recorded in column VALID of catalog tables SYSPLAN and SYSPACKAGE.
In the following cases, DB2 might automatically rebind a plan or package that has not been marked as invalid:
v A plan or package that is bound on a release of DB2 that is more recent than the release in which it is being run. This can happen in a data sharing environment, or it can happen after a DB2 subsystem has fallen back to a previous release of DB2.
v A plan or package that was bound prior to DB2 Version 2 Release 3. Plans and packages that are bound prior to Version 2 Release 3 are automatically rebound when they are run on the current release of DB2.
v A plan or package that has a location dependency and runs at a location other than the one at which it was bound. This can happen when members of a data sharing group are defined with location names, and a package runs on a different member from the one on which it was bound.
In the following cases, DB2 automatically rebinds a plan or package that has not been marked as invalid if the ABIND subsystem parameter is set to COEXIST:
v The subsystem on which the plan or package runs is in a data sharing group.
v The plan or package was previously bound on the current DB2 release and is now running on the previous DB2 release.
DB2 marks a plan or package as inoperative if an automatic rebind fails. Whether a plan or package is operative is recorded in column OPERATIVE of SYSPLAN and SYSPACKAGE.
Whether EXPLAIN runs during automatic rebind depends on the value of the field EXPLAIN PROCESSING on installation panel DSNTIPO, and on whether you specified EXPLAIN(YES). Automatic rebind fails for all EXPLAIN errors except "PLAN_TABLE not found."
The SQLCA is not available during automatic rebind. Therefore, if you encounter lock contention during an automatic rebind, DSNT501I messages cannot accompany any DSNT376I messages that you receive. To see the matching DSNT501I messages, you must issue the subcommand REBIND PLAN or REBIND PACKAGE.
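The VALID and OPERATIVE columns can be checked with an ordinary catalog query. For example, a query of the following form (PROGA is the hypothetical package ID used in the rebind example earlier in this chapter) shows whether each copy of a package is still valid and operative:

SELECT COLLID, NAME, VERSION, VALID, OPERATIVE
  FROM SYSIBM.SYSPACKAGE
  WHERE NAME = 'PROGA';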
before it was committed. Then, if A's value is not later committed, but backed out, B's calculations are based on uncommitted (and presumably incorrect) data.
Unrepeatable reads: Some processes require the following sequence of events: A reads a row from the database and then goes on to process other SQL requests. Later, A reads the first row again and must find the same values it read the first time. Without control, process B could have changed the row between the two read operations.
To prevent those situations from occurring unless they are specifically allowed, DB2 might use locks to control concurrency.
What do locks do? A lock associates a DB2 resource with an application process in a way that affects how other processes can access the same resource. The process associated with the resource is said to hold or own the lock. DB2 uses locks to ensure that no process accesses data that has been changed, but not yet committed, by another process.
What do you do about locks? To preserve data integrity, your application process acquires locks implicitly, that is, under DB2 control. It is not necessary for a process to request a lock explicitly to conceal uncommitted data. Therefore, sometimes you need not do anything about DB2 locks. Nevertheless, processes acquire, or avoid acquiring, locks based on certain general parameters. You can make better use of your resources and improve concurrency by understanding the effects of those parameters.
Suspension
Definition: An application process is suspended when it requests a lock that is already held by another application process and cannot be shared. The suspended process temporarily stops running.
Order of precedence for lock requests: Incoming lock requests are queued. Requests for lock promotion, and requests for a lock by an application process that already holds a lock on the same object, precede requests for locks by new applications. Within those groups, the request order is first in, first out.
Example: Using an application for inventory control, two users attempt to reduce the quantity on hand of the same item at the same time. The two lock requests are queued. The second request in the queue is suspended and waits until the first request releases its lock.
Effects: The suspended process resumes running when:
v All processes that hold the conflicting lock release it.
v The requesting process times out or deadlocks and the process resumes to deal with an error condition.
Timeout
Definition: An application process is said to time out when it is terminated because it has been suspended for longer than a preset interval.
Example: An application process attempts to update a large table space that is being reorganized by the utility REORG TABLESPACE with SHRLEVEL NONE. It is likely that the utility job will not release control of the table space before the application process times out.
Effects: DB2 terminates the process, issues two messages to the console, and returns SQLCODE -911 or -913 to the process (SQLSTATE '40001' or '57033'). Reason code 00C9008E is returned in the SQLERRD(3) field of the SQLCA. Alternatively, you can use the GET DIAGNOSTICS statement to check the reason code. If statistics trace class 3 is active, DB2 writes a trace record with IFCID 0196.
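For example, after receiving SQLCODE -911 or -913, a program might retrieve the reason code with a statement of the following form (a minimal sketch; the host variable name HVREASON is hypothetical, and the sketch assumes the DB2_REASON_CODE condition information item):

GET DIAGNOSTICS CONDITION 1 :HVREASON = DB2_REASON_CODE;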
IMS
If you are using IMS, and a timeout occurs, the following actions take place:
v In a DL/I batch application, the application process abnormally terminates with a completion code of 04E and a reason code of 00D44033 or 00D44050.
v In any IMS environment except DL/I batch:
  - DB2 performs a rollback operation on behalf of your application process to undo all DB2 updates that occurred during the current unit of work.
  - For a non-message-driven BMP, IMS issues a rollback operation on behalf of your application. If this operation is successful, IMS returns control to your application, and the application receives SQLCODE -911. If the operation is unsuccessful, IMS issues user abend code 0777, and the application does not receive an SQLCODE.
  - For an MPP, IFP, or message-driven BMP, IMS issues user abend code 0777, rolls back all uncommitted changes, and reschedules the transaction. The application does not receive an SQLCODE.
COMMIT and ROLLBACK operations do not time out. The command STOP DATABASE, however, may time out and send messages to the console, but it will retry up to 15 times.
Deadlock
Definition: A deadlock occurs when two or more application processes each hold locks on resources that the others need and without which they cannot proceed. Example: Figure 141 on page 396 illustrates a deadlock between two transactions.
Figure 141. A deadlock example
Notes:
1. Jobs EMPLJCHG and PROJNCHG are two transactions. Job EMPLJCHG accesses table M, and acquires an exclusive lock for page B, which contains record 000300.
2. Job PROJNCHG accesses table N, and acquires an exclusive lock for page A, which contains record 000010.
3. Job EMPLJCHG requests a lock for page A of table N while still holding the lock on page B of table M. The job is suspended, because job PROJNCHG is holding an exclusive lock on page A.
4. Job PROJNCHG requests a lock for page B of table M while still holding the lock on page A of table N. The job is suspended, because job EMPLJCHG is holding an exclusive lock on page B.
The situation is a deadlock.
Effects: After a preset time interval (the value of DEADLOCK TIME), DB2 can roll back the current unit of work for one of the processes or request a process to terminate. That frees the locks and allows the remaining processes to continue. If statistics trace class 3 is active, DB2 writes a trace record with IFCID 0172. Reason code 00C90088 is returned in the SQLERRD(3) field of the SQLCA. Alternatively, you can use the GET DIAGNOSTICS statement to check the reason code. (The codes that describe DB2's exact response depend on the operating environment; for details, see Part 5 of DB2 Application Programming and SQL Guide.)
It is possible for two processes to be running on distributed DB2 subsystems, each trying to access a resource at the other location. In that case, neither subsystem can detect that the two processes are in deadlock; the situation resolves only when one process times out.
Indications of deadlocks: In some cases, a deadlock can occur if two application processes attempt to update data in the same page or table space.
TSO, Batch, and CAF
When a deadlock or timeout occurs in these environments, DB2 attempts to roll back the SQL for one of the application processes. If the ROLLBACK is successful, that application receives SQLCODE -911. If the ROLLBACK fails, and the application does not abend, the application receives SQLCODE -913.
IMS
If you are using IMS, and a deadlock occurs, the following actions take place:
v In a DL/I batch application, the application process abnormally terminates with a completion code of 04E and a reason code of 00D44033 or 00D44050.
v In any IMS environment except DL/I batch:
  - DB2 performs a rollback operation on behalf of your application process to undo all DB2 updates that occurred during the current unit of work.
  - For a non-message-driven BMP, IMS issues a rollback operation on behalf of your application. If this operation is successful, IMS returns control to your application, and the application receives SQLCODE -911. If the operation is unsuccessful, IMS issues user abend code 0777, and the application does not receive an SQLCODE.
  - For an MPP, IFP, or message-driven BMP, IMS issues user abend code 0777, rolls back all uncommitted changes, and reschedules the transaction. The application does not receive an SQLCODE.
CICS
If you are using CICS and a deadlock occurs, the CICS attachment facility decides whether or not to roll back one of the application processes, based on the value of the ROLBE or ROLBI parameter. If your application process is chosen for rollback, it receives one of two SQLCODEs in the SQLCA:
-911  A SYNCPOINT command with the ROLLBACK option was issued on behalf of your application process. All updates (CICS commands and DL/I calls, as well as SQL statements) that occurred during the current unit of work have been undone. (SQLSTATE '40001')
-913  A SYNCPOINT command with the ROLLBACK option was not issued. DB2 rolls back only the incomplete SQL statement that encountered the deadlock or timed out. CICS does not roll back any resources. Your application process should either issue a SYNCPOINT command with the ROLLBACK option itself or terminate. (SQLSTATE '57033')
Consider using the DSNTIAC subroutine to check the SQLCODE and display the SQLCA. Your application must take appropriate actions before resuming.
Replace a nonpartitioned index with a partitioned index only if there are perceivable benefits such as improved data or index availability, easier data or index maintenance, or improved performance. For examples of how query performance can be improved with data-partitioned secondary indexes, see Writing efficient queries on tables with data-partitioned secondary indexes on page 772.
Fewer rows of data per page: By using the MAXROWS clause of CREATE or ALTER TABLESPACE, you can specify the maximum number of rows that can be on a page. For example, if you use MAXROWS 1, each row occupies a whole page, and you confine a page lock to a single row. Consider this option if you have a reason to avoid using row locking, such as in a data sharing environment where row locking overhead can be greater.
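For example, a statement of the following form (the database and table space names are hypothetical) limits each page to a single row:

ALTER TABLESPACE DBACCT.TSORDER MAXROWS 1;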
Consider volatile tables to ensure index access: If multiple applications access the same table, consider defining the table as VOLATILE. DB2 uses index access whenever possible for volatile tables, even if index access does not appear to be the most efficient access method because of volatile statistics. Because each application generally accesses the rows in the table in the same order, lock contention can be reduced.
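A minimal sketch of defining such a table at creation time, with hypothetical table and column names:

CREATE TABLE WORKQ
  (ITEM_NO  INTEGER NOT NULL,
   STATUS   CHAR(1) NOT NULL)
  VOLATILE;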
Even though an application might conform to the commit frequency standards of the installation under normal operational conditions, variation can occur based on system workload fluctuations. For example, a low-priority application might issue a commit frequently on a system that is lightly loaded. However, under a heavy system load, the use of the CPU by the application may be preempted, and, as a result, the application may violate the rule set by the UR CHECK FREQ parameter. For this reason, add logic to your application to commit based on time elapsed since the last commit, and not solely based on the amount of SQL processing performed. In addition, take frequent commit points in a long-running unit of work that is read-only to reduce lock contention and to provide opportunities for utilities, such as online REORG, to access the data.
Retry an application after deadlock or timeout: Include logic in a batch program so that it retries an operation after a deadlock or timeout. Such a method could help you recover from the situation without assistance from operations personnel. Field SQLERRD(3) in the SQLCA returns a reason code that indicates whether a deadlock or timeout occurred. Alternatively, you can use the GET DIAGNOSTICS statement to check the reason code.
Close cursors: If you define a cursor using the WITH HOLD option, the locks it needs can be held past a commit point. Use the CLOSE CURSOR statement as soon as possible in your program to cause those locks to be released and the resources they hold to be freed at the first commit point that follows the CLOSE CURSOR statement. Whether page or row locks are held for WITH HOLD cursors is controlled by the RELEASE LOCKS parameter on installation panel DSNTIP4. Closing cursors is particularly important in a distributed environment.
Free locators: If you have executed the HOLD LOCATOR statement, the LOB locator holds locks on LOBs past commit points. Use the FREE LOCATOR statement to release these locks.
Bind plans with ACQUIRE(USE): ACQUIRE(USE), which indicates that DB2 acquires table and table space locks when the objects are first used and not when the plan is allocated, is the best choice for concurrency. Packages are always bound with ACQUIRE(USE), by default. ACQUIRE(ALLOCATE) can provide better protection against timeouts. Consider ACQUIRE(ALLOCATE) for applications that need gross locks instead of intent locks or that run with other applications that may request gross locks instead of intent locks. Acquiring the locks at plan allocation also prevents any one transaction in the application from incurring the cost of acquiring the table and table space locks. If you need ACQUIRE(ALLOCATE), you might want to bind all DBRMs directly to the plan. For information about intent and gross locks, see The mode of a lock on page 404.
Bind with ISOLATION(CS) and CURRENTDATA(NO) typically: ISOLATION(CS) lets DB2 release acquired row and page locks as soon as possible. CURRENTDATA(NO) lets DB2 avoid acquiring row and page locks as often as possible. After that, in order of decreasing preference for concurrency, use these bind options:
1. ISOLATION(CS) with CURRENTDATA(YES), when data returned to the application must not be changed before your next FETCH operation.
2. ISOLATION(RS), when data returned to the application must not be changed before your application commits or rolls back. However, you do not care if other application processes insert additional rows.
3. ISOLATION(RR), when data evaluated as the result of a query must not be changed before your application commits or rolls back. New rows cannot be inserted into the answer set.
For more information about the ISOLATION option, see The ISOLATION option on page 412.
For updatable static scrollable cursors, ISOLATION(CS) provides the additional advantage of letting DB2 use optimistic concurrency control to further reduce the amount of time that locks are held. With optimistic concurrency control, DB2 releases the row or page locks on the base table after it materializes the result table in a temporary global table. DB2 also releases the row lock after each FETCH, taking a new lock on a row only for a positioned update or delete to ensure data integrity. For more information about optimistic concurrency control, see Advantages and disadvantages of the isolation values on page 412.
For updatable dynamic scrollable cursors and ISOLATION(CS), DB2 holds row or page locks on the base table (DB2 does not use a temporary global table). The most recently fetched row or page from the base table remains locked to maintain data integrity for a positioned update or delete.
Use ISOLATION(UR) cautiously: UR isolation acquires almost no locks on rows or pages. It is fast and causes little contention, but it reads uncommitted data. Do not use it unless you are sure that your applications and end users can accept the logical inconsistencies that can occur.
For information on how to make an agent part of a global transaction for RRSAF applications, see Chapter 31, Programming for the Resource Recovery Services attachment facility, on page 893.
Use sequence objects to generate unique, sequential numbers: Using an identity column is one way to generate unique sequential numbers. However, as a column of a table, an identity column is associated with and tied to the table, and a table can have only one identity column. Your applications might need to use one sequence of unique numbers for many tables or several sequences for each table. As a user-defined object, sequences provide a way for applications to have DB2 generate unique numeric key values and to coordinate the keys across multiple rows and tables. The use of sequences can avoid the lock contention problems that can result when applications implement their own sequences, such as in a one-row table that contains a sequence number that each transaction must increment. With DB2 sequences, many users can access and increment the sequence concurrently without waiting. DB2 does not wait for a transaction that has incremented a sequence to commit before allowing another transaction to increment the sequence again. (An example appears after the list of global transaction restrictions below.)
Examine multi-row operations: In an application, multi-row inserts, positioned updates, and positioned deletes have the potential of expanding the unit of work. This can affect the concurrency of other users accessing the data. Minimize contention by adjusting the size of the host-variable array, committing between inserts and updates, and preventing lock escalation.
Use global transactions: The Resource Recovery Services attachment facility (RRSAF) relies on a z/OS component called Resource Recovery Services (RRS). RRS provides system-wide services for coordinating two-phase commit operations
across z/OS products. For RRSAF applications and IMS transactions that run under RRS, you can group together a number of DB2 agents into a single global transaction. A global transaction allows multiple DB2 agents to participate in a single global transaction and thus share the same locks and access the same data. When two agents that are in a global transaction access the same DB2 object within a unit of work, those agents do not deadlock with each other. The following restrictions apply:
v There is no Parallel Sysplex support for global transactions.
v Because the branches of a global transaction share locks, uncommitted updates issued by one branch of the transaction are visible to other branches of the transaction.
v Claim/drain processing is not supported across the branches of a global transaction, which means that attempts to issue CREATE, DROP, ALTER, GRANT, or REVOKE may deadlock or time out if they are requested from different branches of the same global transaction.
v LOCK TABLE may deadlock or time out across the branches of a global transaction.
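The sequence recommendation earlier in this section can be shown with a minimal sketch; the sequence, table, and column names are hypothetical:

CREATE SEQUENCE ORDER_SEQ
  AS INTEGER
  START WITH 1
  INCREMENT BY 1
  CACHE 20;

INSERT INTO ORDER_LOG (ORDER_NO, STATUS)
  VALUES (NEXT VALUE FOR ORDER_SEQ, 'N');

Each transaction obtains its own value from ORDER_SEQ without waiting for other transactions that have incremented the sequence to commit.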
table can make data from other tables temporarily unavailable. That effect can be partly undone by using row locks instead of page locks.
v In a segmented table space, rows from different tables are contained in different pages. Locking a page does not lock data from more than one table. Also, DB2 can acquire a table lock, which locks only the data from one specific table. Because a single row, of course, contains data from only one table, the effect of a row lock is the same as for a simple or partitioned table space: it locks one row of data from one table.
v In a LOB table space, pages are not locked. Because there is no concept of a row in a LOB table space, rows are not locked. Instead, LOBs are locked. See LOB locks on page 425 for more information.
Effects
For maximum concurrency, locks on a small amount of data held for a short duration are better than locks on a large amount of data held for a long duration. However, acquiring a lock requires processor time, and holding a lock requires storage; thus, acquiring and holding one table space lock is more economical than acquiring and holding many page locks. Consider that trade-off to meet your performance and concurrency objectives.
Duration of partition, table, and table space locks: Partition, table, and table space locks can be acquired when a plan is first allocated, or you can delay acquiring them until the resource they lock is first used. They can be released at the next commit point or be held until the program terminates. On the other hand, LOB table space locks are always acquired when needed and released at a commit or held until the program terminates. See LOB locks on page 425 for information about locking LOBs and LOB table spaces.
Duration of page and row locks: If a page or row is locked, DB2 acquires the lock only when it is needed. When the lock is released depends on many factors, but it is rarely held beyond the next commit point. For information about controlling the duration of locks, see Bind options on page 408, which describes the ACQUIRE, RELEASE, ISOLATION, and CURRENTDATA bind options.
space locks are sometimes called gross modes. In the context of reading, SIX is a gross mode lock because you don't get page or row locks; in this sense, it is like an S lock.
Example: An SQL statement locates John Smith in a table of customer data and changes his address. The statement locks the entire table space in mode IX and the specific row that it changes in mode X.
locks and read the data, but no concurrent process can acquire a U lock. The lock owner does not need page or row locks. U locks reduce the chance of deadlocks when the lock owner is reading data to determine whether to change it. U locks are acquired on a table space when the lock size is TABLESPACE and the statement is a SELECT with a FOR UPDATE clause. Similarly, U locks are acquired on a table when the lock size is TABLE and the statement is a SELECT with a FOR UPDATE clause.
SIX (SHARE with INTENT EXCLUSIVE)
The lock owner can read and change data in the table, partition, or table space. Concurrent processes can read data in the table, partition, or table space, but not change it. Only when the lock owner changes data does it acquire page or row locks.
X (EXCLUSIVE)
The lock owner can read or change data in the table, partition, or table space. A concurrent process can access the data if the process runs with UR isolation or if data in a partitioned table space is running with CS isolation and CURRENTDATA(NO). The lock owner does not need page or row locks.
Compatibility for table space locks is slightly more complex. Table 46 on page 407 shows whether or not table space locks of any two modes are compatible.
Table 46. Compatibility of table and table space (or partition) lock modes

Lock mode   IS    IX    S     U     SIX   X
IS          Yes   Yes   Yes   Yes   Yes   No
IX          Yes   Yes   No    No    No    No
S           Yes   No    Yes   Yes   No    No
U           Yes   No    Yes   No    No    No
SIX         Yes   No    No    No    No    No
X           No    No    No    No    No    No
A query that uses index-only access might lock the data page or row, and that lock can contend with other processes that lock the data. However, using lock avoidance techniques can reduce the contention. See Lock avoidance on page 418 for more information about lock avoidance.
Bind options
These options determine when an application process acquires and releases its locks and to what extent it isolates its actions from possible effects of other processes acting concurrently. These options of bind operations are relevant to transaction locks:
v The ACQUIRE and RELEASE options
v The ISOLATION option on page 412
v The CURRENTDATA option on page 417
numbers. They can perform several searches in succession. The application is bound with the options ACQUIRE(USE) and RELEASE(DEALLOCATE), for these reasons:
v The alternative to ACQUIRE(USE), ACQUIRE(ALLOCATE), gets a lock of mode IX on the table space as soon as the application starts, because that is needed if an update occurs. But most uses of the application do not update the table and so need only the less restrictive IS lock. ACQUIRE(USE) gets the IS lock when the table is first accessed, and DB2 promotes the lock to mode IX if that is needed later.
v Most uses of this application do not update and do not commit. For those uses, there is little difference between RELEASE(COMMIT) and RELEASE(DEALLOCATE). But administrators might update several phone numbers in one session with the application, and the application commits after each update. In that case, RELEASE(COMMIT) releases a lock that DB2 must acquire again immediately. RELEASE(DEALLOCATE) holds the lock until the application ends, avoiding the processing needed to release and acquire the lock several times.
Partition locks: Partition locks follow the same rules as table space locks, and all partitions are held for the same duration. Thus, if one package is using RELEASE(COMMIT) and another is using RELEASE(DEALLOCATE), all partitions use RELEASE(DEALLOCATE).
The RELEASE option and dynamic statement caching: Generally, the RELEASE option has no effect on dynamic SQL statements, with one exception. When you use the bind options RELEASE(DEALLOCATE) and KEEPDYNAMIC(YES), and your subsystem is installed with YES for field CACHE DYNAMIC SQL on installation panel DSNTIP4, DB2 retains prepared SELECT, INSERT, UPDATE, and DELETE statements in memory past commit points. For this reason, DB2 can honor the RELEASE(DEALLOCATE) option for these dynamic statements. The locks are held until deallocation, or until the commit after the prepared statement is freed from memory, in the following situations:
v The application issues a PREPARE statement with the same statement identifier.
v The statement is removed from memory because it has not been used.
v An object that the statement is dependent on is dropped or altered, or a privilege needed by the statement is revoked.
v RUNSTATS is run against an object that the statement is dependent on.
If a lock is to be held past commit and it is an S, SIX, or X lock on a table space or a table in a segmented table space, DB2 sometimes demotes that lock to an intent lock (IX or IS) at commit. DB2 demotes a gross lock if it was acquired for one of the following reasons:
v DB2 acquired the gross lock because of lock escalation.
v The application issued a LOCK TABLE.
v The application issued a mass delete (DELETE FROM ... without a WHERE clause).
For partitioned table spaces, lock demotion occurs for each partition for which there is a lock.
Defaults: The defaults differ for different types of bind operations, as shown in Table 47 on page 410.
Table 47. Default ACQUIRE and RELEASE values for different bind options

Operation: BIND PLAN
Default values: ACQUIRE(USE) and RELEASE(COMMIT).

Operation: BIND PACKAGE
Default values: There is no option for ACQUIRE; ACQUIRE(USE) is always used. At the local server, the default for RELEASE is the value used by the plan that includes the package in its package list. At a remote server, the default is COMMIT.

Operation: REBIND PLAN or REBIND PACKAGE
Default values: The existing values for the plan or package that is being rebound.
Recommendation: Choose a combination of values for ACQUIRE and RELEASE based on the characteristics of the particular application.
The RELEASE option and DDL operations for remote requesters: When you perform DDL operations on behalf of remote requesters and RELEASE(DEALLOCATE) is in effect, be aware of the following condition. When a package that is bound with RELEASE(DEALLOCATE) accesses data at a server, it might prevent other remote requesters from performing CREATE, ALTER, DROP, GRANT, or REVOKE operations at the server.
To allow those operations to complete, you can use the command STOP DDF MODE(SUSPEND). The command suspends server threads and terminates their locks so that DDL operations from remote requesters can complete. When these operations complete, you can use the command START DDF to resume the suspended server threads. However, even after the command STOP DDF MODE(SUSPEND) completes successfully, database resources might be held if DB2 is performing any activity other than inbound DB2 processing. You might have to use the command CANCEL THREAD to terminate other processing and thereby free the database resources.
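Issued from the console, those commands take the following form (this sketch assumes the hyphen as the subsystem command prefix):

-STOP DDF MODE(SUSPEND)
-START DDF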
Acquire an exclusive lock (IX, X) on a table space or partition that is started for read access only (ACCESS(RO)), thus prohibiting access by readers.
Disadvantages: This combination reduces concurrency. It can lock resources in high demand for longer than needed. Also, the option ACQUIRE(ALLOCATE) turns off selective partition locking; if you are accessing a partitioned table space, all partitions are locked.
Restriction: This combination is not allowed for BIND PACKAGE.
Use this combination if processing efficiency is more important than concurrency. It is a good choice for batch jobs that would release table and table space locks only to reacquire them almost immediately. It might even improve concurrency, by allowing batch jobs to finish sooner. Generally, do not use this combination if your application contains many SQL statements that are often not executed.
ACQUIRE(USE) / RELEASE(DEALLOCATE): This combination results in the most efficient use of processing time in most cases.
v A table, partition, or table space used by the plan or package is locked only if it is needed while running.
v All tables or table spaces are unlocked only when the plan terminates.
v The least restrictive lock needed to execute each SQL statement is used, with the exception that if a more restrictive lock remains from a previous statement, that lock is used without change.
Disadvantages: This combination can increase the frequency of deadlocks. Because all locks are acquired in a sequence that is predictable only in an actual run, more concurrent access delays might occur.
ACQUIRE(USE) / RELEASE(COMMIT): This combination is the default combination and provides the greatest concurrency, but it requires more processing time if the application commits frequently.
v A table or table space is locked only when needed. That locking is important if the process contains many SQL statements that are rarely used or statements that are intended to access data only in certain circumstances.
v All tables and table spaces are unlocked when:
TSO, Batch, and CAF
An SQL COMMIT or ROLLBACK statement is issued, or your application process terminates.
IMS
A CHKP or SYNC call (for single-mode transactions), a GU call to the I/O PCB, or a ROLL or ROLB call is completed.
CICS
A SYNCPOINT command is issued.
Exception: If the cursor is defined WITH HOLD, table or table space locks necessary to maintain cursor position are held past the commit point. (See The effect of WITH HOLD for a cursor on page 420 for more information.)
v The least restrictive lock needed to execute each SQL statement is used except when a more restrictive lock remains from a previous statement. In that case, that lock is used without change.
Disadvantages: This combination can increase the frequency of deadlocks. Because all locks are acquired in a sequence that is predictable only in an actual run, more concurrent access delays might occur.
ACQUIRE(ALLOCATE) / RELEASE(COMMIT): This combination is not allowed; it results in an error message from BIND.
For more detailed examples, see DB2 Application Programming and SQL Guide. Recommendation: Choose an ISOLATION value based on the characteristics of the particular application.
data read by the inner SELECT can be changed by another transaction before it is read by the outer SELECT. Therefore, the information returned by this query might be from a row that is no longer the one with the maximum value for COL1.
v In another case, if your process reads a row and returns later to update it, that row might no longer exist or might not exist in the state that it did when your application process originally read it. That is, another application might have deleted or updated the row. If your application is doing non-cursor operations on a row under the cursor, make sure the application can tolerate "not found" conditions.
Similarly, assume another application updates a row after you read it. If your process returns later to update it based on the value you originally read, you are, in effect, erasing the update made by the other process. If you use ISOLATION(CS) with update, your process might need to lock out concurrent updates. One method is to declare a cursor with the FOR UPDATE clause.
General-use Programming Interface
For packages and plans that contain updatable static scrollable cursors, ISOLATION(CS) lets DB2 use optimistic concurrency control. DB2 can use optimistic concurrency control to shorten the amount of time that locks are held in the following situations:
v Between consecutive fetch operations
v Between fetch operations and subsequent positioned update or delete operations
DB2 cannot use optimistic concurrency control for dynamic scrollable cursors. With dynamic scrollable cursors, the most recently fetched row or page from the base table remains locked to maintain position for a positioned update or delete.
Figure 143 and Figure 144 on page 414 show processing of positioned update and delete operations with static scrollable cursors without optimistic concurrency control and with optimistic concurrency control.
Figure 143. Positioned updates and deletes without optimistic concurrency control
Figure 144. Positioned updates and deletes with optimistic concurrency control
Optimistic concurrency control consists of the following steps:
1. When the application requests a fetch operation to position the cursor on a row, DB2 locks that row, executes the FETCH, and releases the lock.
2. When the application requests a positioned update or delete operation on the row, DB2 performs the following steps:
   a. Locks the row.
   b. Reevaluates the predicate to ensure that the row still qualifies for the result table.
   c. For columns that are in the result table, compares current values in the row to the values of the row when step 1 was executed. Performs the positioned update or delete operation only if the values match.
End of General-use Programming Interface
ISOLATION (UR)
Allows the application to read while acquiring few locks, at the risk of reading uncommitted data. UR isolation applies only to read-only operations: SELECT, SELECT INTO, or FETCH from a read-only result table.
Reading uncommitted data introduces an element of uncertainty.
Example: An application tracks the movement of work from station to station along an assembly line. As items move from one station to another, the application subtracts from the count of items at the first station and adds to the count of items at the second. Assume you want to query the count of items at all the stations, while the application is running concurrently. What can happen if your query reads data that the application has changed but has not committed? If the application subtracts an amount from one record before adding it to another, the query could miss the amount entirely. If the application adds first and then subtracts, the query could add the amount twice.
When an application uses ISO(UR) and runs concurrently with applications that update variable-length records such that the update creates a double-overflow record, the ISO(UR) application might miss rows that are being updated.
If those situations can occur and are unacceptable, do not use UR isolation.
Restrictions: You cannot use UR isolation for the following types of statements:
v INSERT, UPDATE, and DELETE
v Any cursor defined with a FOR UPDATE clause
If you bind with ISOLATION(UR) and the statement does not specify WITH RR or WITH RS, DB2 uses CS isolation for these types of statements.
When can you use uncommitted read (UR)? You can probably use UR isolation in cases like the following ones:
v When errors cannot occur.
Example: A reference table, like a table of descriptions of parts by part number. It is rarely updated, and reading an uncommitted update is probably no more damaging than reading the table 5 seconds earlier. Go ahead and read it with ISOLATION(UR).
Example: The employee table of Spiffy Computer, our hypothetical user. For security reasons, updates can be made to the table only by members of a single department. And that department is also the only one that can query the entire table. It is easy to restrict queries to times when no updates are being made and then run with UR isolation.
v When an error is acceptable.
Example: Spiffy wants to do some statistical analysis on employee data. A typical question is, "What is the average salary by sex within education level?" Because reading an occasional uncommitted record cannot affect the averages much, UR isolation can be used.
v When the data already contains inconsistent information.
Example: Spiffy gets sales leads from various sources. The data is often inconsistent or wrong, and end users of the data are accustomed to dealing with that. Inconsistent access to a table of data on sales leads does not add to the problem.
Do not use uncommitted read (UR) in the following cases:
v When the computations must balance
v When the answer must be accurate
v When you are not sure it can do no damage
ISOLATION (RS)
Allows the application to read the same pages or rows more than once without allowing qualifying rows to be updated or deleted by another process. It offers possibly greater concurrency than repeatable read, because although other applications cannot change rows that are returned to the original application, they can insert new rows or update rows that did not satisfy the original application's search condition.
Only those rows or pages that satisfy the stage 1 predicate (and all rows or pages evaluated during stage 2 processing) are locked until the application commits. Figure 145 on page 416 illustrates this. In the example, the rows held by locks L2 and L4 satisfy the predicate.
Figure 145. How an application using RS isolation acquires locks when no lock avoidance techniques are used. Locks L2 and L4 are held until the application commits. The other locks aren't held.
Applications using read stability can leave rows or pages locked for long periods, especially in a distributed environment. If you do use read stability, plan for frequent commit points.
ISOLATION (RR)
Allows the application to read the same pages or rows more than once without allowing any UPDATE, INSERT, or DELETE by another process. All accessed rows or pages are locked, even if they do not satisfy the predicate. Figure 146 shows that all locks are held until the application commits. In the following example, the rows held by locks L2 and L4 satisfy the predicate.
Figure 146. How an application using RR isolation acquires locks. All locks are held until the application commits.
Applications that use repeatable read can leave rows or pages locked for longer periods, especially in a distributed environment, and they can claim more logical partitions than similar applications using cursor stability. They are also subject to being drained more often by utility operations. Because so many locks can be taken, lock escalation might take place. Frequent commits release the locks and can help avoid lock escalation. With repeatable read, lock promotion occurs for table space scan to prevent the insertion of rows that might qualify for the predicate. (If access is via index, DB2 locks the key range. If access is via table space scans, DB2 locks the table, partition, or table space.)
Restrictions on concurrent access: An application using UR isolation cannot run concurrently with a utility that drains all claim classes. Also, the application must acquire the following locks:
v A special mass delete lock acquired in S mode on the target table or table space. A mass delete is a DELETE statement without a WHERE clause; that operation must acquire the lock in X mode and thus cannot run concurrently.
v An IX lock on any table space used in the work file database. That lock prevents dropping the table space while the application is running.
v If LOB values are read, LOB locks and a lock on the LOB table space. If the LOB lock is not available because it is held by another application in an incompatible lock state, the UR reader skips the LOB and moves on to the next LOB that satisfies the query.
Figure 147. How an application using CS isolation with CURRENTDATA(YES) acquires locks. This figure shows access to the base table. The L2 and L4 locks are released after DB2 moves to the next row or page. When the application commits, the last lock is released.
As with work files, if a cursor uses query parallelism, data is not necessarily current with the contents of the table or index, regardless of whether a work file is used. Therefore, for work file access or for parallelism on read-only queries, the CURRENTDATA option has no effect. If you are using parallelism but want to maintain currency with the data, you have the following options:
v Disable parallelism (use SET DEGREE = 1 or bind with DEGREE(1)).
v Use isolation RR or RS (parallelism can still be used).
v Use the LOCK TABLE statement (parallelism can still be used).
For local access, CURRENTDATA(NO) is similar to CURRENTDATA(YES) except for the case where a cursor is accessing a base table rather than a result table in a work file. In those cases, although CURRENTDATA(YES) can guarantee that the cursor and the base table are current, CURRENTDATA(NO) makes no such guarantee.
Remote access: For access to a remote table or index, CURRENTDATA(YES) turns off block fetching for ambiguous cursors. The data returned with the cursor is current with the contents of the remote table or index for ambiguous cursors. See Using block fetch in distributed applications on page 458 for more information about the effect of CURRENTDATA on block fetch.
Lock avoidance: With CURRENTDATA(NO), you have much greater opportunity for avoiding locks. DB2 can test to see if a row or page has committed data on it. If it has, DB2 does not have to obtain a lock on the data at all. Unlocked data is returned to the application, and the data can be changed while the cursor is positioned on the row. (For SELECT statements in which no cursor is used, such as those that return a single row, a lock is not held on the row unless you specify WITH RS or WITH RR on the statement.)
To take the best advantage of this method of avoiding locks, make sure all applications that are accessing data concurrently issue COMMITs frequently.
Figure 148 on page 419 shows how DB2 can avoid taking locks and Table 54 on page 419 summarizes the factors that influence lock avoidance.
Figure 148. Best case of avoiding locks using CS isolation with CURRENTDATA(NO). This figure shows access to the base table. If DB2 must take a lock, then locks are released when DB2 moves to the next row or page, or when the application commits (the same as CURRENTDATA(YES)).

Table 54. Lock avoidance factors. "Returned data" means data that satisfies the predicate. "Rejected data" is data that does not satisfy the predicate.

Isolation   CURRENTDATA   Avoid locks on returned data?   Avoid locks on rejected data?
UR          N/A           N/A                             N/A
CS          YES           No                              Yes (note 1)
CS          NO            Yes                             Yes (note 1)
RS          N/A           No                              Yes (notes 1, 2)
RR          N/A           No                              No
Notes:
1. Locks are avoided when the row is disqualified after stage 1 processing.
2. When using ISO(RS) and multi-row fetch, DB2 releases locks that were acquired on stage 1 qualified rows, but which subsequently failed to qualify for stage 2 predicates, at the next fetch of the cursor.
Problems with ambiguous cursors: As shown in Table 54, ambiguous cursors can sometimes prevent DB2 from using lock avoidance techniques. However, misuse of an ambiguous cursor can cause your program to receive a -510 SQLCODE:
v The plan or package is bound with CURRENTDATA(NO)
v An OPEN CURSOR statement is performed before a dynamic DELETE WHERE CURRENT OF statement against that cursor is prepared
v One of the following conditions is true for the open cursor:
  - Lock avoidance is successfully used on that statement.
  - Query parallelism is used.
  - The cursor is distributed, and block fetching is used.
In all cases, it is a good programming technique to eliminate the ambiguity by declaring the cursor with either the FOR FETCH ONLY or the FOR UPDATE clause.
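For example, a cursor that is intended only for reading can be declared unambiguously as follows (the cursor, table, and column names are hypothetical):

DECLARE C1 CURSOR FOR
  SELECT ORDER_NO, STATUS
    FROM ORDER_LOG
    FOR FETCH ONLY;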
needed.) After the commit point, the lock is released at the next commit point, provided that no cursor is still positioned on that page or row.
If your installation specifies YES on the RELEASE LOCKS field on installation panel DSNTIP4, data page or row locks are not held past commit.
Table, table space, and DBD locks: All necessary locks are held past the commit point. After that, they are released according to the RELEASE option under which they were acquired: for COMMIT, at the next commit point after the cursor is closed; for DEALLOCATE, when the application is deallocated.
Claims: All claims, for any claim class, are held past the commit point. They are released at the next commit point after all held cursors have moved off the object or have been closed.
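Example: A statement of the following form (a sketch that assumes the Version 8 sample employee table DSN8810.EMP and its BONUS column):

SELECT MAX(BONUS), MIN(BONUS), AVG(BONUS)
  FROM DSN8810.EMP
  WITH UR;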
finds the maximum, minimum, and average bonus in the sample employee table. The statement is executed with uncommitted read isolation, regardless of the value of ISOLATION with which the plan or package containing the statement is bound.
Rules for the WITH clause: The WITH clause:
v Can be used on these statements:
  - Select-statement
  - SELECT INTO
  - Searched delete
  - INSERT from fullselect
  - Searched update
v Cannot be used on subqueries.
v Can specify the isolation levels that specifically apply to its statement. (For example, because WITH UR applies only to read-only operations, you cannot use it on an INSERT statement.)
v Overrides the isolation level for the plan or package only for the statement in which it appears.
USE AND KEEP ... LOCKS options of the WITH clause: If you use the WITH RR or WITH RS clause, you can use the USE AND KEEP EXCLUSIVE LOCKS, USE AND KEEP UPDATE LOCKS, and USE AND KEEP SHARE LOCKS options in SELECT and SELECT INTO statements.
Example: To use these options, specify them as shown in the following example:
SELECT ... WITH RS USE AND KEEP UPDATE LOCKS;
By using one of these options, you tell DB2 to acquire and hold a specific mode of lock on all the qualified pages or rows. Table 56 on page 422 shows which mode of lock is held on rows or pages when you specify the SELECT using the WITH RS or
With read stability (RS) isolation, a row or page that is rejected during stage 2 processing might still have a lock held on it, even though it is not returned to the application. With repeatable read (RR) isolation, DB2 acquires locks on all pages or rows that fall within the range of the selection expression. All locks are held until the application commits. Although this option can reduce concurrency, it can prevent some types of deadlocks and can better serialize access to data.
Executing the statement requests a lock immediately, unless a suitable lock exists already. The bind option RELEASE determines when locks acquired by LOCK TABLE or LOCK TABLE with the PART option are released. You can use LOCK TABLE on any table, including auxiliary tables of LOB table spaces. See The LOCK TABLE statement for LOBs on page 428 for information about locking auxiliary tables. LOCK TABLE has no effect on locks acquired at a remote server.
Table 57. Modes of locks acquired by LOCK TABLE (continued). LOCK TABLE on partitions behaves the same as on nonsegmented table spaces. For each form of the LOCK TABLE IN ... statement, the table shows the lock mode acquired on a nonsegmented table space, and the modes acquired on the table and on the table space for a segmented table space.

Note: The SIX lock is acquired if the process already holds an IX lock. SHARE MODE has no effect if the process already has a lock of mode SIX, U, or X.
whether your application updates or not, use either LOCK TABLE IN EXCLUSIVE MODE or LOCK TABLE IN SHARE MODE.
Access paths
The access path used can affect the mode, size, and even the object of a lock. For example, an UPDATE statement using a table space scan might need an X lock on the entire table space. If rows to be updated are located through an index, the same statement might need only an IX lock on the table space and X locks on individual pages or rows.
If you use the EXPLAIN statement to investigate the access path chosen for an SQL statement, then check the lock mode in column TSLOCKMODE of the resulting PLAN_TABLE. If the table resides in a nonsegmented table space, or is defined with LOCKSIZE TABLESPACE, the mode shown is that of the table space lock. Otherwise, the mode is that of the table lock.
Important points about DB2 locks:
v You usually do not have to lock data explicitly in your program.
v DB2 ensures that your program does not retrieve uncommitted data unless you specifically allow that.
v Any page or row where your program updates, inserts, or deletes stays locked at least until the end of a unit of work, regardless of the isolation level. No other process can access the object in any way until then, unless you specifically allow that access to that process.
v Commit often for concurrency. Determine points in your program where changed data is consistent. At those points, issue:
  IMS   A CHKP or SYNC call, or (for single-mode transactions) a GU call to the I/O PCB
  CICS  A SYNCPOINT command.
v Bind with ACQUIRE(USE) to improve concurrency.
v Set ISOLATION (usually RR, RS, or CS) when you bind the plan or package.
  - With RR (repeatable read), all accessed pages or rows are locked until the next commit point. (See Recommendations for database design on page 398 for information about cursor position locks for cursors defined WITH HOLD.)
  - With RS (read stability), all qualifying pages or rows are locked until the next commit point. (See Recommendations for application design on page 399 for information about cursor position locks for cursors defined WITH HOLD.)
  - With CS (cursor stability), only the pages or rows currently accessed can be locked, and those locks might be avoided. (You can access one page or row for each open cursor.)
v You can also set isolation for specific SQL statements, using WITH.
v A deadlock can occur if two processes each hold a resource that the other needs. One process is chosen as victim, its unit of work is rolled back, and an SQL error code is issued.
v You can lock an entire nonsegmented table space, or an entire table in a segmented table space, by the LOCK TABLE statement:
  - To let other users retrieve, but not update, delete, or insert, issue:
LOCK TABLE table-name IN SHARE MODE
To prevent other users from accessing rows in any way, except by using UR isolation, issue:
LOCK TABLE table-name IN EXCLUSIVE MODE
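The following is a minimal sketch of statement-level isolation with the WITH clause, mentioned in the list above. The cursor name, the host variable :DEPT, and the choice of UR are examples only and are not taken from this guide:

   EXEC SQL DECLARE QUICKCHK CURSOR FOR
     SELECT EMPNO, LASTNAME
       FROM DSN8810.EMP
       WHERE WORKDEPT = :DEPT
       WITH UR;

The WITH clause overrides the plan or package isolation for that one statement only.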
LOB locks
The locking activity for LOBs is described separately from transaction locks because the purpose of LOB locks is different from that of regular transaction locks. A lock that is taken on a LOB value in a LOB table space is called a LOB lock.

In this section, the following topics are described:
v Relationship between transaction locks and LOB locks
v Hierarchy of LOB locks on page 426
v LOB and LOB table space lock modes on page 426
v LOB lock and LOB table space lock duration on page 427
v Instances when LOB table space locks are not taken on page 428
v The LOCK TABLE statement for LOBs on page 428
In summary, the main purpose of LOB locks is to manage the space used by LOBs and to ensure that LOB readers do not read partially updated LOBs. Applications need to free held locators so that the space can be reused. Table 58 shows the relationship between the action that is occurring on the LOB value and the associated LOB table space and LOB locks that are acquired.
Table 58. Locks that are acquired for operations on LOBs. This table does not account for gross locks that can be taken because of LOCKSIZE TABLESPACE, the LOCK TABLE statement, or lock escalation.

Read (including UR)
   LOB table space lock: IS   LOB lock: S
   Comment: Prevents storage from being reused while the LOB is being read or while locators are referencing the LOB.

Insert
   LOB table space lock: IX   LOB lock: X
   Comment: Prevents other processes from seeing a partial LOB.

Delete
   LOB table space lock: IS   LOB lock: S
   Comment: To hold space in case the delete is rolled back. (The X is on the base table row or page.) Storage is not reusable until the delete is committed and no other readers of the LOB exist.

Update
   LOB table space lock: IS->IX   LOB lock: Two LOB locks: an S-lock for the delete and an X-lock for the insert.
   Comment: Operation is a delete followed by an insert.

Update the LOB to null or zero-length
   LOB table space lock: IS   LOB lock: S

Update a null or zero-length LOB to a value
   LOB table space lock: IX   LOB lock: X
ISOLATION(UR) or ISOLATION(CS): When an application is reading rows using uncommitted read or lock avoidance, no page or row locks are taken on the base table. Therefore, these readers must take an S LOB lock to ensure that they are not reading a partial LOB or a LOB value that is inconsistent with the base row.
more locks than a similar statement that does not involve LOB columns. To prevent system problems caused by too many locks, you can take any of the following actions (see the sketch after this list):
v Ensure that lock escalation is enabled for the LOB table spaces that are involved in the INSERT. In other words, make sure that LOCKMAX is nonzero for those LOB table spaces.
v Alter the LOB table space to change the LOCKSIZE to TABLESPACE before executing the INSERT with fullselect.
v Increase the LOCKMAX value on the table spaces involved and ensure that the user lock limit is sufficient.
v Use LOCK TABLE statements to lock the LOB table spaces. (Locking the auxiliary table that is contained in the LOB table space locks the LOB table space.)
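The following sketch shows what two of these actions might look like. The database, table space, and auxiliary table names are hypothetical, and the first two ALTER statements are alternatives rather than a sequence; which action is appropriate depends on your own objects and concurrency requirements:

   EXEC SQL ALTER TABLESPACE MYDB.LOBTS1 LOCKSIZE TABLESPACE;    /* take one gross lock per LOB table space   */
   EXEC SQL ALTER TABLESPACE MYDB.LOBTS1 LOCKMAX SYSTEM;         /* or: allow escalation up to the system limit */
   EXEC SQL LOCK TABLE MYSCHEMA.EMP_PHOTO_AUX IN EXCLUSIVE MODE; /* locking the auxiliary table locks the      */
                                                                 /* LOB table space                            */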
v You can use LOCK TABLE IN EXCLUSIVE MODE to prevent other applications from accessing LOBs. With auxiliary tables, LOCK TABLE IN EXCLUSIVE MODE also prevents access from uncommitted readers.
v Either statement eliminates the need for lower-level LOB locks.
For more information about unit of work, see Chapter 1 of DB2 SQL Reference or Part 4 (Volume 1) of DB2 Administration Guide.
A unit of work is marked as complete by a commit or synchronization (sync) point, which is defined:
v Implicitly at the end of a transaction, signalled by a CICS RETURN command at the highest logical level.
v Explicitly by CICS SYNCPOINT commands that the program issues at logically appropriate points in the transaction.
v Implicitly through a DL/I PSB termination (TERM) call or command.
v Implicitly when a batch DL/I program issues a DL/I checkpoint call. This can occur when the batch DL/I program is sharing a database with CICS applications through the database sharing facility.

Consider the inventory example, in which the quantity of items sold is subtracted from the inventory file and then added to the reorder file. When both transactions complete (and not before) and the data in the two files is consistent, the program can then issue a DL/I TERM call or a SYNCPOINT command.

If one of the steps fails, you want the data to return to the value it had before the unit of work began. That is, you want it rolled back to a previous point of consistency. You can achieve this by using the SYNCPOINT command with the ROLLBACK option.

By using a SYNCPOINT command with the ROLLBACK option, you can back out uncommitted data changes. For example, a program that updates a set of related rows sometimes encounters an error after updating several of them. The program can use the SYNCPOINT command with the ROLLBACK option to undo all of the updates without giving up control.

The SQL COMMIT and ROLLBACK statements are not valid in a CICS environment.

You can coordinate DB2 with CICS functions that are used in programs, so that DB2 and non-DB2 data are consistent. If the system fails, DB2 backs out uncommitted changes to data. Changed data returns to its original condition without interfering with other system activities. Sometimes, DB2 data does not return to a consistent state immediately. DB2 does not process indoubt data (data that is neither uncommitted nor committed) until the CICS attachment facility is also restarted. To ensure that DB2 and CICS are synchronized, restart both DB2 and the CICS attachment facility.
A commit point can occur in a program as the result of any one of the following four events:
v The program terminates normally. Normal program termination is always a commit point.
v The program issues a checkpoint call. Checkpoint calls are a program's means of explicitly indicating to IMS that it has reached a commit point in its processing.
v The program issues a SYNC call. The SYNC call is a Fast Path system service call to request commit-point processing. You can use a SYNC call only in a nonmessage-driven Fast Path program.
v For a program that processes messages as its input, a commit point can occur when the program retrieves a new message. IMS considers a new message the start of a new unit of work in the program. Commit points occur given the following conditions:
   If you specify single-mode, a commit point in DB2 occurs each time the program issues a call to retrieve a new message. Specifying single-mode can simplify recovery; you can restart the program from the most recent call for a new message if the program abends. When IMS restarts the program, the program starts by processing the next message.
   If you specify multiple-mode, a commit point occurs when the program issues a checkpoint call or when it terminates normally. Those are the only times during the program that IMS sends the program's output messages to their destinations. Because fewer commit points are processed in multiple-mode programs than in single-mode programs, multiple-mode programs could perform slightly better than single-mode programs. When a multiple-mode program abends, IMS can restart it only from a checkpoint call. Instead of having only the most recent message to reprocess, a program might have several messages to reprocess. The number of messages to process depends on when the program issued the last checkpoint call.
   If you do not define the transaction as single- or multiple-mode on the TRANSACT statement of the APPLCTN macro for the program, retrieving a new message does not signal a commit point. For more information about the APPLCTN macro, see IMS Install Volume 2: System Definition and Tailoring.

DB2 does some processing with single- and multiple-mode programs. When a multiple-mode program issues a call to retrieve a new message, DB2 performs an authorization check and closes all open cursors in the program.

At the time of a commit point:
v IMS and DB2 can release locks that the program has held since the last commit point. That makes the data available to other application programs and users. (However, when you define a cursor as WITH HOLD in a BMP program, DB2 holds those locks until the cursor closes or the program ends.)
v DB2 closes any open cursors that the program has been using. Your program must issue CLOSE CURSOR statements before a checkpoint call or a GU to the message queue, not after.
v IMS and DB2 make the program's changes to the database permanent.

If the program abends before reaching the commit point:
v Both IMS and DB2 back out all the changes the program has made to the database since the last commit point.
v IMS deletes any output messages that the program has produced since the last commit point (for nonexpress PCBs).
If the program processes messages, IMS sends the output messages that the application program produces to their final destinations. Until the program reaches a commit point, IMS holds the program's output messages at a temporary destination. If the program abends, people at terminals and other application programs do not receive inaccurate information from the terminating application program.

The SQL COMMIT and ROLLBACK statements are not valid in an IMS environment.

If the system fails, DB2 backs out uncommitted changes to data. Changed data returns to its original state without interfering with other system activities. Sometimes DB2 data does not return to a consistent state immediately. DB2 does not process data in an indoubt state until you restart IMS. To ensure that DB2 and IMS are synchronized, you must restart both DB2 and IMS.
areas to the way they were when the program terminated abnormally, and it restarts the program from the last checkpoint call that the program issued before terminating abnormally.
v How long it takes to back out and recover that unit of work. The program must issue checkpoints frequently enough to make the program easy to back out and recover.
v How long database resources are locked in DB2 and IMS.
v How you want the output messages grouped. Checkpoint calls establish how a multiple-mode program groups its output messages. Programs must issue checkpoints frequently enough to avoid building up too many output messages.
v A ROLL call with disk logging and BKO=YES specified: DL/I backs out updates, and abend U0778 occurs. DB2 backs out updates to the previous checkpoint.
v A ROLB call with disk logging and BKO=YES specified: DL/I backs out databases, and control is passed back to the application program. DB2 backs out updates to the previous checkpoint.
Using ROLL
Issuing a ROLL call causes IMS to terminate the program with a user abend code U0778. This terminates the program without a storage dump. When you issue a ROLL call, the only option you supply is the call function, ROLL.
Using ROLB
The advantage of using ROLB is that IMS returns control to the program after executing ROLB, allowing the program to continue processing. The options for ROLB are:
v The call function, ROLB
v The name of the I/O PCB
In batch programs
If your IMS system log is on direct access storage, and if the run option BKO is Y to specify dynamic backout, you can use the ROLB call in a batch program. The ROLB call backs out the database updates made since the last commit point and returns control to your program. You cannot specify the address of an I/O area as one of the options on the call; if you do, your program receives an AD status code. You must, however, have an I/O PCB for your program. Specify CMPAT=YES on the CMPAT keyword in the PSBGEN statement for your program's PSB. For more information about using the CMPAT keyword, see IMS Utilities Reference: System.
When savepoints are active, you cannot access remote sites by using three-part names or aliases for three-part names. You can, however, use DRDA access with explicit CONNECT statements when savepoints are active. If you set a savepoint before you execute a CONNECT statement, the scope of that savepoint is the local site. If you set a savepoint after you execute the CONNECT statement, the scope of that savepoint is the site to which you are connected.

Example: Setting savepoints during distributed processing: Suppose that an application performs these tasks:
1. Sets savepoint C1
2. Does some local processing
3. Executes a CONNECT statement to connect to a remote site
4. Sets savepoint C2
Because savepoint C1 is set before the application connects to a remote site, savepoint C1 is known only at the local site. However, because savepoint C2 is set after the application connects to the remote site, savepoint C2 is known only at the remote site.

You can set a savepoint with the same name multiple times within a unit of work. Each time that you set the savepoint, the new value of the savepoint replaces the old value.

Example: Setting a savepoint multiple times: Suppose that the following actions take place within a unit of work:
1. Application A sets savepoint S.
2. Application A calls stored procedure P.
3. Stored procedure P sets savepoint S.
4. Stored procedure P executes ROLLBACK TO SAVEPOINT S.
When DB2 executes ROLLBACK TO SAVEPOINT S, DB2 rolls back work to the savepoint that was set in the stored procedure because that value is the most recent value of savepoint S.

If you do not want a savepoint to have different values within a unit of work, you can use the UNIQUE option in the SAVEPOINT statement. If an application executes a SAVEPOINT statement for a savepoint that was previously defined as unique, an SQL error occurs.

Savepoints are automatically released at the end of a unit of work. However, if you no longer need a savepoint before the end of a transaction, you should execute the SQL RELEASE SAVEPOINT statement. Releasing savepoints is essential if you need to use three-part names to access remote locations.

Restrictions on using savepoints: You cannot use savepoints in global transactions, triggers, or user-defined functions, or in stored procedures, user-defined functions, or triggers that are nested within triggers or user-defined functions.

For more information about the SAVEPOINT, ROLLBACK TO SAVEPOINT, and RELEASE SAVEPOINT statements, see Chapter 5 of DB2 SQL Reference.
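A minimal sketch of the statement sequence follows; the savepoint name START_BATCH is hypothetical, and whether a rollback or a release is issued depends on your program logic:

   EXEC SQL SAVEPOINT START_BATCH UNIQUE ON ROLLBACK RETAIN CURSORS;
      /* ... updates that might need to be undone as a group ... */
   EXEC SQL ROLLBACK TO SAVEPOINT START_BATCH;   /* undo work back to the savepoint without ending the unit of work */
   EXEC SQL RELEASE SAVEPOINT START_BATCH;       /* release the savepoint when it is no longer needed               */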
For static SQL applications, search for all SQL statements that include three-part names and aliases for three-part names. For three-part names, the high-level qualifier is the location name. For potential aliases, query the catalog table SYSTABLES to determine whether the object is an alias, and if so, the location name of the table that the alias represents. For example:
SELECT NAME, CREATOR, LOCATION, TBCREATOR, TBNAME
  FROM SYSIBM.SYSTABLES
  WHERE NAME = 'name'
    AND TYPE = 'A';
where name is the potential alias. For dynamic SQL applications, bind packages at all remote locations that users might access with three-part names.
2. Bind the application into a package at every location that is named in the application. Also bind a package locally. For an application that uses explicit CONNECT statements to connect to a second site and then accesses a third site using a three-part name, bind a package at the second site with DBPROTOCOL(DRDA), and bind another package at the third site.
3. Bind all remote packages into a plan with the local package or DBRM. Bind this plan with the option DBPROTOCOL(DRDA).
4. Ensure that aliases resolve correctly. For DB2 private protocol access, DB2 resolves aliases at the requester site. For DRDA access, however, DB2 resolves aliases at the site where the package executes. Therefore, you might need to define aliases for three-part names at remote locations. For example, suppose you use DRDA access to run a program that contains this statement:
SELECT * FROM MYALIAS;
MYALIAS is an alias for LOC2.MYID.MYTABLE. DB2 resolves MYALIAS at the local site to determine that this statement needs to run at LOC2 but does not send the resolved name to LOC2. When the statement executes at LOC2, DB2 resolves MYALIAS using the catalog at LOC2. If the catalog at LOC2 does not contain the alias MYID.MYTABLE for MYALIAS, the SELECT statement does not execute successfully. This situation can become more complicated if you use three-part names to access DB2 objects from remote sites. For example, suppose you are connected explicitly to LOC2, and you use DRDA access to execute the following statement:
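One way to satisfy this requirement, sketched here only as an illustration, is to define the alias at the remote site. The sketch assumes that you have the authority to create the alias at LOC2 and that the implicit qualifier in effect at LOC2 matches the one used when the statement runs:

   EXEC SQL CONNECT TO LOC2;
   EXEC SQL CREATE ALIAS MYALIAS FOR MYID.MYTABLE;   /* at LOC2, MYALIAS now resolves to MYID.MYTABLE */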
SELECT * FROM YRALIAS;
YRALIAS is an alias for LOC3.MYID.MYTABLE. When this SELECT statement executes at LOC3, both LOC2 and LOC3 must have an alias YRALIAS that resolves to MYID.MYTABLE at location LOC3.
5. If you use the resource limit facility at the remote locations that are specified in three-part names to control the amount of time that distributed dynamic SQL statements run, modify the resource limit specification tables at those locations. For DB2 private protocol access, you specify plan names to govern SQL statements that originate at a remote location. For DRDA access, you specify package names for this purpose. Therefore, you must add rows to your resource limit specification tables at the remote locations for the packages you
bound for DRDA access with three-part names. You should also delete the rows that specify plan names for DB2 private protocol access. For more information about the resource limit facility, see Part 5 (Volume 2) of DB2 Administration Guide.
This statement can be executed with DRDA access or DB2 private protocol access. The method of access depends on whether you bind your DBRMs into packages and on the value of the DATABASE PROTOCOL field in installation panel DSNTIP5 or the value of bind option DBPROTOCOL. Bind option DBPROTOCOL overrides the installation setting.

If you bind the DBRM that contains the statement into a plan at the local DB2 and specify the bind option DBPROTOCOL(PRIVATE), you access the server by using DB2 private protocol access.

If you bind the DBRM that contains the statement by using one of the following processes, you access the server using DRDA access:

Local-bind DRDA access process:
1. Bind the DBRM into a package at the local DB2 using the bind option DBPROTOCOL(DRDA).
2. Bind the DBRM into a package at the remote location (CHICAGO).
3. Bind the packages into a plan using bind option DBPROTOCOL(DRDA).

Remote-bind DRDA access process:
1. Bind the DBRM into a package at the remote location.
2. Bind the remote package and the DBRM into a plan at the local site, using the bind option DBPROTOCOL(DRDA).

In some cases you cannot use private protocol to access distributed data. The following examples require DRDA access.

Example: Suppose that you need to access data at a remote server CHICAGO, by using the following CONNECT and SELECT statements:
EXEC SQL CONNECT TO CHICAGO;
EXEC SQL SELECT * FROM DSN8810.EMP
  WHERE EMPNO = '0001000';
This example requires DRDA access and the correct binding procedure to work from a remote server. Before you can execute the query at location CHICAGO, you must bind the application as a remote package at the CHICAGO server. Before you can run the application, you must also bind a local package and a local plan with a package list that includes the local and remote package.
Example: Suppose that you need to call a stored procedure at the remote server ATLANTA, by using the following CONNECT and CALL statements:
EXEC SQL CONNECT TO ATLANTA;
EXEC SQL CALL procedure_name (parameter_list);
This example requires DRDA access because private protocol does not support stored procedures. The parameter list is a list of host variables that is passed to the stored procedure and into which it returns the results of its execution. To execute, the stored procedure must already exist at the ATLANTA server.
SQLRULES
   Use SQLRULES(DB2), explicitly or by default. SQLRULES(STD) applies the rules of the SQL standard to your CONNECT statements, so that CONNECT TO x is an error if you are already connected to x. Use STD only if you want that statement to return an error code.
   If your program selects LOB data from a remote location, and you bind the plan for the program with SQLRULES(DB2), the format in which you retrieve the LOB data with a cursor is restricted. After you open the cursor to retrieve the LOB data, you must retrieve all of the data using a LOB variable, or retrieve all of the data using a LOB locator variable. If the value of SQLRULES is STD, this restriction does not exist. If you intend to switch between LOB variables and LOB locators to retrieve data from a cursor, execute the SET SQLRULES=STD statement before you connect to the remote location.

CURRENTDATA
   Use CURRENTDATA(NO) to force block fetch for ambiguous cursors. See Using block fetch in distributed applications on page 458 for more information.

DBPROTOCOL
   Use DBPROTOCOL(PRIVATE) if you want DB2 to use DB2 private protocol access for accessing remote data that is specified with three-part names. Use DBPROTOCOL(DRDA) if you want DB2 to use DRDA access to access remote data that is specified with three-part names. You must bind a package at all locations whose names are specified in three-part names.
   The package value for the DBPROTOCOL option overrides the plan option. For example, if you specify DBPROTOCOL(DRDA) for a remote package and DBPROTOCOL(PRIVATE) for the plan, DB2 uses DRDA access when it accesses data at that location using a three-part name. If you do not specify any value for DBPROTOCOL, DB2 uses the value of DATABASE PROTOCOL on installation panel DSNTIP5.

ENCODING
   Use this option to control the encoding scheme that is used for static SQL statements in the plan and to set the initial value of the CURRENT APPLICATION ENCODING SCHEME special register. For applications that execute remotely and use explicit CONNECT statements, DB2 uses the ENCODING value for the plan. For applications that execute remotely and use implicit CONNECT statements, DB2 uses the ENCODING value for the package that is at the site where a statement executes.
statements that are valid on the remote server but that the precompiler did not recognize. Otherwise, use SQLERROR(NOPACKAGE), explicitly or by default.

CURRENTDATA
   Use CURRENTDATA(NO) to force block fetch for ambiguous cursors. See Using block fetch in distributed applications on page 458 for more information.

OPTIONS
   When you make a remote copy of a package using BIND PACKAGE with the COPY option, use this option to control the default bind options that DB2 uses. Specify:
   COMPOSITE to cause DB2 to use any options you specify in the BIND PACKAGE command. For all other options, DB2 uses the options of the copied package. COMPOSITE is the default.
   COMMAND to cause DB2 to use the options you specify in the BIND PACKAGE command. For all other options, DB2 uses the defaults for the server on which the package is bound. This helps ensure that the server supports the options with which the package is bound.

DBPROTOCOL
   Use DBPROTOCOL(PRIVATE) if you want DB2 to use DB2 private protocol access for accessing remote data that is specified with three-part names. Use DBPROTOCOL(DRDA) if you want DB2 to use DRDA access to access remote data that is specified with three-part names. You must bind a package at all locations whose names are specified in three-part names. These values override the value of DATABASE PROTOCOL on installation panel DSNTIP5. Therefore, if the setting of DATABASE PROTOCOL at the requester site specifies the type of remote access you want to use for three-part names, you do not need to specify the DBPROTOCOL bind option.

ENCODING
   Use this option to control the encoding scheme that is used for static SQL statements in the package and to set the initial value of the CURRENT APPLICATION ENCODING SCHEME special register. The default ENCODING value for a package that is bound at a remote DB2 UDB for z/OS server is the system default for that server. The system default is specified at installation time in the APPLICATION ENCODING field of panel DSNTIPF.
   For applications that execute remotely and use explicit CONNECT statements, DB2 uses the ENCODING value for the plan. For applications that execute remotely and use implicit CONNECT statements, DB2 uses the ENCODING value for the package that is at the site where a statement executes.
v For a list of DB2 bind options in generic terms, including options you cannot request from DB2 but can use if you request from a non-DB2 server, see Appendix I, Program preparation options for remote packages, on page 1123.
Read input values
Do for all locations
   Read location name
   Set up statement to prepare
   Prepare statement
   Execute statement
End loop
Commit
After the application obtains a location name, for example SAN_JOSE, it next creates the following character string:
INSERT INTO SAN_JOSE.DSN8810.PROJ VALUES (?,?,?,?,?,?,?,?)
The application assigns the character string to the variable INSERTX and then executes these statements:
EXEC SQL PREPARE STMT1 FROM :INSERTX;
EXEC SQL EXECUTE STMT1 USING :PROJNO, :PROJNAME, :DEPTNO, :RESPEMP,
                             :PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ;
The host variables for Spiffy's project table match the declaration for the sample project table in Project table (DSN8810.PROJ) on page 1000.

To keep the data consistent at all locations, the application commits the work only when the loop has executed for all locations. Either every location has committed the INSERT or, if a failure has prevented any location from inserting, all other locations have rolled back the INSERT. (If a failure occurs during the commit process, the entire unit of work can be indoubt.)

Recommendation: You might find it convenient to use aliases when creating character strings that become prepared statements, instead of using full three-part names like SAN_JOSE.DSN8810.PROJ. For information about aliases, see Chapter 5 of DB2 SQL Reference.
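As a hypothetical illustration of that recommendation (the alias name SJPROJ is invented here), the application could define an alias once:

   EXEC SQL CREATE ALIAS SJPROJ FOR SAN_JOSE.DSN8810.PROJ;

The character string that the application prepares could then be INSERT INTO SJPROJ VALUES (?,?,?,?,?,?,?,?), which keeps the location name out of the statement text.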
EXEC SQL CONNECT TO CHICAGO;               /* Connect to the remote site  */
EXEC SQL DECLARE GLOBAL TEMPORARY TABLE T1 /* Define the temporary table  */
   (CHARCOL CHAR(6) NOT NULL);             /* at the remote site          */
EXEC SQL CONNECT RESET;                    /* Connect back to local site  */
EXEC SQL INSERT INTO CHICAGO.SESSION.T1
   (VALUES 'ABCDEF');                      /* Access the temporary table  */
                                           /* at the remote site (forward reference) */
However, you cannot perform the following series of actions, which includes a backward reference to the declared temporary table:
EXEC SQL DECLARE GLOBAL TEMPORARY TABLE T1 /* Define the temporary table  */
   (CHARCOL CHAR(6) NOT NULL);             /* at the local site (ATLANTA) */
EXEC SQL CONNECT TO CHICAGO;               /* Connect to the remote site  */
EXEC SQL INSERT INTO ATLANTA.SESSION.T1
   (VALUES 'ABCDEF');                      /* Cannot access temp table    */
                                           /* from the remote site (backward reference) */
For example, the application inserts a new location name into the variable LOCATION_NAME and executes the following statements:
EXEC SQL CONNECT TO :LOCATION_NAME;
EXEC SQL INSERT INTO DSN8810.PROJ VALUES (:PROJNO, :PROJNAME, :DEPTNO, :RESPEMP,
                                          :PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ);
To keep the data consistent at all locations, the application commits the work only when the loop has executed for all locations. Either every location has committed the INSERT or, if a failure has prevented any location from inserting, all other locations have rolled back the INSERT. (If a failure occurs during the commit process, the entire unit of work can be indoubt.)
The host variables for Spiffy's project table match the declaration for the sample project table in Project table (DSN8810.PROJ) on page 1000. LOCATION_NAME is a character-string variable of length 16.
Releasing connections
When you connect to remote locations explicitly, you must also break those connections explicitly. To break the connections, you can use the RELEASE statement. The RELEASE statement differs from the CONNECT statement in the following ways:
v While the CONNECT statement makes an immediate connection, the RELEASE statement does not immediately break a connection. The RELEASE statement labels connections for release at the next commit point. A connection that has been labeled for release is in the release-pending state and can still be used before the next commit point.
v While the CONNECT statement connects to exactly one remote system, you can use the RELEASE statement to specify a single connection or a set of connections for release at the next commit point.

Example: By using the RELEASE statement, you can place any of the following connections in the release-pending state:
v A specific connection that the next unit of work does not use:
EXEC SQL RELEASE SPIFFY1;
v All DB2 private protocol connections. If the first phase of your application program uses DB2 private protocol access and the second phase uses DRDA access, open DB2 private protocol connections from the first phase could cause a
CONNECT operation to fail in the second phase. To prevent that error, execute the following statement before the commit operation that separates the two phases:
EXEC SQL RELEASE ALL PRIVATE;
PRIVATE refers to DB2 private protocol connections, which exist only between instances of DB2 UDB for z/OS.
CICS and IMS
   You cannot update data at servers that do not support two-phase commit.
TSO and batch
   You can update data if one of the following conditions is true:
   v No other connections exist.
   v All existing connections are to servers that are restricted to read-only operations.
   If neither condition is met, you are restricted to read-only operations.

If the first connection in a logical unit of work is to a server that supports two-phase commit, and no connections exist or only read-only connections exist, that server and all servers that support two-phase commit can update data. However, if the first connection is to a server that does not support two-phase commit, only that server is allowed to update data.

Recommendation: Rely on DB2 to prevent updates to two systems in the same unit of work if either of them is a restricted system.
v Use LOB locators instead of LOB host variables: If you need to store only a portion of a LOB value at the client, or if your client program manipulates the LOB data but does not need a copy of it, LOB locators are a good choice. When a client program retrieves a LOB column from a server into a locator, DB2 transfers only the 4-byte locator value to the client, not the entire LOB value. For information about how to use LOB locators in an application, see Using LOB locators to save storage on page 305.
v Use stored procedure result sets: When you return LOB data to a client program from a stored procedure, use result sets, rather than passing the LOB data to the client in parameters. Using result sets to return data causes less LOB materialization and less movement of data among address spaces. For information about how to write a stored procedure to return result sets, see Writing a stored procedure to return result sets to a DRDA client on page 650. For information about how to write a client program to receive result sets, see Writing a DB2 UDB for z/OS client program or SQL procedure to receive result sets on page 708.
v Set the CURRENT RULES special register to DB2: When a DB2 UDB for z/OS server receives an OPEN request for a cursor, the server uses the value in the CURRENT RULES special register to determine the type of host variables the associated statement uses to retrieve LOB values. If you specify a value of DB2 for CURRENT RULES before you perform a CONNECT, and the first FETCH for the cursor uses a LOB locator to retrieve LOB column values, DB2 lets you use only LOB locators for all subsequent FETCH statements for that column until you close the cursor. If the first FETCH uses a host variable, DB2 lets you use only host variables for all subsequent FETCH statements for that column until you close the cursor. However, if you set the value of CURRENT RULES to STD, DB2 lets you use the same open cursor to fetch a LOB column into either a LOB locator or a host variable.
   Although a value of STD for CURRENT RULES gives you more programming flexibility when you retrieve LOB data, you get better performance if you use a value of DB2 for CURRENT RULES. With the STD option, the server must send and receive network messages for each FETCH to indicate whether the data that is being transferred is a LOB locator or a LOB value. With the DB2 option, the server knows the size of the LOB data after the first FETCH, so an extra message about LOB data size is unnecessary. The server can send multiple blocks of data to the requester at one time, which reduces the total time for data transfer.
   Example: Suppose that an end user wants to browse through a large set of employee records and look at pictures of only a few of those employees. At the server, you set the CURRENT RULES special register to DB2 (a sketch of this statement appears after the example below). In the application, you declare and open a cursor to select employee records. The application then fetches all picture data into 4-byte LOB locators. Because DB2 knows that 4 bytes of LOB data is returned for each FETCH, DB2 can fill the network buffers with locators for many pictures. When a user wants to see a picture for a particular person, the application can retrieve the picture from the server by assigning the value that is referenced by the LOB locator to a LOB host variable:
SQL TYPE IS BLOB my_blob[1M];
SQL TYPE IS BLOB AS LOCATOR my_loc;
   . . .
FETCH C1 INTO :my_loc;          /* Fetch BLOB into LOB locator   */
   . . .
SET :my_blob = :my_loc;         /* Assign BLOB to host variable  */
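A sketch of the register setting described above, placed before the CONNECT, could look like the following. The location host variable :REMOTE_LOC is hypothetical:

   EXEC SQL SET CURRENT RULES = 'DB2';   /* LOB fetches for a cursor must then consistently  */
                                         /* use locators or host variables                   */
   EXEC SQL CONNECT TO :REMOTE_LOC;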
DEFER(PREPARE)
To improve performance for both static and dynamic SQL used in DB2 private protocol access, and for dynamic SQL in DRDA access, consider specifying the option DEFER(PREPARE) when you bind or rebind your plans or packages. Remember that statically bound SQL statements in DB2 private protocol access are processed dynamically. When a dynamic SQL statement accesses remote data, the PREPARE and EXECUTE statements can be transmitted over the network together and processed at the remote location. Responses to both statements can be sent together back to the local subsystem, thus reducing traffic on the network. DB2 does not prepare the dynamic SQL statement until the statement executes. (The exception to this is dynamic SELECT, which combines PREPARE and DESCRIBE, regardless of whether the DEFER(PREPARE) option is in effect.)

All PREPARE messages for dynamic SQL statements that refer to a remote object will be deferred until one of these events occurs:
v The statement executes
v The application requests a description of the results of the statement

In general, when you defer PREPARE, DB2 returns SQLCODE 0 from PREPARE statements. You must therefore code your application to handle any SQL codes that might have been returned from the PREPARE statement after the associated EXECUTE or DESCRIBE statement.

When you use predictive governing, the SQL code returned to the requester if the server exceeds a predictive governing warning threshold depends on the level of DRDA at the requester. See Writing an application to handle predictive governing on page 602 for more information.

For DB2 private protocol access, when a static SQL statement refers to a remote object, the transparent PREPARE statement and the EXECUTE statements are automatically combined and transmitted across the network together. The PREPARE statement is deferred only if you specify the bind option DEFER(PREPARE). PREPARE statements that contain INTO clauses are not deferred.
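The practical consequence for error handling can be sketched as follows; the statement name and host variables are placeholders, and the comments only restate the behavior described above:

   EXEC SQL PREPARE DYNSTMT FROM :STMTBUF;   /* with DEFER(PREPARE), SQLCODE here is generally 0        */
   EXEC SQL EXECUTE DYNSTMT USING :HV1;      /* check SQLCODE or SQLSTATE here; errors from the deferred */
                                             /* PREPARE surface with the EXECUTE (or DESCRIBE)           */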
PKLIST
The order in which you specify package collections in a package list can affect the performance of your application program. When a local instance of DB2 attempts to execute an SQL statement at a remote server, the local DB2 subsystem must determine which package collection the SQL statement is in. DB2 must send a message to the server, requesting that the server check each collection ID for the SQL statement, until the statement is found or no more collection IDs are in the package list. You can reduce the amount of network traffic, and thereby improve
performance, by reducing the number of package collections that each server must search. The following examples show ways to reduce the collections to search:
v Reduce the number of packages per collection that must be searched. The following example specifies only one package in each collection:
PKLIST(S1.COLLA.PGM1, S1.COLLB.PGM2)
v Reduce the number of package collections at each location that must be searched. The following example specifies only one package collection at each location:
PKLIST(S1.COLLA.*, S2.COLLB.*)
v Reduce the number of collections that are used for each application. The following example specifies only one collection to search:
PKLIST(*.COLLA.*)
You can also specify the package collection that is associated with an SQL statement in your application program. Execute the SQL statement SET CURRENT PACKAGESET before you execute an SQL statement to tell DB2 which package collection to search for the statement. When you use DEFER(PREPARE) with DRDA access, the package containing the statements whose preparation you want to defer must be the first qualifying entry in the package search sequence that DB2 uses. (See Identifying packages at run time on page 498 for more information.) For example, assume that the package list for a plan contains two entries:
PKLIST(LOCB.COLLA.*, LOCB.COLLB.*)
If the intended package is in collection COLLB, ensure that DB2 searches that collection first. You can do this by executing the SQL statement:
SET CURRENT PACKAGESET = COLLB;
Alternatively, you can list COLLB first in the PKLIST parameter of BIND PLAN:
PKLIST(LOCB.COLLB.*, LOCB.COLLA.*)
For NODEFER(PREPARE), the collections in the package list can be in any order, but if the package is not found in the first qualifying PKLIST entry, the result is significant network overhead for searching through the list.
REOPT(ALWAYS)
When you specify REOPT(ALWAYS), DB2 determines access paths at both bind time and run time for statements that contain one or more of the following variables:
v Host variables
v Parameter markers
v Special registers
At run time, DB2 uses the values in those variables to determine the access paths.
If you specify the bind option REOPT(ALWAYS) or REOPT(ONCE), DB2 sets the bind option DEFER(PREPARE) automatically. However, when you specify REOPT(ONCE), DB2 determines the access path for a statement only once (at the first run time).

Because of performance costs when DB2 reoptimizes the access path at run time, you should use one of the following bind options:
v REOPT(ALWAYS): use this option only on packages or plans that contain statements that perform poorly because of a bad access path.
v REOPT(ONCE): use this option when the following conditions are true:
   - You are using the dynamic statement cache.
   - You have plans or packages that contain dynamic SQL statements that perform poorly because of access path selection.
   - Your dynamic SQL statements are executed many times with possibly different input variables.
v REOPT(NONE): use this option when you bind a plan or package that contains statements that use DB2 private protocol access. If you specify REOPT(ALWAYS) when you bind a plan that contains statements that use DB2 private protocol access to access remote data, DB2 prepares those statements twice.

See How bind options REOPT(ALWAYS) and REOPT(ONCE) affect dynamic SQL on page 625 for more information about REOPT(ALWAYS).
CURRENTDATA(NO)
Use this bind option to force block fetch for ambiguous queries. See Using block fetch in distributed applications for more information about block fetch.
KEEPDYNAMIC(YES)
Use this bind option to improve performance for queries that use cursors defined WITH HOLD. With KEEPDYNAMIC(YES), DB2 automatically closes the cursor when no more data exists for retrieval. The client does not need to send a network message to tell DB2 to close the cursor. For more information about KEEPDYNAMIC(YES), see Keeping prepared statements after commit points on page 599.
DBPROTOCOL(DRDA)
If the value of installation default DATABASE PROTOCOL is not DRDA, use this bind option to cause DB2 to use DRDA access to execute SQL statements with three-part names. Statements that use DRDA access perform better at execution time because:
v Binding occurs when the package is bound, not during program execution.
v DB2 does not destroy static statement information at commit time, as it does with DB2 private protocol access. This means that with DRDA access, if a commit occurs between two executions of a statement, DB2 does not need to prepare the statement twice.
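For example, a statement such as the following hypothetical UPDATE, which names the location explicitly, is the kind of three-part-name statement whose access method DBPROTOCOL controls. The location CHICAGO and the host variable are assumptions for illustration only:

   EXEC SQL UPDATE CHICAGO.DSN8810.PROJ
     SET PRSTAFF = PRSTAFF + 1
     WHERE PROJNO = :PROJNO;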
FOR FETCH ONLY or FOR READ ONLY to the query in the DECLARE CURSOR statement. If you do not use FOR FETCH ONLY or FOR READ ONLY, DB2 still uses block fetch for the query if any of the following conditions are true:
v The cursor is a non-scrollable cursor, and the result table of the cursor is read-only. (See Chapter 5 of DB2 SQL Reference for a description of read-only tables.)
v The cursor is a scrollable cursor that is declared as INSENSITIVE, and the result table of the cursor is read-only.
v The cursor is a scrollable cursor that is declared as SENSITIVE, the result table of the cursor is read-only, and the value of bind option CURRENTDATA is NO.
v The result table of the cursor is not read-only, but the cursor is ambiguous, and the value of bind option CURRENTDATA is NO. A cursor is ambiguous when all of the following conditions are true:
   - It is not defined with the clauses FOR FETCH ONLY, FOR READ ONLY, or FOR UPDATE.
   - It is not defined on a read-only result table.
   - It is not the target of a WHERE CURRENT OF clause on an SQL UPDATE or DELETE statement.
   - It is in a plan or package that contains the SQL statements PREPARE or EXECUTE IMMEDIATE.

DB2 does not use continuous block fetch if:
v The cursor is referred to in the statement DELETE WHERE CURRENT OF elsewhere in the program.
v The cursor statement appears to be updatable at the requesting system. (DB2 does not check whether the cursor references a view at the server that cannot be updated.)
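To avoid relying on these conditions, you can make the intent explicit. A minimal sketch (the cursor name is arbitrary) is:

   EXEC SQL DECLARE THISEMP CURSOR FOR
     SELECT EMPNO, LASTNAME, WORKDEPT
       FROM DSN8810.EMP
       FOR READ ONLY;

Declaring the cursor FOR READ ONLY (or FOR FETCH ONLY) removes any ambiguity, so block fetch does not depend on the CURRENTDATA bind option.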
Table 60. Effect of CURRENTDATA and isolation level on block fetch for a scrollable cursor that is not used for a stored procedure result set

   Isolation       Cursor sensitivity   CURRENTDATA   Cursor type   Block fetch
   CS, RR, or RS   INSENSITIVE          Yes           Read-only     Yes
                                        No            Read-only     Yes
                   SENSITIVE            Yes           Read-only     No
                                                      Updatable     No
                                                      Ambiguous     No
                                        No            Read-only     Yes
                                                      Updatable     No
                                                      Ambiguous     Yes
   UR              INSENSITIVE          Yes           Read-only     Yes
                                        No            Read-only     Yes
                   SENSITIVE            Yes           Read-only     Yes
                                        No            Read-only     Yes
Table 61 summarizes the conditions under which a DB2 server uses block fetch for a scrollable cursor when the cursor is used to retrieve result sets.
Table 61. Effect of CURRENTDATA and isolation level on block fetch for a scrollable cursor that is used for a stored procedure result set

   Isolation       Cursor sensitivity   CURRENTDATA   Cursor type   Block fetch
   CS, RR, or RS   INSENSITIVE          Yes           Read-only     Yes
                                        No            Read-only     Yes
                   SENSITIVE            Yes           Read-only     No
                                        No            Read-only     Yes
   UR              INSENSITIVE          Yes           Read-only     Yes
                                        No            Read-only     Yes
                   SENSITIVE            Yes           Read-only     Yes
                                        No            Read-only     Yes
When a DB2 UDB for z/OS requester uses a scrollable cursor to retrieve data from a DB2 UDB for z/OS server, the following conditions are true:
v The requester never requests more than 64 rows in a query block, even if more rows fit in the query block. In addition, the requester never requests extra query blocks. This is true even if the setting of field EXTRA BLOCKS REQ in the DISTRIBUTED DATA FACILITY PANEL 2 installation panel on the requester allows extra query blocks to be requested. If you want to limit the number of rows that the server returns to fewer than 64, you can specify the FETCH FIRST n ROWS ONLY clause when you declare the cursor (see the sketch after this list).
v The requester discards rows of the result table if the application does not use those rows. For example, if the application fetches row n and then fetches row n+2, the requester discards row n+1. The application gets better performance for a blocked scrollable cursor if it mostly scrolls forward, fetches most of the rows in a query block, and avoids frequent switching between FETCH ABSOLUTE statements with negative and positive values.
v If the scrollable cursor does not use block fetch, the server returns one row for each FETCH statement.
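A sketch of limiting the rows for a scrollable cursor, as mentioned in the first item above, follows; the cursor name and the limit of 50 rows are arbitrary:

   EXEC SQL DECLARE CSCROLL INSENSITIVE SCROLL CURSOR FOR
     SELECT EMPNO, LASTNAME
       FROM DSN8810.EMP
       ORDER BY LASTNAME
       FETCH FIRST 50 ROWS ONLY;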
degrade performance if you do not use it properly. The following examples demonstrate the performance problems that can occur when you do not use OPTIMIZE FOR n ROWS judiciously.

In Figure 149, the DRDA client opens a cursor and fetches rows from the cursor. At some point before all rows in the query result set are returned, the application issues an SQL INSERT. DB2 uses normal DRDA blocking, which has two advantages over the blocking that is used for OPTIMIZE FOR n ROWS:
v If the application issues an SQL statement other than FETCH (the example shows an INSERT statement), the DRDA client can transmit the SQL statement immediately, because the DRDA connection is not in use after the SQL OPEN.
v If the SQL application closes the cursor before fetching all the rows in the query result set, the server fetches only the number of rows that fit in one query block, which is 100 rows of the result set. Basically, the DRDA query block size places an upper limit on the number of rows that are fetched unnecessarily.
[Figure 149 (message flow not reproduced): the DRDA client issues DECLARE C1 CURSOR FOR SELECT * FROM T1 FOR FETCH ONLY; and OPEN C1; against the DB2 server, which returns the result set in normal DRDA query blocks.]
In Figure 150 on page 463, the DRDA client opens a cursor and fetches rows from the cursor using OPTIMIZE FOR n ROWS. Both the DRDA client and the DB2 server are configured to support multiple DRDA query blocks. At some time before the end of the query result set, the application issues an SQL INSERT. Because OPTIMIZE FOR n ROWS is being used, the DRDA connection is not available when the SQL INSERT is issued because the connection is still being used to receive the DRDA query blocks for 1000 rows of data. This causes two performance problems:
v Application elapsed time can increase if the DRDA client waits for a large query result set to be transmitted, before the DRDA connection can be used for other SQL statements. Figure 150 on page 463 shows how an SQL INSERT statement can be delayed because of a large query result set.
v If the application closes the cursor before fetching all the rows in the SQL result set, the server might fetch a large number of rows unnecessarily.
[Figure 150 (message flow not reproduced): the DRDA client issues DECLARE C1 CURSOR FOR SELECT * FROM T1 OPTIMIZE FOR 1000 ROWS; and OPEN C1; against the DB2 server. After the cursor is opened, the server returns ten query blocks of 100 rows each before it processes the client's INSERT statement.]
Recommendation: OPTIMIZE FOR n ROWS should be used to increase the number of DRDA query blocks only in applications that have all of these attributes:
v The application fetches a large number of rows from a read-only query.
v The application rarely closes the SQL cursor before fetching the entire query result set.
v The application does not issue statements other than FETCH to the DB2 server while the SQL cursor is open.
v The application does not execute FETCH statements for multiple cursors that are open concurrently and defined with OPTIMIZE FOR n ROWS.
v The application does not need to scroll randomly through the data. OPTIMIZE FOR n ROWS has no effect on a scrollable cursor.

For more information about OPTIMIZE FOR n ROWS, see Minimizing overhead for retrieving few rows: OPTIMIZE FOR n ROWS on page 776.
For OPTIMIZE FOR n ROWS, when n is 1, 2, or 3, DB2 uses the value 16 (instead of n) for network blocking and prefetches 16 rows. As a result, network usage is more efficient even though DB2 uses the small value of n for query optimization. Suppose that you need only one row of the result table. To avoid 15 unnecessary prefetches, add the FETCH FIRST 1 ROW ONLY clause:
SELECT * FROM EMP
   OPTIMIZE FOR 1 ROW
   FETCH FIRST 1 ROW ONLY;
server supports them. For a list of those statements, see Appendix H, Characteristics of SQL statements in DB2 UDB for z/OS, on page 1113.
v In general, if a statement to be executed at a remote server contains host variables, a DB2 requester assumes them to be input host variables unless it supports the syntax of the statement and can determine otherwise. If the assumption is not valid, the server rejects the statement.
An application program that uses an SQLDA can specify an overriding CCSID for the returned data in the SQLDA. When the application program executes a FETCH statement, you receive the data in the CCSID that is specified in the SQLDA. See Changing the CCSID for retrieved data on page 619 for information about how to specify an overriding CCSID in an SQLDA.
that conversion is performed on a fixed-length input host variable. The remedy is to use a varying-length string variable with a maximum length that is sufficient to contain the expansion.
Designing a test data structure . . . 557
Analyzing application data needs . . . 557
Obtaining authorization . . . 559
Creating a comprehensive test structure . . . 559
Filling the tables with test data . . . 559
Testing SQL statements using SPUFI . . . 560
Debugging your program . . . 560
Debugging programs in TSO . . . 560
Language test facilities . . . 561
The TSO TEST command . . . 561
Debugging programs in IMS . . . 561
Debugging programs in CICS . . . 562
Debugging aids for CICS . . . 562
CICS execution diagnostic facility . . . 563
Locating the problem . . . 566
Analyzing error and warning messages from the precompiler . . . 567
SYSTERM output from the precompiler . . . 567
SYSPRINT output from the precompiler . . . 568

Chapter 23. Processing DL/I batch applications . . . 573
Planning to use DL/I batch applications . . . 573
Features and functions of DB2 DL/I batch support . . . 573
Requirements for using DB2 in a DL/I batch job . . . 574
Authorization . . . 574
Program design considerations . . . 574
Address spaces . . . 574
Commits . . . 575
SQL statements and IMS calls . . . 575
Checkpoint calls . . . 575
Application program synchronization . . . 575
Checkpoint and XRST considerations . . . 575
Synchronization call abends . . . 576
Input and output data sets for DL/I batch jobs . . . 576
DB2 DL/I batch input . . . 576
DB2 DL/I batch output . . . 578
Preparation guidelines for DL/I batch programs . . . 578
Precompiling . . . 578
Binding . . . 578
Link-editing . . . 579
Loading and running . . . 579
Submitting a DL/I batch application using DSNMTV01 . . . 579
Submitting a DL/I batch application without using DSNMTV01 . . . 580
Restart and recovery . . . 580
JCL example of a batch backout . . . 581
JCL example of restarting a DL/I batch job . . . 581
Finding the DL/I batch checkpoint ID . . . 582
ability to bind in various DB2 releases and modes. This relationship is affected by the SQL processing option NEWFUN and the new-function mode of DB2 Version 8:
v If you use the option NEWFUN(NO), the SQL statements in the DBRM use EBCDIC. As a result, the DBRM is not a DB2 Version 8 object. DB2 Version 7 and earlier releases can bind the DBRM. NEWFUN(NO) causes the compiler to reject any DB2 Version 8 functions.
v If you use the option NEWFUN(YES) to process the SQL statements in your program, the SQL statements in the DBRM use Unicode UTF-8. As a result, the DBRM is a DB2 Version 8 object even if the application program does not use any DB2 Version 8 functions. Therefore, the DBRM is Version 8 dependent. DB2 Version 8 can bind the DBRM; DB2 Version 7 and earlier releases cannot bind the DBRM. Version 8 can bind the DBRM even before Version 8 new-function mode if the DBRM does not use any DB2 Version 8 functions.
v If the application program uses DB2 Version 8 functions, Version 8 can bind the DBRM only in new-function mode for Version 8 (or later). If the program does not use any DB2 Version 8 functions, Version 8 can bind the DBRM even before Version 8 new-function mode.
Table 62. Dependencies and binding for DB2 Version 8

   Value of NEWFUN:                                      NO (note 1)   YES (note 2)   YES (note 3)
   Is the DBRM a DB2 Version 8 object and
   dependent on DB2 Version 8?                           No            Yes            Yes
   Can DB2 Version 7, or an earlier release,
   bind the DBRM?                                        Yes           No             No
   Can DB2 Version 8 bind before DB2 Version 8
   new-function mode?                                    Yes           Yes            No
   Can DB2 Version 8 bind while in DB2 Version 8
   new-function mode?                                    Yes           Yes            Yes

Notes:
1. The DBRM is created with NEWFUN(NO), which prevents the use of DB2 Version 8 functions.
2. The DBRM is created with NEWFUN(YES), although the program does not use any DB2 Version 8 functions.
3. The DBRM is created with NEWFUN(YES), and the program uses DB2 Version 8 functions.
For more information about the NEWFUN option, see Table 64 on page 484. For information about DB2 Version 8 new-function mode, see DB2 Installation Guide. For information about the DSNHPC7 precompiler, see the Installation guide.
CICS
   If the application contains CICS commands, you must translate the program before you compile it. (See Translating command-level statements in a CICS program on page 493.)

DB2 version for the DSNHDECP module: When you process SQL statements, the DB2 version in the DSNHDECP data-only load module must match the DB2
version of the DB2 precompiler or DB2 coprocessor. Otherwise, DB2 issues an error, and SQL statement processing terminates.
DBRMLIB
   Output data set, which contains the SQL statements and host variable information that the DB2 precompiler extracted from the source program. It is called a database request module (DBRM). This data set becomes the input to the DB2 bind process. The DCB attributes of the data set are RECFM FB, LRECL 80. If a partitioned data set is allocated to DBRMLIB, the data set name must include a member name.
   Required? Yes
Table 63. DD statements and data sets that the DB2 precompiler uses (continued)

STEPLIB
   Step library for the job step. In this DD statement, you can specify the name of the library for the precompiler load module, DSNHPC, and the name of the library for your DB2 application programming defaults member, DSNHDECP. Recommendation: Always use the STEPLIB DD statement to specify the library where your DB2 DSNHDECP module resides to ensure that the proper application defaults are used by the DB2 precompiler. The library that contains your DB2 DSNHDECP module needs to be allocated ahead of the prefix.SDSNLOAD library.
   Required? No, but recommended
SYSCIN
Output data set, which Yes contains the modified source that the DB2 precompiler writes out. This data set becomes the input data set to the compiler or assembler. This data set must have attributes RECFM F or FB, and LRECL 80. Input data set, which contains statements in the host programming language and embedded SQL statements. This data set must have the attributes RECFM F or FB, LRECL 80. Yes
SYSIN
SYSLIB
INCLUDE library, which No contains additional SQL and host language statements. The DB2 precompiler includes the member or members that are referenced by SQL INCLUDE statements in the SYSIN input from this DD statement. Multiple data sets can be specified, but they must be partitioned data sets with attributes RECFM F or FB, LRECL 80. SQL INCLUDE statements cannot be nested.
475
# # # # # # # # # # # # #
Table 63. DD statements and data sets that the DB2 precompiler uses (continued) DD statement SYSPRINT Data set description Output data set, which contains the output listing from the DB2 precompiler. This data set must have an LRECL of 133 and a RECFM of FBA. Terminal output file, which contains diagnostic messages from the DB2 precompiler. Required? Yes
SYSTERM
No
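To show how these DD statements fit together, the following JCL is a minimal sketch of a stand-alone DB2 precompiler step. The data set names, the member name PROGA, the placement of DSNHDECP in prefix.SDSNEXIT, and the PARM options are assumptions for illustration only; in practice, the DB2-supplied preparation procedures (such as DSNHCOB) allocate equivalent DD statements for you.

//PC       EXEC PGM=DSNHPC,PARM='HOST(IBMCOB),SOURCE,XREF'
//STEPLIB  DD  DISP=SHR,DSN=prefix.SDSNEXIT          library that contains your DSNHDECP module (assumed)
//         DD  DISP=SHR,DSN=prefix.SDSNLOAD          library that contains DSNHPC
//DBRMLIB  DD  DISP=SHR,DSN=USER.DBRMLIB.DATA(PROGA) DBRM output; member name required for a PDS
//SYSCIN   DD  DSN=&&DSNHOUT,DISP=(MOD,PASS),UNIT=SYSDA,
//             SPACE=(800,(500,500))                 modified source, passed to the compiler
//SYSIN    DD  DISP=SHR,DSN=USER.SRC.COBOL(PROGA)    host language source with embedded SQL
//SYSLIB   DD  DISP=SHR,DSN=USER.SRCLIB.DATA         INCLUDE library (optional)
//SYSPRINT DD  SYSOUT=*                              precompiler listing (LRECL 133, RECFM FBA)
//SYSTERM  DD  SYSOUT=*                              diagnostic messages (optional)

Any precompiler work data sets that your release requires (for example, SYSUT1 and SYSUT2) are omitted from this sketch.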
Input to the precompiler: The primary input for the precompiler consists of statements in the host programming language and embedded SQL statements.
Important: The size of a source program that DB2 can precompile is limited by the region size and the virtual memory available to the precompiler. The maximum region size and memory available to the DB2 precompiler is usually around 8 MB, but these amounts vary with each system installation.

You can use the SQL INCLUDE statement to get secondary input from the include library, SYSLIB. The SQL INCLUDE statement reads input from the specified member of SYSLIB until it reaches the end of the member.

Another preprocessor, such as the PL/I macro preprocessor, can generate source statements for the precompiler. Any preprocessor that runs before the precompiler must be able to pass on SQL statements. Similarly, other preprocessors can process the source code after you precompile and before you compile or assemble.

There are limits on the forms of source statements that can pass through the precompiler. For example, constants, comments, and other source syntax that are not accepted by the host compilers (such as a missing right brace in C) can interfere with precompiler source scanning and cause errors. You might want to run the host compiler before the precompiler to find the source statements that are unacceptable to the host compiler. At this point you can ignore the compiler error messages for SQL statements. After the source statements are free of unacceptable compiler errors, you can then perform the normal DB2 program preparation process for that host language.

The following restrictions apply only to the DB2 precompiler:
v You must write host language statements and SQL statements using the same margins, as specified in the precompiler option MARGINS.
v The input data set, SYSIN, must have the attributes RECFM F or FB, LRECL 80.
v SYSLIB must be a partitioned data set, with attributes RECFM F or FB, LRECL 80.
v Input from the INCLUDE library cannot contain other precompiler INCLUDE statements.

Output from the precompiler: The following sections describe various kinds of output from the precompiler.
Listing output: The output data set, SYSPRINT, used to print output from the precompiler, has an LRECL of 133 and a RECFM of FBA. This data set uses the CCSID of the source program. Statement numbers in the output of the precompiler listing always display as they appear in the listing. However, DB2 stores statement numbers greater than 32767 as 0 in the DBRM.
The DB2 precompiler writes the following information in the SYSPRINT data set:
v Precompiler source listing. If the DB2 precompiler option SOURCE is specified, a source listing is produced. The source listing includes precompiler source statements, with line numbers that are assigned by the precompiler.
v Precompiler diagnostics. The precompiler produces diagnostic messages that include precompiler line numbers of statements that have errors.
v Precompiler cross-reference listing. If the DB2 precompiler option XREF is specified, a cross-reference listing is produced. The cross-reference listing shows the precompiler line numbers of SQL statements that refer to host names and columns.

Terminal diagnostics: If a terminal output file, SYSTERM, is present, the DB2 precompiler writes diagnostic messages to it. A portion of the source statement accompanies the messages in this file. You can often use the SYSTERM file instead of the SYSPRINT file to find errors. This data set uses EBCDIC.

Modified source statements: The DB2 precompiler writes the source statements that it processes to SYSCIN, the input data set to the compiler or assembler. This data set must have attributes RECFM F or FB, and LRECL 80. The modified source code contains calls to the DB2 language interface. The SQL statements that the calls replace appear as comments. This data set uses the CCSID of the source program.

Database request modules: The major output from the precompiler is a database request module (DBRM). That data set contains the SQL statements and host variable information extracted from the source program, along with information that identifies the program and ties the DBRM to the translated source statements. It becomes the input to the bind process.
The data set requires space to hold all the SQL statements plus space for each host variable name and some header information. The header information alone requires approximately two records for each DBRM, 20 bytes for each SQL record, and 6 bytes for each host variable. For an exact format of the DBRM, see the DBRM mapping macros, DSNXDBRM and DSNXNBRM, in library prefix.SDSNMACS. The DCB attributes of the data set are RECFM FB, LRECL 80. The precompiler sets the characteristics. You can use IEBCOPY, IEHPROGM, TSO commands COPY and DELETE, or other PDS management tools for maintaining these data sets.
In a DBRM, the SQL statements and the list of host variable names use the following character encoding schemes:
v EBCDIC, for the result of a DB2 Version 8 precompilation with NEWFUN NO or a precompilation in an earlier release of DB2
v Unicode UTF-8, for the result of a DB2 Version 8 precompilation with NEWFUN YES
All other character fields in a DBRM use EBCDIC. The current release marker (DBRMMRIC) in the header of a DBRM is marked according to the release of the precompiler, regardless of the value of NEWFUN. In a Version 8 precompilation, the DBRM dependency marker (DBRMPDRM) in the header of a DBRM is marked for Version 8 if the value of NEWFUN is YES; otherwise, it is not marked for Version 8.

The DB2 language preparation procedures in job DSNTIJMV use the DISP=OLD parameter to enforce data integrity. However, the installation process converts the DISP=OLD parameter for the DBRM library data set to DISP=SHR, which can cause data integrity problems when you run multiple precompilation jobs. If you plan to run multiple precompilation jobs and are not using the DFSMSdfp partitioned data set extended (PDSE), you must change the DB2 language preparation procedures (DSNHCOB, DSNHCOB2, DSNHICOB, DSNHFOR, DSNHC, DSNHPLI, DSNHASM, DSNHSQL) to specify the DISP=OLD parameter instead of the DISP=SHR parameter.

Binding on another system: It is not necessary to precompile the program on the same DB2 system on which you bind the DBRM and run the program. In particular, you can bind a DBRM at the current release level and run it on a DB2 subsystem at the previous release level, if the original program does not use any properties of DB2 that are unique to the current release. Of course, you can run applications on the current release that were previously bound on systems at the previous release level.
LIMITS(FIXEDBIN(63), FIXEDDEC(31))
   These options are required for LOB support.
SIZE(nnnnnn)
   You might need to increase the SIZE value (nnnnnn) so that the user region is large enough for the DB2 coprocessor. Do not specify SIZE(MAX).
v Include DD statements for the following data sets in the JCL for your compile step:
   DB2 load library (prefix.SDSNLOAD)
      The DB2 coprocessor calls DB2 modules to process the SQL statements. You therefore need to include the name of the DB2 load library data set in the STEPLIB concatenation for the compile step.
   DBRM library
      The DB2 coprocessor produces a DBRM. DBRMs and the DBRM library are described in Output from the precompiler on page 476. You need to include a DBRMLIB DD statement that specifies the DBRM library data set.
   Library for SQL INCLUDE statements
      If your program contains SQL INCLUDE member-name statements that specify secondary input to the source program, you need to include the name of the data set that contains member-name in the SYSLIB concatenation for the compile step.
You might need to increase the SIZE value so that the user region is large enough for the DB2 coprocessor. To increase the SIZE value, specify the following option, where nnnnnn is the SIZE value that you want:
SIZE(nnnnnn)
Do not specify SIZE(MAX).
v Include DD statements for the following data sets in the JCL for your compile step:
   DB2 load library (prefix.SDSNLOAD)
      The DB2 coprocessor calls DB2 modules to process the SQL statements. You therefore need to include the name of the DB2 load library data set in the STEPLIB concatenation for the compile step.
   DBRM library
      The DB2 coprocessor produces a DBRM. DBRMs and the DBRM library are described in Output from the precompiler on page 476. You need to include a DBRMLIB DD statement that specifies the DBRM library data set.
   Library for SQL INCLUDE statements
      If your program contains EXEC SQL INCLUDE statements other than EXEC SQL INCLUDE SQLCA and EXEC SQL INCLUDE SQLDA, you need to include the SYSLIB DD statement to indicate the include library and the C header files.
      Note: When you use both EXEC SQL INCLUDE and #include statements in a C++ program, the member names that you use for the statements must be unique.
LIB
   You need to specify the LIB option when you specify the SQL option, whether or not you have any COPY, BASIS, or REPLACE statements in your program.
LIMITS(FIXEDBIN(63), FIXEDDEC(31))
   These options are required for LOB support.
SIZE(nnnnnn)
   You might need to increase the SIZE value (nnnnnn) so that the user region is large enough for the DB2 coprocessor. Do not specify SIZE(MAX).
v Include DD statements for the following data sets in the JCL for your compile step:
   DB2 load library (prefix.SDSNLOAD)
      The DB2 coprocessor calls DB2 modules to process the SQL statements. You therefore need to include the name of the DB2 load library data set in the STEPLIB concatenation for the compile step.
   DBRM library
      The DB2 coprocessor produces a DBRM. DBRMs and the DBRM library are described in Output from the precompiler on page 476. You need to include a DBRMLIB DD statement that specifies the DBRM library data set.
   Library for SQL INCLUDE statements
      If your program contains SQL INCLUDE member-name statements that specify secondary input to the source program, you need to include the name of the data set that contains member-name in the SYSLIB concatenation for the compile step.
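For illustration, if the compile step uses the Enterprise COBOL compiler with the DB2 coprocessor, the extra DD statements might be added as in the following sketch. The compiler program name IGYCRCTL, the compiler library name IGY.SIGYCOMP, the data set names, and the member name PROGA are assumptions; only the need for prefix.SDSNLOAD in STEPLIB, a DBRMLIB DD statement, and a SYSLIB DD statement for SQL INCLUDE members comes from the description above.

//COB      EXEC PGM=IGYCRCTL,PARM='SQL'
//STEPLIB  DD  DISP=SHR,DSN=IGY.SIGYCOMP             Enterprise COBOL compiler library (assumed name)
//         DD  DISP=SHR,DSN=prefix.SDSNLOAD          DB2 load library, so the coprocessor can call DB2 modules
//DBRMLIB  DD  DISP=SHR,DSN=USER.DBRMLIB.DATA(PROGA) DBRM that the coprocessor produces
//SYSLIB   DD  DISP=SHR,DSN=USER.SRCLIB.DATA         members that SQL INCLUDE statements name
//SYSIN    DD  DISP=SHR,DSN=USER.SRC.COBOL(PROGA)

The usual compiler DD statements, such as SYSLIN, SYSPRINT, and the SYSUTn work files, are unchanged and are omitted from the sketch.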
LIMITS(FIXEDBIN(63), FIXEDDEC(31))
   These options are required for LOB support.
SIZE(nnnnnn)
   You might need to increase the SIZE value so that the user region is large enough for the processing of the DB2 coprocessor. Do not specify SIZE(MAX).
v Include DD statements for the following data sets in the JCL for your compile step:
   DB2 load library (prefix.SDSNLOAD)
      The PL/I SQL preprocessor calls the DB2 coprocessor APIs to do the SQL statement processing. You therefore need to include the name of the DB2 load library data set in the STEPLIB concatenation for the compile step.
   DBRM library
      The DB2 coprocessor produces a DBRM. DBRMs and the DBRM library are described in Output from the precompiler on page 476. You need to include a DBRMLIB DD statement that specifies the DBRM library data set.
   Library for SQL INCLUDE statements
      If your program contains SQL INCLUDE member-name statements that specify secondary input to the source program, you need to include the name of the data set that contains member-name in the SYSLIB concatenation for the compile step.
DB2 precompiler: In the modified source from the DB2 precompiler, hv1 and hv2 are represented to DB2 through SQLDA in the following way, without CCSIDs:
for hv1: NO CCSID
   20 SQL-PVAR-NAMEL1 PIC S9(4) COMP-4 VALUE +0.
   20 SQL-PVAR-NAMEC1 PIC X(30) VALUE ' '.
for hv2: NO CCSID
   20 SQL-PVAR-NAMEL2 PIC S9(4) COMP-4 VALUE +0.
   20 SQL-PVAR-NAMEC2 PIC X(30) VALUE ' '.
DB2 coprocessor: In the modified source from the DB2 coprocessor with the National Character Support for COBOL, hv1 and hv2 are represented to DB2 in the following way, with CCSIDs: (Assume that the source CCSID is 1140.)
for hv1 and hv2, the value for CCSID is set to 1140 (474x) in the input SQLDA of the INSERT statement:
   7F00000474000000007Fx

To ensure that no discrepancy exists between the column with FOR BIT DATA and the host variable with CCSID 1140, add the following statement for :hv1, or use the DB2 precompiler:

EXEC SQL DECLARE :hv1 VARIABLE FOR BIT DATA END-EXEC.

For hv1 declared with FOR BIT DATA, the value for the CCSID in SQL-AVAR-NAME-DATA is set to FFFFx instead of 474x:
   7F0000FFFF000000007Fx   <<= with DECLARE :hv1 VARIABLE FOR BIT DATA
   7F00000474000000007Fx   <<= without
PL/I DB2 coprocessor: You can specify whether CCSIDs are to be associated with host variables by using the following PL/I SQL preprocessor options:
CCSID0
   Specifies that the PL/I SQL preprocessor is not to set the CCSIDs for all host variables unless they are defined with the SQL DECLARE :hv VARIABLE statement.
NOCCSID0
   Specifies that the PL/I SQL preprocessor is to set the CCSIDs for all host variables.
For more information about these options, see the IBM Enterprise PL/I for z/OS Programming Guide.
If you are using the DB2 precompiler, you can specify SQL processing options in one of the following ways:
v With DSNH operands
v With the PARM.PC option of the EXEC JCL statement
v In DB2I panels

If you are using the DB2 coprocessor, specify the DB2 coprocessor options in the following way:
v For C or C++, specify the options as the argument of the SQL compiler option.
v For COBOL, specify the options as the argument of the SQL compiler option.
v For PL/I, specify the options as the argument of the PP(SQL(option,...)) compiler option.

DB2 assigns default values for any SQL processing options for which you do not explicitly specify a value. Those defaults are the values that are specified in the APPLICATION PROGRAMMING DEFAULTS installation panels.

Table of SQL processing options: Table 64 shows the options that you can specify when you use the DB2 precompiler or the DB2 coprocessor. The table also includes abbreviations for those options. Not all options apply to all host languages. For information about which options are ignored for a particular host language, see Table 64.
Table 64 uses a vertical bar (|) to separate mutually exclusive options, and brackets ([ ]) to indicate that you can sometimes omit the enclosed option.
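For example, with a DB2-supplied preparation procedure whose precompile step is named PC (which is what the PARM.PC form assumes), the options can be passed on the EXEC statement. The procedure name and the option values shown are illustrative only:

//STEP1  EXEC DSNHCOB,PARM.PC='HOST(IBMCOB),NEWFUN(YES),SOURCE,XREF'

With the PL/I coprocessor, a comparable option string is passed through the compiler option PP(SQL(option,...)) as described above; the exact quoting conventions depend on the compiler.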
Table 64. SQL processing options

APOST1
   Indicates that the DB2 precompiler is to use the apostrophe (') as the string delimiter in host language statements that it generates. This option is not available in all languages; see Table 66 on page 492.
   APOST and QUOTE are mutually exclusive options. The default is in the field STRING DELIMITER on Application Programming Defaults Panel 1 when DB2 is installed. If STRING DELIMITER is the apostrophe ('), APOST is the default.

APOSTSQL
   Recognizes the apostrophe (') as the string delimiter and the quotation mark (") as the SQL escape character within SQL statements.
   APOSTSQL and QUOTESQL are mutually exclusive options. The default is in the field SQL STRING DELIMITER on Application Programming Defaults Panel 1 when DB2 is installed. If SQL STRING DELIMITER is the apostrophe ('), APOSTSQL is the default.

ATTACH(TSO|CAF|RRSAF)
   Specifies the attachment facility that the application uses to access DB2. TSO, CAF, and RRSAF applications that load the attachment facility can use this option to specify the correct attachment facility, instead of coding a dummy DSNHLI entry point. This option is not available for Fortran applications.
   The default is ATTACH(TSO).
CCSID(n)
   Specifies the numeric value n of the CCSID in which the source program is written. The number n must be 65535 or in the range 1 through 65533, and must be an EBCDIC CCSID. The default setting is the EBCDIC system CCSID as specified on the panel DSNTIPF during installation.
   The DB2 coprocessor uses the following process to determine the CCSID of the source statements:
   1. If the CCSID of the source program is specified by a compiler option, such as the COBOL CODEPAGE compiler option, the DB2 coprocessor uses that CCSID.
      a. If the CCSID suboption of the SQL compiler option is specified and contains a valid EBCDIC CCSID, that CCSID is used.
      b. If the CCSID suboption of the SQL compiler option is not specified, and the compiler supports an option for specifying the CCSID, such as the COBOL CODEPAGE compiler option, the default for that compiler option is used.
      c. If the CCSID suboption of the SQL compiler option is not specified, and the compiler does not support an option for specifying the CCSID, the default CCSID from DSNHDECP is used.
      d. If the CCSID suboption of the SQL option is specified and contains an invalid CCSID, compilation terminates.
   If you specify CCSID(1026) or CCSID(1155), the DB2 coprocessor does not support the code point 'FC'X for the double quotation mark (").

COMMA
   Recognizes the comma (,) as the decimal point indicator in decimal or floating point literals in the following cases:
   v For static SQL statements in COBOL programs
   v For dynamic SQL statements, when the value of installation parameter DYNRULS is NO and the package or plan that contains the SQL statements has DYNAMICRULES bind, define, or invoke behavior.
   COMMA and PERIOD are mutually exclusive options. The default (COMMA or PERIOD) is chosen under DECIMAL POINT IS on Application Programming Defaults Panel 1 when DB2 is installed.

CONNECT(2|1) CT(2|1)
   Determines whether to apply type 1 or type 2 CONNECT statement rules.
   CONNECT(2) Default: Apply rules for the CONNECT (Type 2) statement.
   CONNECT(1) Apply rules for the CONNECT (Type 1) statement.
   If you do not specify the CONNECT option when you precompile a program, the rules of the CONNECT (Type 2) statement apply. See Precompiler options for DRDA access on page 445 for more information about this option, and Chapter 5 of DB2 SQL Reference for a comparison of CONNECT (Type 1) and CONNECT (Type 2).
DATE(ISO|USA|EUR|JIS|LOCAL)
   Specifies that date output should always return in a particular format, regardless of the format that is specified as the location default. For a description of these formats, see Chapter 2 of DB2 SQL Reference.
   The default is specified in the field DATE FORMAT on Application Programming Defaults Panel 2 when DB2 is installed. The default format is determined by the installation defaults of the system where the program is bound, not by the installation defaults of the system where the program is precompiled.
   You cannot use the LOCAL option unless you have a date exit routine.

DEC(15|31) D(15.s|31.s)
   Specifies the maximum precision for decimal arithmetic operations. See Using 15-digit and 31-digit precision for decimal numbers on page 16.
   The default is in the field DECIMAL ARITHMETIC on Application Programming Defaults Panel 1 when DB2 is installed.
   If the form Dpp.s is specified, pp must be either 15 or 31, and s, which represents the minimum scale to be used for division, must be a number between 1 and 9.

FLAG(I|W|E|S)1
   Suppresses diagnostic messages below the specified severity level (Informational, Warning, Error, and Severe error for severity codes 0, 4, 8, and 12 respectively).
   The default setting is FLAG(I).

FLOAT(S390|IEEE)
   Determines whether the contents of floating-point host variables in assembler, C, C++, or PL/I programs are in IEEE floating-point format or System/390 floating-point format. DB2 ignores this option if the value of HOST is anything other than ASM, C, CPP, or PLI.
   The default setting is FLOAT(S390).

GRAPHIC
   Indicates that the source code might use mixed data, and that X'0E' and X'0F' are special control characters (shift-out and shift-in) for EBCDIC data.
   GRAPHIC and NOGRAPHIC are mutually exclusive options. The default (GRAPHIC or NOGRAPHIC) is chosen under MIXED DATA on Application Programming Defaults Panel 1 when DB2 is installed.
HOST(ASM|C[(FOLD)]|CPP[(FOLD)]|IBMCOB|PLI|FORTRAN)1
   Defines the host language containing the SQL statements. Use IBMCOB for Enterprise COBOL for z/OS and OS/390. If you specify COBOL or COB2, a warning message is issued and the precompiler uses IBMCOB.
   For C, specify:
   v C if you do not want DB2 to fold lowercase letters in SBCS SQL ordinary identifiers to uppercase
   v C(FOLD) if you want DB2 to fold lowercase letters in SBCS SQL ordinary identifiers to uppercase
   For C++, specify:
   v CPP if you do not want DB2 to fold lowercase letters in SBCS SQL ordinary identifiers to uppercase
   v CPP(FOLD) if you want DB2 to fold lowercase letters in SBCS SQL ordinary identifiers to uppercase
   If you omit the HOST option, the DB2 precompiler issues a level-4 diagnostic message and uses the default value for this option. The default is in the field LANGUAGE DEFAULT on Application Programming Defaults Panel 1 when DB2 is installed.
   This option also sets the language-dependent defaults; see Table 66 on page 492.

LEVEL[(aaaa)] L
   Defines the level of a module, where aaaa is any alphanumeric value of up to seven characters. This option is not recommended for general use, and the DSNH CLIST and the DB2I panels do not support it. For more information, see Setting the program level on page 502.
   For assembler, C, C++, Fortran, and PL/I, you can omit the suboption (aaaa). The resulting consistency token is blank. For COBOL, you need to specify the suboption.

LINECOUNT(n)
   Defines the number of lines per page to be n for the DB2 precompiler listing. This includes header lines inserted by the DB2 precompiler.
   The default setting is LINECOUNT(60).

MARGINS(m,n[,c])
   Specifies what part of each source record contains host language or SQL statements and, for assembler, where column continuations begin. The first option (m) is the beginning column for statements. The second option (n) is the ending column for statements. The third option (c) specifies, for assembler, where continuations begin. Otherwise, the DB2 precompiler places a continuation indicator in the column immediately following the ending column. Margin values can range from 1 to 80.
   Default values depend on the HOST option you specify; see Table 66 on page 492. The DSNH CLIST and the DB2I panels do not support this option. In assembler, the margin option must agree with the ICTL instruction, if present in the source.
NEWFUN(YES|NO)
   Indicates whether to accept the syntax for DB2 Version 8 functions.
   NEWFUN(YES) causes the precompiler to accept DB2 Version 8 syntax. A successful precompilation produces a DBRM that can be bound only with Version 8 and later releases, even if the DBRM does not use any Version 8 syntax.
   NEWFUN(NO) causes the precompiler to reject any syntax that DB2 Version 8 introduces. A successful precompilation produces a DBRM that can be bound with any release of DB2, including DB2 Version 8.
   During migration of DB2 Version 8 from an earlier release, the default is NO. At the end of enabling-new-function mode, the default changes from NO to YES. If Version 8 is a new installation of DB2, the default is YES. For information about enabling-new-function mode during installation, see the DB2 Installation Guide.

NOFOR
   In static SQL, eliminates the need for the FOR UPDATE or FOR UPDATE OF clause in DECLARE CURSOR statements. When you use NOFOR, your program can make positioned updates to any columns that the program has DB2 authority to update.
   When you do not use NOFOR, if you want to make positioned updates to any columns that the program has DB2 authority to update, you need to specify FOR UPDATE with no column list in your DECLARE CURSOR statements. The FOR UPDATE clause with no column list applies to static or dynamic SQL statements.
   Whether you use or do not use NOFOR, you can specify FOR UPDATE OF with a column list to restrict updates to only the columns named in the clause and specify the acquisition of update locks.
   You imply NOFOR when you use the option STDSQL(YES).
   If the resulting DBRM is very large, you might need extra storage when you specify NOFOR or use the FOR UPDATE clause with no column list.

NOGRAPHIC
   Indicates the use of X'0E' and X'0F' in a string, but not as control characters.
   GRAPHIC and NOGRAPHIC are mutually exclusive options. The default (GRAPHIC or NOGRAPHIC) is chosen under MIXED DATA on Application Programming Defaults Panel 1 when DB2 is installed.

NOOPTIONS2 NOOPTN
   Suppresses the DB2 precompiler options listing.

NOPADNTSTR3
   Indicates that output host variables that are NUL-terminated strings are not padded with blanks. That is, additional blanks are not inserted before the NUL-terminator is placed at the end of the string.
   PADNTSTR and NOPADNTSTR are mutually exclusive options. The default (PADNTSTR or NOPADNTSTR) is chosen under PAD NUL-TERMINATED on Application Programming Defaults Panel 2 when DB2 is installed.

NOSOURCE2 NOS
   Suppresses the DB2 precompiler source listing. This is the default.

NOXREF2 NOX
   Suppresses the DB2 precompiler cross-reference listing. This is the default.

ONEPASS1 ON
   Processes in one pass, to avoid the additional processing time for making two passes. Declarations must appear before SQL references.
   Default values depend on the HOST option specified; see Table 66 on page 492. ONEPASS and TWOPASS are mutually exclusive options.
OPTIONS1 OPTN
   Lists DB2 precompiler options. This is the default.

PADNTSTR3
   Indicates that output host variables that are NUL-terminated strings are padded with blanks with the NUL-terminator placed at the end of the string.
   PADNTSTR and NOPADNTSTR are mutually exclusive options. The default (PADNTSTR or NOPADNTSTR) is chosen under PAD NUL-TERMINATED on Application Programming Defaults Panel 2 when DB2 is installed.

PERIOD
   Recognizes the period (.) as the decimal point indicator in decimal or floating point literals in the following cases:
   v For static SQL statements in COBOL programs
   v For dynamic SQL statements, when the value of installation parameter DYNRULS is NO and the package or plan that contains the SQL statements has DYNAMICRULES bind, define, or invoke behavior.
   COMMA and PERIOD are mutually exclusive options. The default (COMMA or PERIOD) is chosen under DECIMAL POINT IS on Application Programming Defaults Panel 1 when DB2 is installed.

QUOTE1 Q
   Indicates that the DB2 precompiler is to use the quotation mark (") as the string delimiter in host language statements that it generates. QUOTE is valid only for COBOL applications.
   QUOTE is not valid for either of the following combinations of precompiler options:
   v CCSID(1026) and HOST(IBMCOB)
   v CCSID(1155) and HOST(IBMCOB)
   The default is in the field STRING DELIMITER on Application Programming Defaults Panel 1 when DB2 is installed. If STRING DELIMITER is the quote (") or DEFAULT, then QUOTE is the default. APOST and QUOTE are mutually exclusive options.

QUOTESQL
   Recognizes the quotation mark (") as the string delimiter and the apostrophe (') as the SQL escape character within SQL statements. This option applies only to COBOL.
   The default is in the field SQL STRING DELIMITER on Application Programming Defaults Panel 1 when DB2 is installed. If SQL STRING DELIMITER is the quote (") or DEFAULT, QUOTESQL is the default. APOSTSQL and QUOTESQL are mutually exclusive options.

SOURCE S
   Produces a DB2 precompiler source listing, which includes the precompiler source statements with the line numbers that the precompiler assigns. (NOSOURCE, which suppresses the source listing, is the default.)
SQL(ALL|DB2)
   Indicates whether the source contains SQL statements other than those recognized by DB2 UDB for z/OS.
   SQL(ALL) is recommended for application programs whose SQL statements must execute on a server other than DB2 UDB for z/OS using DRDA access. SQL(ALL) indicates that the SQL statements in the program are not necessarily for DB2 UDB for z/OS. Accordingly, the SQL statement processor then accepts statements that do not conform to the DB2 syntax rules. The SQL statement processor interprets and processes SQL statements according to distributed relational database architecture (DRDA) rules. The SQL statement processor also issues an informational message if the program attempts to use IBM SQL reserved words as ordinary identifiers. SQL(ALL) does not affect the limits of the SQL statement processor.
   SQL(DB2), the default, means to interpret SQL statements and check syntax for use by DB2 UDB for z/OS. SQL(DB2) is recommended when the database server is DB2 UDB for z/OS.

SQLFLAG1(IBM|STD[(ssname[,qualifier])])
   Specifies the standard that is used to check the syntax of SQL statements.

STDSQL(NO|YES)4
   Indicates to which rules the output statements should conform.
   STDSQL(YES) indicates that the precompiled SQL statements in the source program conform to certain rules of the SQL standard. STDSQL(NO) indicates conformance to DB2 rules.
   The default is in the field STD SQL LANGUAGE on Application Programming Defaults Panel 2 when DB2 is installed.
   STDSQL(YES) automatically implies the NOFOR option.

TIME(ISO|USA|EUR|JIS|LOCAL)
   Specifies that time output always return in a particular format, regardless of the format that is specified as the location default. For a description of these formats, see Chapter 2 of DB2 SQL Reference.
   The default is specified in the field TIME FORMAT on Application Programming Defaults Panel 2 when DB2 is installed. The default format is determined by the installation defaults of the system where the program is bound, not by the installation defaults of the system where the program is precompiled.
   You cannot use the LOCAL option unless you have a time exit routine.

TWOPASS1 TW
   Processes in two passes, so that declarations need not precede references.
   Default values depend on the HOST option specified; see Table 66 on page 492. ONEPASS and TWOPASS are mutually exclusive options.
VERSION(aaaa|AUTO)
   Defines the version identifier of a package, program, and the resulting DBRM. When you specify VERSION, the SQL statement processor creates a version identifier in the program and DBRM. This affects the size of the load module and DBRM. DB2 uses the version identifier when you bind the DBRM to a plan or package.
   If you do not specify a version at precompile time, then an empty string is the default version identifier. If you specify AUTO, the SQL statement processor uses the consistency token to generate the version identifier. If the consistency token is a timestamp, the timestamp is converted into ISO character format and used as the version identifier. The timestamp used is based on the System/370 Store Clock value. For information about using VERSION, see Identifying a package version on page 502.
   For more information about the rules for version identifiers, see DB2 SQL Reference.

XREF1
   Includes a sorted cross-reference listing of symbols used in SQL statements in the listing output.
Notes:
1. This option is ignored when the DB2 coprocessor precompiles the application.
2. This option is always in effect when the DB2 coprocessor precompiles the application.
3. This option applies only for a C or C++ application.
4. You can use STDSQL(86) as in prior releases of DB2. The DB2 processor treats it the same as STDSQL(YES).
5. Precompiler options do not affect ODBC behavior.
Defaults for options of the SQL statement processor: Some SQL statement processor options have defaults based on values specified on the Application Programming Defaults panels. Table 65 shows those options and defaults:
Table 65. IBM-supplied installation default SQL statement processing options. The installer can change these defaults.

STRING DELIMITER
   Install default: quotation mark ("). Equivalent SQL statement processing option: QUOTE. Available options: APOST, QUOTE.
SQL STRING DELIMITER
   Install default: quotation mark ("). Equivalent option: QUOTESQL. Available options: APOSTSQL, QUOTESQL.
DECIMAL POINT IS
   Install default: PERIOD. Equivalent option: PERIOD. Available options: COMMA, PERIOD.
DATE FORMAT
   Install default: ISO. Equivalent option: DATE(ISO). Available options: DATE(ISO|USA|EUR|JIS|LOCAL).
DECIMAL ARITHMETIC
   Install default: DEC15. Equivalent option: DEC(15). Available options: DEC(15|31).
MIXED DATA
   Install default: NO. Equivalent option: NOGRAPHIC. Available options: GRAPHIC, NOGRAPHIC.
LANGUAGE DEFAULT
   Install default: COBOL. Equivalent option: HOST(COBOL). Available options: HOST(ASM|C[(FOLD)]|CPP[(FOLD)]|IBMCOB|FORTRAN|PLI).
STD SQL LANGUAGE
   Install default: NO. Equivalent option: STDSQL(NO). Available options: STDSQL(YES|NO|86).
TIME FORMAT
   Install default: ISO. Equivalent option: TIME(ISO). Available options: TIME(ISO|USA|EUR|JIS|LOCAL).
Notes: For dynamic SQL statements, another application programming default, USE FOR DYNAMICRULES, determines whether DB2 uses the application programming default or the SQL statement processor option for the following install options:
v STRING DELIMITER
v SQL STRING DELIMITER
v DECIMAL POINT IS
v DECIMAL ARITHMETIC
v MIXED DATA
If the value of USE FOR DYNAMICRULES is YES, then dynamic SQL statements use the application programming defaults. If the value of USE FOR DYNAMICRULES is NO, then dynamic SQL statements in packages or plans with bind, define, and invoke behavior use the SQL statement processor options. See Using DYNAMICRULES to specify behavior of dynamic SQL statements on page 502 for an explanation of bind, define, and invoke behavior.
Some SQL statement processor options have default values based on the host language. Some options do not apply to some languages. Table 66 shows the language-dependent options and defaults.
Table 66. Language-dependent DB2 precompiler options and defaults

HOST value   Defaults
ASM          APOST1, APOSTSQL1, PERIOD1, TWOPASS, MARGINS(1,71,16)
C or CPP     APOST1, APOSTSQL1, PERIOD1, ONEPASS, MARGINS(1,72)
IBMCOB       QUOTE2, QUOTESQL2, PERIOD, ONEPASS1, MARGINS(8,72)1
FORTRAN      APOST1, APOSTSQL1, PERIOD1, ONEPASS1, MARGINS(1,72)1
PLI          APOST1, APOSTSQL1, PERIOD1, ONEPASS, MARGINS(2,72)

Notes:
1. Forced for this language; no alternative allowed.
2. The default is chosen on Application Programming Defaults Panel 1 when DB2 is installed. The IBM-supplied installation defaults for string delimiters are QUOTE (host language delimiter) and QUOTESQL (SQL escape character). The installer can replace the IBM-supplied defaults with other defaults.

The precompiler options you specify override any defaults in effect.
SQL statement processing defaults for dynamic statements: Generally, dynamic statements use the defaults that are specified during installation. However, if the value of DSNHDECP parameter DYNRULS is NO, then you can use these options for dynamic SQL statements in packages or plans with bind, define, or invoke behavior:
v COMMA or PERIOD
v APOST or QUOTE
v APOSTSQL or QUOTESQL
v GRAPHIC or NOGRAPHIC
v DEC(15) or DEC(31)
CICS (continued)
You can run CICS applications only from CICS address spaces. This restriction applies to the RUN option on the second program DSN command processor. All of those possibilities occur in TSO.
You can append JCL from a job created by the DB2 Program Preparation panels to the CICS translator JCL to prepare an application program. To run the prepared program under CICS, you might need to define programs and transactions to CICS. Your system programmer must make the appropriate CICS resource or table entries. For information on the required resource entries, see Part 2 of DB2 Installation Guide and CICS Transaction Server for z/OS Resource Definition Guide.
prefix.SDSNSAMP contains examples of the JCL used to prepare and run a CICS program that includes SQL statements. For a list of CICS program names and JCL member names, see Table 192 on page 1018. The set of JCL includes:
v PL/I macro phase
v DB2 precompiling
v CICS Command Language Translation
v Compiling the host language source statements
v Link-editing the compiler output
v Binding the DBRM
v Running the prepared application
TSO and batch
Include the DB2 TSO attachment facility language interface module (DSNELI) or DB2 call attachment facility language interface module (DSNALI). For a program that uses 31-bit addressing, link-edit the program with the AMODE=31 and RMODE=ANY options. For more details, see the appropriate z/OS publication.
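For illustration only, a minimal link-edit step for a TSO attachment facility program might look like the following sketch. The data set names, the load module name PROGA, and the use of the binder alias IEWL are assumptions, not values from this book; your site's preparation procedures normally perform this step for you.

//LKED     EXEC PGM=IEWL,PARM='AMODE=31,RMODE=ANY'
//SYSLIB   DD  DISP=SHR,DSN=prefix.SDSNLOAD         resolves the INCLUDE of DSNELI
//SYSLMOD  DD  DISP=SHR,DSN=USER.RUNLIB.LOAD(PROGA)
//SYSPRINT DD  SYSOUT=*
//SYSLIN   DD  DSN=&&OBJ,DISP=(OLD,DELETE)          object module from the compile step
//         DD  *
  INCLUDE SYSLIB(DSNELI)
  NAME PROGA(R)
/*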
IMS
Include the DB2 IMS (Version 1 Release 3 or later) language interface module (DFSLI000). Also, the IMS RESLIB must precede the SDSNLOAD library in the link list, JOBLIB, or STEPLIB concatenations.
CICS
Include the DB2 CICS language interface module (DSNCLI). You can link DSNCLI with your program in either 24-bit or 31-bit addressing mode (AMODE=31). If your application program runs in 31-bit addressing mode, you should link-edit the DSNCLI stub to your application with the attributes AMODE=31 and RMODE=ANY so that your application can run above the 16-MB line. For more information on compiling and link-editing CICS application programs, see the appropriate CICS manual.
You also need the CICS EXEC interface module appropriate for the programming language. CICS requires that this module be the first control section (CSECT) in the final load module.
The size of the executable load module that is produced by the link-edit step varies depending on the values that the SQL statement processor inserts into the source code of the program. For more information about compiling and link-editing, see Using JCL procedures to prepare applications on page 512. For more information about link-editing attributes, see the appropriate z/OS manuals. For details on DSNH, see Part 3 of DB2 Command Reference.
Exception
You do not need to bind a DBRM if the only SQL statement in the program is SET CURRENT PACKAGESET.
Because you do not need a plan or package to execute the SET CURRENT PACKAGESET statement, the ENCODING bind option does not affect the SET CURRENT PACKAGESET statement. An application that needs to provide a host variable value in an encoding scheme other than the system default encoding scheme must use the DECLARE VARIABLE statement to specify the encoding scheme of the host variable.
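For example, a COBOL program could declare the encoding scheme of one host variable as follows. The host variable name HV1 and the Unicode CCSID 1208 are illustrative assumptions, not values taken from this book:

EXEC SQL DECLARE :HV1 VARIABLE CCSID 1208 END-EXEC.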
You must bind plans locally, whether or not they reference packages that run remotely. However, you must bind the packages that run at remote locations at those remote locations. From a DB2 requester, you can run a plan by naming it in the RUN subcommand, but you cannot run a package directly. You must include the package in a plan and then run the plan.
Then, include the remote package in the package list of a local plan, say PLANB, by using:
BIND PLAN (PLANB) PKLIST(PARIS.GROUP1.PROGA)
The ENCODING bind option has the following effect on a remote application:
v If you bind a package locally, which is recommended, and you specify the ENCODING bind option for the local package, the ENCODING bind option for the local package applies to the remote application.
v If you do not bind a package locally, and you specify the ENCODING bind option for the plan, the ENCODING bind option for the plan applies to the remote application.
v If you do not specify an ENCODING bind option for the package or plan at the local site, the value of APPLICATION ENCODING that was specified on installation panel DSNTIPF at the local site applies to the remote application.
When you bind or rebind, DB2 checks authorizations, reads and updates the catalog, and creates the package in the directory at the remote site. DB2 does not read or update catalogs or check authorizations at the local site.
If you specify the option EXPLAIN(YES) and you do not specify the option SQLERROR(CONTINUE), then PLAN_TABLE must exist at the location specified on the BIND or REBIND subcommand. This location could also be the default location.
If you bind with the option COPY, the COPY privilege must exist locally. DB2 performs authorization checking, reads and updates the catalog, and creates the package in the directory at the remote site. DB2 reads the catalog records related to the copied package at the local site.
If the local site is installed with time or date format LOCAL, and a package is created at a remote site using the COPY option, the COPY option causes DB2 at the remote site to convert values returned from the remote site in ISO format, unless an SQL statement specifies a different format.
After you bind a package, you can rebind, free, or bind it with the REPLACE option using either a local or a remote bind.

Turning an existing plan into packages to run remotely: If you have used DB2 before, you might have an existing application that you want to run at a remote location, using DRDA access. To do that, you need to rebind the DBRMs in the current plan as packages at the remote location. You also need a new plan that includes those remote packages in its package list.
Follow these instructions for each remote location:
1. Choose a name for a collection to contain all the packages in the plan, say REMOTE1. (You can use more than one collection if you like, but one is enough.)
2. Assuming that the server is a DB2 system, at the remote location execute:
   a. GRANT CREATE IN COLLECTION REMOTE1 TO authorization-name;
   b. GRANT BINDADD TO authorization-name;
   where authorization-name is the owner of the package.
3. Bind each DBRM as a package at the remote location, using the instructions under Binding packages at a remote location on page 496. Before run time, the package owner must have all the data access privileges needed at the remote location. If the owner does not yet have those privileges when you are binding, use the VALIDATE(RUN) option. The option lets you create the package, even if the authorization checks fail. DB2 checks the privileges again at run time.
4. Bind a new application plan at your local DB2, using these options:
PKLIST (location-name.REMOTE1.*) CURRENTSERVER (location-name)
where location-name is the value of LOCATION, in SYSIBM.LOCATIONS at your local DB2, that denotes the remote location at which you intend to run. You do not need to bind any DBRMs directly to that plan: the package list is sufficient.
When you now run the existing application at your local DB2, using the new application plan, these things happen:
v You connect immediately to the remote location named in the CURRENTSERVER option.
v When about to run a package, DB2 searches for it in the collection REMOTE1 at the remote location.
v Any UPDATE, DELETE, or INSERT statements in your application affect tables at the remote location.
v Any results from SELECT statements return to your existing application program, which processes them as though they came from your local DB2.
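Putting steps 1 through 4 together, the bind of the new local plan might look like the following sketch, where SANJOSE is a placeholder for the remote location name and PLANR is a hypothetical plan name:

BIND PLAN(PLANR) PKLIST(SANJOSE.REMOTE1.*) CURRENTSERVER(SANJOSE)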
You can include as many DBRMs in a plan as you wish. However, if you use a large number of DBRMs in a plan (more than 500, for example), you could have trouble maintaining the plan. To ease maintenance, you can bind each DBRM separately as a package, specifying the same collection for all packages bound, and then bind a plan specifying that collection in the plan's package list. If the design of the application prevents this method, see if your system administrator can increase the size of the EDM pool to be at least 10 times the size of either the largest database descriptor (DBD) or the plan, whichever is greater.

Including packages in a package list: To include packages in the package list of a plan, list them after the PKLIST keyword of BIND PLAN. To include an entire collection of packages in the list, use an asterisk after the collection name. For example,
PKLIST(GROUP1.*)
To bind DBRMs directly to the plan, and also include packages in the package list, use both MEMBER and PKLIST. The following example includes:
v The DBRMs PROG1 and PROG2
v All the packages in a collection called TEST2
v The packages PROGA and PROGC in the collection GROUP1
MEMBER(PROG1,PROG2) PKLIST(TEST2.*,GROUP1.PROGA,GROUP1.PROGC)
You must specify MEMBER, PKLIST, or both options. The plan that results consists of one of the following:
v Programs associated with DBRMs in the MEMBER list only
v Programs associated with packages and collections identified in PKLIST only
v A combination of the specifications on MEMBER and PKLIST
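As a complete subcommand, the fragments above could be combined as follows. PLANC is a hypothetical plan name; the DBRM, collection, and package names are the ones used in the example above:

BIND PLAN(PLANC) MEMBER(PROG1,PROG2) PKLIST(TEST2.*,GROUP1.PROGA,GROUP1.PROGC)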
You also need other identifiers. The consistency token alone uniquely identifies a DBRM bound directly to a plan, but it does not necessarily identify a unique package. When you bind DBRMs directly to a particular plan, you bind each one only once. But you can bind the same DBRM to many packages, at different locations and in different collections, and then you can include all those packages in the package list of the same plan. All those packages will have the same consistency token. As you might expect, there are ways to specify a particular location or a particular collection at run time.

Identifying the location: When your program executes an SQL statement, DB2 uses the value in the CURRENT SERVER special register to determine the location of the necessary package or DBRM. If the current server is your local DB2 subsystem and it does not have a location name, the value is blank.
You can change the value of CURRENT SERVER by using the SQL CONNECT statement in your program. If you do not use CONNECT, the value of CURRENT SERVER is the location name of your local DB2 subsystem (or blank, if your DB2 has no location name).

Identifying the collection: You can use the special register CURRENT PACKAGE PATH or CURRENT PACKAGESET (if CURRENT PACKAGE PATH is not set) to specify the collections that are to be used for package resolution. The CURRENT PACKAGESET special register contains the name of a single collection, and the CURRENT PACKAGE PATH special register contains a list of collection names.
If you do not set these registers, they are blank when your application begins to run and remain blank. In this case, DB2 searches the available collections, using methods described in Specifying the package list for the PKLIST option of BIND PLAN. However, explicitly specifying the intended collection by using the special registers can avoid a potentially costly search through a package list with many qualifying entries. In addition, DB2 uses the values in these special registers for applications that do not run under a plan. How DB2 uses these special registers is described in Using the special registers on page 500.
When you call a stored procedure, the special register CURRENT PACKAGESET contains the value that you specified for the COLLID parameter when you defined the stored procedure. When the stored procedure returns control to the calling program, DB2 restores this register to the value that it contained before the call.

Specifying the package list for the PKLIST option of BIND PLAN: The order in which you specify packages in a package list can affect run-time performance. Searching for the specific package involves searching the DB2 directory, which can be costly. When you use collection-id.* with the PKLIST keyword, you should specify first the collections in which DB2 is most likely to find a package.
For example, assume that you perform the following bind:
BIND PLAN (PLAN1) PKLIST (COLL1.*, COLL2.*, COLL3.*, COLL4.*)
Then you execute program PROG1. DB2 does the following package search:
1. Checks to see if program PROG1 is bound as part of the plan
2. Searches for COLL1.PROG1.timestamp
3. If it does not find COLL1.PROG1.timestamp, searches for COLL2.PROG1.timestamp
4. If it does not find COLL2.PROG1.timestamp, searches for COLL3.PROG1.timestamp
5. If it does not find COLL3.PROG1.timestamp, searches for COLL4.PROG1.timestamp

When both special registers CURRENT PACKAGE PATH and CURRENT PACKAGESET are blank: If you do not set these special registers, DB2 searches for a DBRM or a package in one of these sequences:
v At the local location (if CURRENT SERVER is blank or names that location explicitly), the order is:
  1. All DBRMs that are bound directly to the plan.
  2. All packages that are already allocated to the plan while the plan is running.
  3. All unallocated packages that are explicitly named in, and all collections that are completely included in, the package list of the plan. DB2 searches for packages in the order that they appear in the package list.
v At a remote location, the order is:
  1. All packages that are already allocated to the plan at that location while the plan is running.
  2. All unallocated packages that are explicitly named in, and all collections that are completely included in, the package list of the plan, whose locations match the value of CURRENT SERVER. DB2 searches for packages in the order that they appear in the package list.
If you use the BIND PLAN option DEFER(PREPARE), DB2 does not search all collections in the package list. See Using bind options to improve performance for distributed applications on page 456 for more information.

If the order of search is not important: In many cases, DB2's order of search is not important to you and does not affect performance. For an application that runs only at your local DB2, you can name every package differently and include them all in the same collection. The package list on your BIND PLAN subcommand can read:
PKLIST (collection.*)
You can add packages to the collection even after binding the plan. DB2 lets you bind packages having the same package name into the same collection only if their version IDs are different. If your application uses DRDA access, you must bind some packages at remote locations. Use the same collection name at each location, and identify your package list as:
PKLIST (*.collection.*)
If you use an asterisk for part of a name in a package list, DB2 checks the authorization for the package to which the name resolves at run time. To avoid the checking at run time in the preceding example, you can grant EXECUTE authority for the entire collection to the owner of the plan before you bind the plan.

Using the special registers: If you set the special register CURRENT PACKAGE PATH or CURRENT PACKAGESET, DB2 skips the check for programs that are part of a plan and uses the values in these registers for package resolution.
If you set CURRENT PACKAGE PATH, DB2 uses the value of CURRENT PACKAGE PATH as the collection name list for package resolution. For example, if CURRENT PACKAGE PATH contains the list COLL1, COLL2, COLL3, COLL4, then DB2 searches for the first package that exists in the following order:
   COLL1.PROG1.timestamp
   COLL2.PROG1.timestamp
   COLL3.PROG1.timestamp
   COLL4.PROG1.timestamp
If you set CURRENT PACKAGESET and not CURRENT PACKAGE PATH, DB2 uses the value of CURRENT PACKAGESET as the collection for package resolution. For example, if CURRENT PACKAGESET contains COLL5, then DB2 uses COLL5.PROG1.timestamp for the package search.
When CURRENT PACKAGE PATH is set, the server that receives the request ignores the collection that is specified by the request and instead uses the value of CURRENT PACKAGE PATH at the server to resolve the package. Specifying a collection list with the CURRENT PACKAGE PATH special register can avoid the need to issue multiple SET CURRENT PACKAGESET statements to switch collections for the package search.
Table 67 shows examples of the relationship between the CURRENT PACKAGE PATH special register and the CURRENT PACKAGESET special register.
Table 67. Scope of CURRENT PACKAGE PATH

Example:
   SET CURRENT PACKAGESET
   SELECT ... FROM T1 ...
What happens: The collection in PACKAGESET determines which package is invoked.

Example:
   SET CURRENT PACKAGE PATH
   SELECT ... FROM T1 ...
What happens: The collections in PACKAGE PATH determine which package is invoked.

Example:
   SET CURRENT PACKAGESET
   SET CURRENT PACKAGE PATH
   SELECT ... FROM T1 ...
What happens: The collections in PACKAGE PATH determine which package is invoked.

Example:
   SET CURRENT PACKAGE PATH
   CONNECT TO S2 ...
   SELECT ... FROM T1 ...
What happens: PACKAGE PATH at server S2 is an empty string because it has not been explicitly set. The values from the PKLIST bind option of the plan that is at the requester determine which package is invoked.1

Example:
   SET CURRENT PACKAGE PATH = A,B
   CONNECT TO S2 ...
   SET CURRENT PACKAGE PATH = X,Y
   SELECT ... FROM T1 ...
What happens: The collections in PACKAGE PATH that are set at server S2 determine which package is invoked.

Example:
   SET CURRENT PACKAGE PATH
   SELECT ... FROM S2.QUAL.T1 ...
What happens: Three-part table name. On implicit connection to server S2, PACKAGE PATH at server S2 is inherited from the local server. The collections in PACKAGE PATH at server S2 determine which package is invoked.

Notes:
1. When CURRENT PACKAGE PATH is set at the requester (and not at the remote server), DB2 passes one collection at a time from the list of collections to the remote server until a package is found or until the end of the list. Each time a package is not found at the server, DB2 returns an error to the requester. The requester then sends the next collection in the list to the remote server.
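As a concrete illustration (the collection names are placeholders), an embedded application could set the special registers as follows before issuing the SQL statements whose packages it wants resolved from those collections:

EXEC SQL SET CURRENT PACKAGE PATH = 'COLL1', 'COLL2', 'COLL3', 'COLL4' END-EXEC.
EXEC SQL SET CURRENT PACKAGESET = 'COLL5' END-EXEC.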
Identifying a package version: Sometimes, however, you want to have more than one package with the same name available to your plan. The VERSION option makes that possible. Using VERSION identifies your program with a specific version of a package. If you bind the plan with PKLIST (COLLECT.*), then you can do this:
Step 1. For Version 1: Precompile program 1, using VERSION(1). For Version 2: Precompile program 2, using VERSION(2).
Step 2. For Version 1: Bind the DBRM with the collection name COLLECT and your chosen package name (say, PACKA). For Version 2: Bind the DBRM with the collection name COLLECT and package name PACKA.
Step 3. For Version 1: Link-edit program 1 into your application. For Version 2: Link-edit program 2 into your application.
Step 4. For Version 1: Run the application; it uses program 1 and PACKA, VERSION 1. For Version 2: Run the application; it uses program 2 and PACKA, VERSION 2.
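For example, after precompiling program 1 with the option VERSION(1), step 2 could be performed with a bind subcommand like the following sketch, where the DBRM member name PACKA matches the package name used in the steps above:

BIND PACKAGE(COLLECT) MEMBER(PACKA)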
You can do that with many versions of the program, without having to rebind the application plan. Neither do you have to rename the plan or change any RUN subcommands that use it.

Setting the program level: To override DB2's construction of the consistency token, use the LEVEL(aaaa) option. DB2 uses the value you choose for aaaa to generate the consistency token. Although this method is not recommended for general use, and the DSNH CLIST and the DB2 Program Preparation panels do not support it, it allows you to do the following:
1. Change the source code (but not the SQL statements) in the DB2 precompiler output of a bound program.
2. Compile and link-edit the changed program.
3. Run the application without rebinding a plan or package.
A package that runs under a stored procedure or user-defined function is a package whose associated program meets one of the following conditions:
v The program is called by a stored procedure or user-defined function.
v The program is in a series of nested calls that start with a stored procedure or user-defined function.
The combination of the DYNAMICRULES value and the run-time environment determines the values for the dynamic SQL attributes. That set of attribute values is called the dynamic SQL statement behavior. The four behaviors are:
v Run behavior
v Bind behavior
v Define behavior
v Invoke behavior
Table 68 shows the combination of DYNAMICRULES value and run-time environment that yields each dynamic SQL behavior.
Table 68. How DYNAMICRULES and the run-time environment determine dynamic SQL statement behavior

DYNAMICRULES value   Behavior of dynamic SQL statements in a   Behavior of dynamic SQL statements in a user-defined
                     stand-alone program environment           function or stored procedure environment
BIND                 Bind behavior                             Bind behavior
RUN                  Run behavior                              Run behavior
DEFINEBIND           Bind behavior                             Define behavior
DEFINERUN            Run behavior                              Define behavior
INVOKEBIND           Bind behavior                             Invoke behavior
INVOKERUN            Run behavior                              Invoke behavior
Note: The BIND and RUN values can be specified for packages and plans. The other values can be specified only for packages.
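For example, the following subcommands (the collection, member, and plan names are illustrative) request define behavior for dynamic SQL that runs under a stored procedure package, and run behavior for a plan:

  BIND PACKAGE(MYCOLL) MEMBER(MYSPPROG) DYNAMICRULES(DEFINEBIND)
  BIND PLAN(MYPLAN) PKLIST(MYCOLL.*) DYNAMICRULES(RUN)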
Table 69 shows the dynamic SQL attribute values for each type of dynamic SQL behavior.
Table 69. Definitions of dynamic SQL statement behaviors

Settings for the dynamic SQL attributes, by behavior:

Authorization ID
  Bind behavior: Plan or package owner
  Run behavior: Current SQLID (see note 2)
  Define behavior: User-defined function or stored procedure owner
  Invoke behavior: Authorization ID of invoker (see note 1)
Default qualifier for unqualified objects
  Bind behavior: Bind OWNER or QUALIFIER value
  Run behavior: Current SQLID
  Define behavior: User-defined function or stored procedure owner
  Invoke behavior: Authorization ID of invoker
CURRENT SQLID (see note 2)
  Bind behavior: Not applicable
  Run behavior: Applies
  Define behavior: Not applicable
  Invoke behavior: Not applicable
Source of application programming options
  Bind behavior: Determined by DSNHDECP parameter DYNRULS (see note 3)
  Run behavior: Install panel DSNTIPF
  Define behavior: Determined by DSNHDECP parameter DYNRULS (see note 3)
  Invoke behavior: Determined by DSNHDECP parameter DYNRULS (see note 3)
Table 69. Definitions of dynamic SQL statement behaviors (continued)

Can execute GRANT, REVOKE, CREATE, ALTER, DROP, RENAME?
  Bind behavior: No
  Run behavior: Yes
  Define behavior: No
  Invoke behavior: No

Notes:
1. If the invoker is the primary authorization ID of the process or the CURRENT SQLID value, secondary authorization IDs will also be checked if they are needed for the required authorization. Otherwise, only one ID, the ID of the invoker, is checked for the required authorization.
2. DB2 uses the value of CURRENT SQLID as the authorization ID for dynamic SQL statements only for plans and packages that have run behavior. For the other dynamic SQL behaviors, DB2 uses the authorization ID that is associated with each dynamic SQL behavior, as shown in this table. The value to which CURRENT SQLID is initialized is independent of the dynamic SQL behavior. For stand-alone programs, CURRENT SQLID is initialized to the primary authorization ID. See Table 41 on page 342 and Table 79 on page 647 for information about initialization of CURRENT SQLID for user-defined functions and stored procedures. You can execute the SET CURRENT SQLID statement to change the value of CURRENT SQLID for packages with any dynamic SQL behavior, but DB2 uses the CURRENT SQLID value only for plans and packages with run behavior.
3. The value of DSNHDECP parameter DYNRULS, which you specify in field USE FOR DYNAMICRULES in installation panel DSNTIP4, determines whether DB2 uses the SQL statement processing options or the application programming defaults for dynamic SQL statements. See Options for SQL statement processing on page 483 for more information.
For more information about DYNAMICRULES, see Chapter 2 of DB2 SQL Reference and Part 3 of DB2 Command Reference.
Determining the optimal authorization cache size: When DB2 determines that you have the EXECUTE privilege on a plan, package collection, stored procedure, or user-defined function, DB2 can cache your authorization ID. When you run the plan, package, stored procedure, or user-defined function, DB2 can check your authorization more quickly.
Determining the authorization cache size for plans: The CACHESIZE option (optional) allows you to specify the size of the cache to acquire for the plan. DB2 uses this cache for caching the authorization IDs of those users that are running a plan. An authorization ID can take up to 128 bytes of storage. DB2 uses the CACHESIZE value to determine the amount of storage to acquire for the authorization cache. DB2 acquires storage from the EDM storage pool. The default CACHESIZE value is 1024 or the size set at installation time.
The size of the cache you specify depends on the number of individual authorization IDs actively using the plan. Required overhead takes 32 bytes, and each authorization ID takes up 8 bytes of storage. The minimum cache size is 256 bytes (enough for 28 entries and overhead information) and the maximum is 4096 bytes (enough for 508 entries and overhead information). You should specify size in multiples of 256 bytes; otherwise, the specified value rounds up to the next highest value that is a multiple of 256.
If you run the plan infrequently, or if authority to run the plan is granted to PUBLIC, you might want to turn off caching for the plan so that DB2 does not use unnecessary storage. To do this, specify a value of 0 for the CACHESIZE option.
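As a sketch of the arithmetic, a plan that about 150 authorization IDs run concurrently needs at least 32 + (150 x 8) = 1232 bytes, which rounds up to a CACHESIZE of 1280. A plan whose execute authority is granted to PUBLIC can instead be bound with CACHESIZE(0). The plan and collection names below are illustrative:

  BIND PLAN(BUSYPLAN) PKLIST(MYCOLL.*) CACHESIZE(1280)
  BIND PLAN(PUBPLAN) PKLIST(MYCOLL.*) CACHESIZE(0)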
Any plan that you run repeatedly is a good candidate for tuning using the CACHESIZE option. Also, if you have a plan that a large number of users run concurrently, you might want to use a larger CACHESIZE.
Determining the authorization cache size for packages: DB2 provides a single package authorization cache for an entire DB2 subsystem. The DB2 installer sets the size of the package authorization cache by entering a size in field PACKAGE AUTH CACHE of DB2 installation panel DSNTIPP. A 32-KB authorization cache is large enough to hold authorization information for about 375 package collections. See DB2 Installation Guide for more information about setting the size of the package authorization cache.
Determining the authorization cache size for stored procedures and user-defined functions: DB2 provides a single routine authorization cache for an entire DB2 subsystem. The routine authorization cache stores a list of authorization IDs that have the EXECUTE privilege on user-defined functions or stored procedures. The DB2 installer sets the size of the routine authorization cache by entering a size in field ROUTINE AUTH CACHE of DB2 installation panel DSNTIPP. A 32-KB authorization cache is large enough to hold authorization information for about 380 stored procedures or user-defined functions. See DB2 Installation Guide for more information about setting the size of the routine authorization cache.
Specifying the SQL rules: Not only does SQLRULES specify the rules under which a type 2 CONNECT statement executes, but it also sets the initial value of the special register CURRENT RULES when the database server is the local DB2. When the server is not the local DB2, the initial value of CURRENT RULES is DB2. After binding a plan, you can change the value in CURRENT RULES in an application program using the statement SET CURRENT RULES.
CURRENT RULES determines the SQL rules, DB2 or SQL standard, that apply to SQL behavior at run time. For example, the value in CURRENT RULES affects the behavior of defining check constraints using the statement ALTER TABLE on a populated table:
v If CURRENT RULES has a value of STD and no existing rows in the table violate the check constraint, DB2 adds the constraint to the table definition. Otherwise, an error occurs and DB2 does not add the check constraint to the table definition. If the table contains data and is already in a check pending status, the ALTER TABLE statement fails.
v If CURRENT RULES has a value of DB2, DB2 adds the constraint to the table definition, defers the enforcing of the check constraints, and places the table space or partition in check pending status.
You can use the statement SET CURRENT RULES to control the action that the statement ALTER TABLE takes. Assuming that the value of CURRENT RULES is initially STD, the following SQL statements change the SQL rules to DB2, add a check constraint, defer validation of that constraint and place the table in check pending status, and restore the rules to STD.
EXEC SQL
  SET CURRENT RULES = 'DB2';
EXEC SQL
  ALTER TABLE DSN8810.EMP
    ADD CONSTRAINT C1 CHECK (BONUS <= 1000.0);
EXEC SQL
  SET CURRENT RULES = 'STD';
See Using check constraints on page 259 for information about check constraints. You can also use CURRENT RULES in host variable assignments, for example:
SET :XRULE = CURRENT RULES;
You can also use CURRENT RULES as the argument of a search-condition, for example:
SELECT * FROM SAMPTBL WHERE COL1 = CURRENT RULES;
The following scenario illustrates thread association for a task that runs program MAIN:

Sequence of SQL statements and events:
1. EXEC CICS START TRANSID(MAIN)
   TRANSID(MAIN) executes program MAIN.
2. EXEC SQL SELECT...
   Program MAIN issues an SQL SELECT statement. The default dynamic plan exit selects plan MAIN.
3. EXEC CICS LINK PROGRAM(PROGA)
4. EXEC SQL SELECT...
   DB2 does not call the default dynamic plan exit, because the program does not issue a sync point. The plan is MAIN.
CICS (continued)

Sequence of SQL statements and events (continued):
5. EXEC CICS LINK PROGRAM(PROGB)
6. EXEC SQL SELECT...
   DB2 does not call the default dynamic plan exit, because the program does not issue a sync point. The plan is MAIN and the program uses package PKGB.
7. EXEC CICS SYNCPOINT
   DB2 calls the dynamic plan exit when the next SQL statement executes.
8. EXEC CICS LINK PROGRAM(PROGC)
9. EXEC SQL SELECT...
   DB2 calls the default dynamic plan exit and selects PLANC.
10. EXEC SQL SET CURRENT SQLID = 'ABC'
11. EXEC CICS SYNCPOINT
    DB2 does not call the dynamic plan exit when the next SQL statement executes, because the previous statement modifies the special register CURRENT SQLID.
12. EXEC CICS RETURN
    Control returns to program PROGB.
13. EXEC SQL SELECT...
    SQLCODE -815 occurs because the plan is currently PLANC and the program is PROGB.
v Running the BIND, REBIND, and FREE subcommands on DB2 plans and packages for your program
v Using SPUFI (SQL Processor Using File Input) to test some of the SQL functions in the program

The DSN command processor runs with the TSO terminal monitor program (TMP). Because the TMP runs in either foreground or background, DSN applications run interactively or as batch jobs. The DSN command processor can provide these services to a program that runs under it:
v Automatic connection to DB2
v Attention key support
v Translation of return codes into error messages

Limitations of the DSN command processor: When using DSN services, your application runs under the control of DSN. Because TSO executes the ATTACH macro to start DSN, and DSN executes the ATTACH macro to start a part of itself, your application gains control two task levels below that of TSO. Because your program depends on DSN to manage your connection to DB2:
v If DB2 is down, your application cannot begin to run.
v If DB2 terminates, your application also terminates.
v An application can use only one plan.
If these limitations are too severe, consider having your application use the call attachment facility or Resource Recovery Services attachment facility. For more information about these attachment facilities, see Chapter 30, Programming for the call attachment facility, on page 861 and Chapter 31, Programming for the Resource Recovery Services attachment facility, on page 893.

DSN return code processing: At the end of a DSN session, register 15 contains the highest value placed there by any DSN subcommand used in the session or by any program run by the RUN subcommand. Your run-time environment might format that value as a return code. The value does not, however, originate in DSN.
. . .            (Here the program runs and might prompt you for input)
DSN prompt:      DSN
Enter:           END
TSO prompt:      READY
This sequence also works in ISPF option 6. You can package this sequence in a CLIST. DB2 does not support access to multiple DB2 subsystems from a single address space. The PARMS keyword of the RUN subcommand allows you to pass parameters to the run-time processor and to your application program:
PARMS (/D01, D02, D03)
The slash (/) indicates that you are passing parameters. For some languages, you pass parameters and run-time options in the form PARMS(parameters/run-time-options). In those environments, an example of the PARMS keyword might be:
PARMS (D01, D02, D03/)
Check your host language publications for the correct form of the PARMS option.
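For example, the DSN sequence and the PARMS keyword can be packaged in a CLIST of the following form; the program, plan, and library names are illustrative, and the PARMS string is shown here in its quoted form:

  DSN SYSTEM(DSN)
  RUN PROGRAM(SAMPLEPG) PLAN(SAMPLEPG) LIB('USER.RUNLIB.LOAD') -
      PARMS('/D01,D02,D03')
  END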
v The JOB option identifies this as a job card. The USER option specifies the DB2 authorization ID of the user.
v The EXEC statement calls the TSO Terminal Monitor Program (TMP).
v The STEPLIB statement specifies the library in which the DSN Command Processor load modules and the default application programming defaults module, DSNHDECP, reside. It can also reference the libraries in which user applications, exit routines, and the customized DSNHDECP module reside. The customized DSNHDECP module is created during installation.
v Subsequent DD statements define additional files needed by your program.
v The DSN command connects the application to a particular DB2 subsystem.
v The RUN subcommand specifies the name of the application program to run.
v The PLAN keyword specifies the plan name.
v The LIB keyword specifies the library the application should access.
v The PARMS keyword passes parameters to the run-time processor and the application program.
v END ends the DSN command processor.
(A minimal sketch of a job of this form appears after the usage notes that follow.)
Usage notes:
v Keep DSN job steps short.
v We recommend that you not use DSN to call the EXEC command processor to run CLISTs that contain ISPEXEC statements; results are unpredictable.
v If your program abends or gives you a non-zero return code, DSN terminates.
v You can use a group attachment name instead of a specific ssid to connect to a member of a data sharing group. For more information, see DB2 Data Sharing: Planning and Administration.
For more information about using the TSO TMP in batch mode, see z/OS TSO/E User's Guide.
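A minimal sketch of the batch job outlined above follows; the job name, user ID, program, plan, and library names are illustrative, and prefix stands for your installation's data set prefix:

  //RUNSAMP  JOB (ACCOUNT),'NAME',USER=DB2USER
  //TMP      EXEC PGM=IKJEFT01,DYNAMNBR=20
  //STEPLIB  DD DISP=SHR,DSN=prefix.SDSNEXIT
  //         DD DISP=SHR,DSN=prefix.SDSNLOAD
  //SYSTSPRT DD SYSOUT=*
  //SYSTSIN  DD *
   DSN SYSTEM(DSN)
   RUN PROGRAM(SAMPLEPG) PLAN(SAMPLEPG) LIB('USER.RUNLIB.LOAD')
   END
  /*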
IMS
To run a message-driven program: First, be sure you can respond to the program's interactive requests for data and that you can recognize the expected results. Then, enter the transaction code associated with the program. Users of the transaction code must be authorized to run the program.
To run a non-message-driven program: Submit the job control statements needed to run the program.
CICS
To run a program: First, ensure that the corresponding entries in the SNT and RACF* control areas allow run authorization for your application. The system administrator is responsible for these functions; see Part 3 (Volume 1) of DB2 Administration Guide for more information. Also, be sure to define to CICS the transaction code assigned to your program and the program itself.
Make a new copy of the program: Issue the NEWCOPY command if CICS has not been reinitialized since the program was last bound and compiled.
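For example, assuming that the program is defined to CICS under the illustrative name TESTC01, the NEWCOPY command has the form:

  CEMT SET PROGRAM(TESTC01) NEWCOPY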
The SYSEXEC data set contains your REXX application, and the SYSTSIN data set contains the command that you use to invoke the application.
or assembler. The procedures are in prefix.SDSNSAMP member DSNTIJMV, which installs the procedures.
Table 70. Procedures for precompiling programs

Language                        Procedure   Invocation included in...
High-level assembler            DSNHASM     DSNTEJ2A
C                               DSNHC       DSNTEJ2D
C++                             DSNHCPP     DSNTEJ2E
C++ (see note 2)                DSNHCPP2    N/A
Enterprise COBOL for z/OS       DSNHICOB    DSNTEJ2C (see note 1)
Fortran                         DSNHFOR     DSNTEJ2F
PL/I                            DSNHPLI     DSNTEJ2P
SQL                             DSNHSQL     DSNTEJ63

Notes:
1. You must customize these programs to invoke the procedures listed in this table. For information about how to do that, see Part 2 of DB2 Installation Guide.
2. This procedure demonstrates how you can prepare an object-oriented program that consists of two data sets or members, both of which contain SQL.
If you use the PL/I macro processor, you must not use the PL/I *PROCESS statement in the source to pass options to the PL/I compiler. You can specify the needed options on the PARM.PLI= parameter of the EXEC statement in the DSNHPLI procedure.
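For example, a tailored invocation of the following form (the step name and the option list are illustrative) passes the macro preprocessor and listing options to the PL/I compile step of the procedure:

  //PREP EXEC DSNHPLI,PARM.PLI='MACRO,SOURCE,XREF'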
The language interface module member must be DSNELI, except for FORTRAN, in which case the member must be DSNHFT.
IMS
//LKED.SYSIN DD *
  INCLUDE SYSLIB(DFSLI000)
  ENTRY (specification)
/*
DFSLI000 is the module for DL/I batch attach. The ENTRY specification varies depending on the host language. Include one of the following:
  DLITCBL, for COBOL applications
  PLICALLA, for PL/I applications
  Your program's name, for assembler language applications.
CICS
//LKED.SYSIN DD *
  INCLUDE SYSLIB(DSNCLI)
/*
For more information on required CICS modules, see Step 2: Compile (or assemble) and link-edit the application on page 494.
Table 71. DDNAME list entries (continued)

Entry   Standard ddname   Usage
2       Not applicable
3       Not applicable
4       SYSLIB            Library input
5       SYSIN             Source input
6       SYSPRINT          Diagnostic listing
7       Not applicable
8       SYSUT1            Work data
9       SYSUT2            Work data
10      SYSUT3            Work data
11      Not applicable
12      SYSTERM           Diagnostic listing
13      Not applicable
14      SYSCIN            Changed source output
15      Not applicable
16      DBRMLIB           DBRM output
CICS
Instead of using the DB2 Program Preparation panels to prepare your CICS program, you can tailor CICS-supplied JCL procedures to do that. To tailor a CICS procedure, you need to add some steps and change some DD statements. Make changes as needed to do the following:
v Process the program with the DB2 precompiler.
v Bind the application plan. You can do this any time after you precompile the program. You can bind the program either online by the DB2I panels or as a batch step in this or another z/OS job.
v Include a DD statement in the linkage editor step to access the DB2 load library.
v Be sure the linkage editor control statements contain an INCLUDE statement for the DB2 language interface module.
The following example illustrates the necessary changes. This example assumes the use of a COBOL program. For any other programming language, change the CICS procedure name and the DB2 precompiler options.

//TESTC01  JOB
//*
//*********************************************************
//* DB2 PRECOMPILE THE COBOL PROGRAM
//*********************************************************
//PC       EXEC PGM=DSNHPC,
//         PARM='HOST(COB2),XREF,SOURCE,FLAG(I),APOST'
//STEPLIB  DD DISP=SHR,DSN=prefix.SDSNEXIT
//         DD DISP=SHR,DSN=prefix.SDSNLOAD
//DBRMLIB  DD DISP=OLD,DSN=USER.DBRMLIB.DATA(TESTC01)
//SYSCIN   DD DSN=&&DSNHOUT,DISP=(MOD,PASS),UNIT=SYSDA,
//         SPACE=(800,(500,500))
//SYSLIB   DD DISP=SHR,DSN=USER.SRCLIB.DATA
//SYSPRINT DD SYSOUT=*
//SYSTERM  DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSUT1   DD SPACE=(800,(500,500),,,ROUND),UNIT=SYSDA
//SYSUT2   DD SPACE=(800,(500,500),,,ROUND),UNIT=SYSDA
//SYSIN    DD DISP=SHR,DSN=USER.SRCLIB.DATA(TESTC01)
//*
CICS (continued)

//********************************************************************
//*** BIND THIS PROGRAM.
//********************************************************************
//BIND     EXEC PGM=IKJEFT01,
//         COND=((4,LT,PC))
//STEPLIB  DD DISP=SHR,DSN=prefix.SDSNEXIT
//         DD DISP=SHR,DSN=prefix.SDSNLOAD
//DBRMLIB  DD DISP=OLD,DSN=USER.DBRMLIB.DATA(TESTC01)
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSTSIN  DD *
 DSN S(DSN)
 BIND PLAN(TESTC01) MEMBER(TESTC01) ACTION(REP) RETAIN ISOLATION(CS)
 END
//********************************************************************
//* COMPILE THE COBOL PROGRAM
//********************************************************************
//CICS     EXEC DFHEITVL
//TRN.SYSIN DD DSN=&&DSNHOUT,DISP=(OLD,DELETE)
//LKED.SYSLMOD DD DSN=USER.RUNLIB.LOAD
//LKED.CICSLOAD DD DISP=SHR,DSN=prefix.SDFHLOAD
//LKED.SYSIN DD *
 INCLUDE CICSLOAD(DSNCLI)
 NAME TESTC01(R)
//********************************************************************
The procedure accounts for these steps:
Step 1. Precompile the program.
Step 2. Bind the application plan.
Step 3. Call the CICS procedure to translate, compile, and link-edit a COBOL program. This procedure has several options you need to consider.
Step 4. The output of the DB2 precompiler becomes the input to the CICS command language translator.
Step 5. Reflect an application load library in the data set name of the SYSLMOD DD statement. You must include the name of this load library in the DFHRPL DD statement of the CICS run-time JCL.
Step 6. Name the CICS load library that contains the module DSNCLI.
Step 7. Direct the linkage editor to include the CICS-DB2 language interface module (DSNCLI).
In this example, the order of the various control sections (CSECTs) is of no concern because the structure of the procedure automatically satisfies any order requirements.
For more information about the procedure DFHEITVL, other CICS procedures, or CICS requirements for application programs, please see the appropriate CICS manual.
If you are preparing a particularly large or complex application, you can use one of the last two techniques mentioned previously. For example, if your program requires four of your own link-edit include libraries, you cannot prepare the program with DB2I, because DB2I limits the number of include libraries to three, plus language, IMS or CICS, and DB2 libraries. Therefore, you would need another preparation method. Programs using the call attachment facility can use either of the last two techniques mentioned previously. Be careful to use the correct language interface.
v More than one data set or member contains SQL statements. You must precompile the contents of each data set or member separately, but the prelinker must receive all of the compiler output together.
JCL procedure DSNHCPP2, which is in member DSNTIJMV of data set DSN810.SDSNSAMP, shows you one way to do this for C++.
DB2I help
The online help facility enables you to read information about how to use DB2I in an online DB2 book from a DB2I panel. It contains detailed information about the fields of each of the DB2 Program Preparation panels. For instructions on setting up DB2 online help, see the discussion of setting up DB2 online help in Part 2 of DB2 Installation Guide. If your site makes use of CD-ROM updates, you can make the updated books accessible from DB2I. Select Option 10 on the DB2I Defaults panel and enter the new book data set names. You must have write access to prefix.SDSNCLST to perform this function.
To access DB2I HELP, press the PF key that is associated with the HELP function. The default PF key for HELP is PF 1; however, your location might have assigned a different PF key for HELP.
SSID: DSN
Select one of the following DB2 functions and press ENTER.

1  SPUFI                 (Process SQL statements)
2  DCLGEN                (Generate SQL and source language declarations)
3  PROGRAM PREPARATION   (Prepare a DB2 application program to run)
4  PRECOMPILE            (Invoke DB2 precompiler)
5  BIND/REBIND/FREE      (BIND, REBIND, or FREE plans or packages)
6  RUN                   (RUN an SQL program)
7  DB2 COMMANDS          (Issue DB2 commands)
8  UTILITIES             (Invoke DB2 utilities)
D  DB2I DEFAULTS         (Set global parameters)
X  EXIT                  (Leave DB2I)
Figure 152. Initiating program preparation through DB2I. Specify Program Preparation on the DB2I Primary Option Menu.
The following descriptions explain the functions on the DB2I Primary Option Menu.
1 SPUFI
   Lets you develop and execute one or more SQL statements interactively. For further information, see Chapter 5, Using SPUFI to execute SQL from your workstation, on page 59.
2 DCLGEN
   Lets you generate C, COBOL, or PL/I data declarations of tables. For further information, see Chapter 8, Generating declarations for your tables using DCLGEN, on page 131.
3 PROGRAM PREPARATION
   Lets you prepare and run an application program. For more information, see DB2 Program Preparation panel on page 521.
4 PRECOMPILE
   Lets you convert embedded SQL statements into statements that your host language can process. For further information, see Precompile panel on page 528.
5 BIND/REBIND/FREE
   Lets you bind, rebind, or free a package or application plan. For more information, see Bind/Rebind/Free selection panel on page 531.
6 RUN
   Lets you run an application program in a TSO or batch environment. For more information, see The Run panel on page 555.
7 DB2 COMMANDS
   Lets you issue DB2 commands. For more information about DB2 commands, see Part 3 of DB2 Command Reference.
8 UTILITIES
   Lets you call DB2 utility programs. For more information, see DB2 Utility Guide and Reference.
D DB2I DEFAULTS
   Lets you set DB2I defaults. See DB2I Defaults Panel 1 on page 526.
X EXIT
   Lets you exit DB2I.
To prepare a new application, beginning with precompilation and working through each of the subsequent preparation steps, begin by selecting the option that corresponds to the Program Preparation panel.
Table 72 describes each of the panels you will need to use to prepare an application. The DB2I help contains detailed descriptions of each panel.

Table 72. DB2I panels used for program preparation

DB2 Program Preparation panel on page 521
   The DB2 Program Preparation panel lets you choose specific program preparation functions to perform. For the functions you choose, you can also display the associated panels to specify options for performing those functions. This panel also lets you change the DB2I default values and perform other precompile and prelink functions.
DB2I Defaults Panel 1 on page 526
   DB2I Defaults Panel 1 lets you change many of the system defaults that are set at DB2 installation time.
DB2I Defaults Panel 2 on page 527
   DB2I Defaults Panel 2 lets you change your default job statement and set additional COBOL options.
Precompile panel on page 528
   The Precompile panel lets you specify values for precompile functions. You can reach this panel directly from the DB2I Primary Option Menu, or from the DB2 Program Preparation panel. If you reach this panel from the Program Preparation panel, many of the fields contain values from the Primary and Precompile panels.
Bind Package panel on page 533
   The Bind Package panel lets you change many options when you bind a package. You can reach this panel directly from the DB2I Primary Option Menu, or from the DB2 Program Preparation panel. If you reach this panel from the DB2 Program Preparation panel, many of the fields contain values from the Primary and Precompile panels.
Bind Plan panel on page 536
   The Bind Plan panel lets you change options when you bind an application plan. You can reach this panel directly from the DB2I Primary Option Menu, or as a part of the program preparation process. This panel also follows the Bind Package panels.
The Defaults for Bind or Rebind Package or Plan panels on page 546
   These panels let you change the defaults for BIND or REBIND PACKAGE or PLAN.
System Connection Types panel on page 550
   The System Connection Types panel lets you specify a system connection type. This panel displays if you choose to enable or disable connections on the Bind or Rebind Package or Plan panels.
Panels for entering lists of values on page 552
   These panels are list panels that let you enter or modify an unlimited number of values. A list panel looks similar to an ISPF edit session and lets you scroll and use a limited set of commands.
Table 72. DB2I panels used for program preparation (continued)

Program Preparation: Compile, Link, and Run panel on page 553
   This panel lets you perform the last two steps in the program preparation process (compile and link-edit). It also lets you do the PL/I MACRO PHASE for programs that require this option. For TSO programs, the panel also lets you run programs.
Table 73 describes additional panels that you can use to Rebind and Free packages and plans. It also describes the Run panel, which you can use to run application programs that have already been prepared.
Table 73. DB2I panels used to Rebind and Free plans and packages and used to Run application programs

Bind/Rebind/Free selection panel on page 531
   The BIND/REBIND/FREE panel lets you select the BIND, REBIND, or FREE, PLAN, PACKAGE, or TRIGGER PACKAGE process that you need.
Rebind Package panel on page 539
   The Rebind Package panel lets you change options when you rebind a package.
Rebind Trigger Package panel on page 540
   The Rebind Trigger Package panel lets you change options when you rebind a trigger package.
Rebind Plan panel on page 542
   The Rebind Plan panel lets you change options when you rebind an application plan.
Free Package panel on page 544
   The Free Package panel lets you change options when you free a package.
Free Plan panel on page 545
   The Free Plan panel lets you change options when you free an application plan.
The Run panel on page 555
   The Run panel lets you start an application program. You should use this panel if you have already prepared the program and you only want to run it. You can also run a program by using the Program Prep: Compile, Prelink, Link, and Run panel.
TSO and batch For TSO programs, you can use the program preparation programs to control the host language run-time processor and the program itself. The Program Preparation panel also lets you change the DB2I default values (see page 526), and perform other precompile and prelink functions. On the DB2 Program Preparation panel, shown in Figure 153, enter the name of the source program data set (this example uses SAMPLEPG.COBOL) and specify the other options you want to include. When finished, press ENTER to view the next panel.
DSNEPP01                   DB2 PROGRAM PREPARATION                SSID: DSN
COMMAND ===>_

Enter the following:
 1  INPUT DATA SET NAME .... ===> SAMPLEPG.COBOL
 2  DATA SET NAME QUALIFIER  ===> TEMP        (For building data set names)
 3  PREPARATION ENVIRONMENT  ===> FOREGROUND  (FOREGROUND, BACKGROUND, EDITJCL)
 4  RUN TIME ENVIRONMENT ... ===> TSO         (TSO, CAF, CICS, IMS, RRSAF)
 5  OTHER DSNH OPTIONS ..... ===>             (Optional DSNH keywords)

Select functions:                 Display panel?      Perform function?
 6  CHANGE DEFAULTS ........ ===> Y  (Y/N)
 7  PL/I MACRO PHASE ....... ===> N  (Y/N)      ===> N  (Y/N)
 8  PRECOMPILE ............. ===> Y  (Y/N)      ===> Y  (Y/N)
 9  CICS COMMAND TRANSLATION                    ===> N  (Y/N)
10  BIND PACKAGE ........... ===> Y  (Y/N)      ===> Y  (Y/N)
11  BIND PLAN............... ===> Y  (Y/N)      ===> Y  (Y/N)
12  COMPILE OR ASSEMBLE .... ===> Y  (Y/N)      ===> Y  (Y/N)
13  PRELINK................. ===> N  (Y/N)      ===> N  (Y/N)
14  LINK.................... ===> N  (Y/N)      ===> Y  (Y/N)
15  RUN..................... ===> N  (Y/N)      ===> Y  (Y/N)
Figure 153. The DB2 Program Preparation panel. Enter the source program data set name and other options.
The following explains the functions on the DB2 Program Preparation panel and how to fill in the necessary fields in order to start program preparation. 1 INPUT DATA SET NAME Lets you specify the input data set name. The input data set name can be a PDS or a sequential data set, and can also include a member name. If you do not enclose the data set name in apostrophes, a standard TSO prefix (user ID) qualifies the data set name. The input data set name you specify is used to precompile, bind, link-edit, and run the program. 2 DATA SET NAME QUALIFIER Lets you qualify temporary data set names involved in the program preparation process. Use any character string from 1 to 8 characters that conforms to normal TSO naming conventions. (The default is TEMP.) For programs that you prepare in the background or that use EDITJCL for the PREPARATION ENVIRONMENT option, DB2 creates a data set named tsoprefix.qualifier.CNTL to contain the program preparation JCL. The name tsoprefix represents the prefix TSO assigns, and qualifier represents the value you enter in the DATA SET NAME QUALIFIER field. If a data set with this name already exists, DB2 deletes it.
3 PREPARATION ENVIRONMENT
   Lets you specify whether program preparation occurs in the foreground or background. You can also specify EDITJCL, in which case you are able to edit and then submit the job. Use:
   FOREGROUND to use the values you specify on the Program Preparation panel and to run immediately.
   BACKGROUND to create and submit a file containing a DSNH CLIST that runs immediately using the JOB control statement from either the DB2I Defaults panel or your site's SUBMIT exit. The file is saved.
   EDITJCL to create and open a file containing a DSNH CLIST in edit mode. You can then submit the CLIST or save it.
4 RUN TIME ENVIRONMENT
   Lets you specify the environment (TSO, CAF, CICS, IMS, RRSAF) in which your program runs. All programs are prepared under TSO, but can run in any of the environments. If you specify CICS, IMS, or RRSAF, then you must set the RUN field to NO because you cannot run such programs from the Program Preparation panel. If you set the RUN field to YES, you can specify only TSO or CAF. (Batch programs also run under the TSO Terminal Monitor Program. You therefore need to specify TSO in this field for batch programs.)
5 OTHER DSNH OPTIONS
   Lets you specify a list of DSNH options that affect the program preparation process, and that override options specified on other panels. If you are using CICS, these can include options you want to specify to the CICS command translator. If you specify options in this field, separate them by commas. You can continue listing options on the next line, but the total length of the option list can be no more than 70 bytes. For more information about those options, see DSNH in Part 3 of DB2 Command Reference.
Fields 6 through 15 let you select the function to perform and to choose whether to show the DB2I panels for the functions you select. Use Y for YES, or N for NO. If you are willing to accept default values for all the steps, enter N under Display panel? for all the other preparation panels listed. To make changes to the default values, enter Y under Display panel? for any panel you want to see. DB2I then displays each of the panels that you request. After all the panels display, DB2 proceeds with the steps involved in preparing your program to run.
Variables for all functions used during program preparation are maintained separately from variables entered from the DB2I Primary Option Menu. For example, the bind plan variables you enter on the Program Preparation panel are saved separately from those on any Bind Plan panel that you reach from the Primary Option Menu.
6 CHANGE DEFAULTS
   Lets you specify whether to change the DB2I defaults. Enter Y in the Display panel? field next to this option; otherwise enter N. Minimally, you
should specify your subsystem identifier and programming language on the Defaults panel. For more information, see DB2I Defaults Panel 1 on page 526. 7 PL/I MACRO PHASE Lets you specify whether to display the Program Preparation: Compile, Link, and Run panel to control the PL/I macro phase by entering PL/I options in the OPTIONS field of that panel. That panel also displays for options COMPILE OR ASSEMBLE, LINK, and RUN. This field applies to PL/I programs only. If your program is not a PL/I program or does not use the PL/I macro processor, specify N in the Perform function field for this option, which sets the Display panel? field to the default N. For information about PL/I options, see Program Preparation: Compile, Link, and Run panel on page 553. 8 PRECOMPILE Lets you specify whether to display the Precompile panel. To see this panel enter Y in the Display panel? field next to this option; otherwise enter N. For information about the Precompile panel, see Precompile panel on page 528. 9 CICS COMMAND TRANSLATION Lets you specify whether to use the CICS command translator. This field applies to CICS programs only.
IMS and TSO If you run under TSO or IMS, ignore this step; this allows the Perform function field to default to N.
CICS If you are using CICS and have precompiled your program, you must translate your program using the CICS command translator. The command translator does not have a separate DB2I panel. You can specify translation options on the Other Options field of the DB2 Program Preparation panel, or in your source program if it is not an assembler program. Because you specified a CICS run-time environment, the Perform function column defaults to Y. Command translation takes place automatically after you precompile the program. 10 BIND PACKAGE Lets you specify whether to display the Bind Package panel. To see it, enter Y in the Display panel? field next to this option; otherwise, enter N. For information about the panel, see Bind Package panel on page 533. 11 BIND PLAN Lets you specify whether to display the Bind Plan panel. To see it, enter Y in the Display panel? field next to this option; otherwise, enter N. For information about the panel, see Bind Plan panel on page 536.
12 COMPILE OR ASSEMBLE Lets you specify whether to display the Program Preparation: Compile, Link, and Run panel. To see this panel enter Y in the Display panel? field next to this option; otherwise, enter N. For information about the panel, see Program Preparation: Compile, Link, and Run panel on page 553. 13 PRELINK Lets you use the prelink utility to make your C, C++, or Enterprise COBOL for z/OS program reentrant. This utility concatenates compile-time initialization information from one or more text decks into a single initialization unit. To use the utility, enter Y in the Display panel? field next to this option; otherwise, enter N. If you request this step, then you must also request the compile step and the link-edit step. For more information about the prelink utility, see z/OS Language Environment Programming Guide. 14 LINK Lets you specify whether to display the Program Preparation: Compile, Link, and Run panel. To see it, enter Y in the Display panel? field next to this option; otherwise, enter N. If you specify Y in the Display panel? field for the COMPILE OR ASSEMBLE option, you do not need to make any changes to this field; the panel displayed for COMPILE OR ASSEMBLE is the same as the panel displayed for LINK. You can make the changes you want to affect the link-edit step at the same time you make the changes to the compile step. For information about the panel, see Program Preparation: Compile, Link, and Run panel on page 553. 15 RUN Lets you specify whether to run your program. The RUN option is available only if you specify TSO or CAF for RUN TIME ENVIRONMENT. If you specify Y in the Display panel? field for the COMPILE OR ASSEMBLE or LINK option, you can specify N in this field, because the panel displayed for COMPILE OR ASSEMBLE and for LINK is the same as the panel displayed for RUN.
IMS and CICS IMS and CICS programs cannot run using DB2I. If you are using IMS or CICS, use N in these fields.
TSO and batch If you are using TSO and want to run your program, you must enter Y in the Perform function column next to this option. You can also indicate that you want to specify options and values to affect the running of your program, by entering Y in the Display panel column. For information on the panel, see Program Preparation: Compile, Link, and Run panel on page 553. Pressing ENTER takes you to the first panel in the series you specified, in this example to the DB2I Defaults panel. If, at any point in your progress from panel to
panel, you press the END key, you return to this first panel, from which you can change your processing specifications. Asterisks (*) in the Display panel? column of rows 7 through 14 indicate which panels you have already examined. You can see a panel again by writing a Y over an asterisk.
Change defaults as desired:
 1  DB2 NAME ............. ===> DSN       (Subsystem identifier)
 2  DB2 CONNECTION RETRIES ===> 0         (How many retries for DB2 connection)
 3  APPLICATION LANGUAGE   ===> IBMCOB    (ASM, C, CPP, IBMCOB, FORTRAN, PLI)
 4  LINES/PAGE OF LISTING  ===> 60        (A number from 5 to 999)
 5  MESSAGE LEVEL ........ ===> I         (Information, Warning, Error, Severe)
 6  SQL STRING DELIMITER   ===> DEFAULT   (DEFAULT, ' or ")
 7  DECIMAL POINT ........ ===> .         (. or ,)
 8  STOP IF RETURN CODE >= ===> 8         (Lowest terminating return code)
 9  NUMBER OF ROWS         ===> 20        (For ISPF Tables)
10  CHANGE HELP BOOK NAMES?===> NO        (YES to change HELP data set names)
The following explains the fields on DB2I Defaults Panel 1.
1 DB2 NAME
   Lets you specify the DB2 subsystem that processes your DB2I requests. If you specify a different DB2 subsystem, its identifier displays in the SSID (subsystem identifier) field located at the top, right side of your screen. The default is DSN.
2 DB2 CONNECTION RETRIES
   Lets you specify the number of additional times to attempt to connect to DB2, if DB2 is not up when the program issues the DSN command. The program preparation process does not use this option. Use a number from 0 to 120. The default is 0. Connections are attempted at 30-second intervals.
3 APPLICATION LANGUAGE
   Lets you specify the default programming language for your application program. You can specify any of the following:
   ASM       For High Level Assembler/z/OS
   C         For C language
   CPP       For C++
   IBMCOB    For Enterprise COBOL for z/OS and OS/390. This option is the default.
   FORTRAN   For VS Fortran
   PLI       For PL/I
   If you specify IBMCOB, DB2 prompts you for more COBOL defaults on panel DSNEOP02. See DB2I Defaults Panel 2. You cannot specify FORTRAN for IMS or CICS programs.
4 LINES/PAGE OF LISTING
   Lets you specify the number of lines to print on each page of listing or SPUFI output. The default is 60.
5 MESSAGE LEVEL
   Lets you specify the lowest level of message to return to you during the BIND phase of the preparation process. Use:
   I   For all information, warning, error, and severe error messages
   W   For warning, error, and severe error messages
   E   For error and severe error messages
   S   For severe error messages only
6 SQL STRING DELIMITER
   Lets you specify the symbol used to delimit a string in SQL statements in COBOL programs. This option is valid only when the application language is IBMCOB. Use:
   DEFAULT   To use the default defined at installation time
   '         For an apostrophe
   "         For a quotation mark
7 DECIMAL POINT
   Lets you specify how your host language source program represents decimal separators and how SPUFI displays decimal separators in its output. Use a comma (,) or a period (.). The default is a period (.).
8 STOP IF RETURN CODE >=
   Lets you specify the smallest value of the return code (from precompile, compile, link-edit, or bind) that will prevent later steps from running. Use:
   4   To stop on warnings and more severe errors.
   8   To stop on errors and more severe errors.
   The default is 8.
9 NUMBER OF ROWS
   Lets you specify the default number of input entry rows to generate on the initial display of ISPF panels. The number of rows with non-blank entries determines the number of rows that appear on later displays.
10 CHANGE HELP BOOK NAMES?
   Lets you change the name of the BookManager book you reference for online help. The default is NO.
Suppose that the default programming language is PL/I and the default number of lines per page of program listing is 60. Your program is in COBOL, so you want to change field 3, APPLICATION LANGUAGE. You also want to print 80 lines to the page, so you need to change field 4, LINES/PAGE OF LISTING, as well. Figure 154 on page 526 shows the entries that you make in DB2I Defaults Panel 1 to make these changes. In this case, pressing ENTER takes you to DB2 Defaults Panel 2.
If the application language is IBMCOB, all three fields are displayed. Otherwise, only the first field is displayed. Figure 155 shows the DB2I Defaults Panel 2 when IBMCOB is selected.
DSNEOP02                        DB2I DEFAULTS PANEL 2
COMMAND ===>_

Change defaults as desired:

 1  DB2I JOB STATEMENT:    (Optional if your site has a SUBMIT exit)
    ===> //USRT001A JOB (ACCOUNT),NAME
    ===> //*
    ===> //*
    ===> //*

    COBOL DEFAULTS:        (For IBMCOB)
 2  COBOL STRING DELIMITER ===> DEFAULT   (DEFAULT, ' or ")
 3  DBCS SYMBOL FOR DCLGEN ===> G         (G/N - Character in PIC clause)
1 DB2I JOB STATEMENT
   Lets you change your default job statement. Specify a job control statement, and optionally, a JOBLIB statement to use either in the background or the EDITJCL program preparation environment. Use a JOBLIB statement to specify run-time libraries that your application requires. If your program has a SUBMIT exit routine, DB2 uses that routine. If that routine builds a job control statement, you can leave this field blank.
2 COBOL STRING DELIMITER
   Lets you specify the symbol used to delimit a string in a COBOL statement in a COBOL application. Use:
   DEFAULT   To use the default defined at install time
   '         For an apostrophe
   "         For a quotation mark
   Leave this field blank to accept the default value.
3 DBCS SYMBOL FOR DCLGEN
   Lets you enter either G (the default) or N, to specify whether DCLGEN generates a picture clause that has the form PIC G(n) DISPLAY-1 or PIC N(n). Leave this field blank to accept the default value.
Pressing ENTER takes you to the next panel you specified on the DB2 Program Preparation panel, in this case, to the Precompile panel.
Precompile panel
The next step in the process is to precompile. Figure 152 on page 519, the DB2I Primary Option Menu, shows that you can reach the Precompile panel in two ways: you can either specify it as a part of the program preparation process from the DB2 Program Preparation panel, or you can reach it directly from the DB2I Primary Option Menu. The way you choose to reach the panel determines the default values of the fields it contains. Figure 156 on page 529 shows the Precompile panel.
PRECOMPILE
SSID: DSN
Enter precompiler data sets:
 1  INPUT DATA SET .... ===> SAMPLEPG.COBOL
 2  INCLUDE LIBRARY ... ===> SRCLIB.DATA
 3  DSNAME QUALIFIER .. ===> TEMP        (For building data set names)
 4  DBRM DATA SET ..... ===>

Enter processing options as desired:
 5  WHERE TO PRECOMPILE ===> FOREGROUND
 6  VERSION ........... ===>
 7  OTHER OPTIONS ..... ===>
Figure 156. The Precompile panel. Specify the include library, if any, that your program should use, and any other options you need.
The following explains the functions on the Precompile panel, and how to enter the fields for preparing to precompile.
1 INPUT DATA SET
   Lets you specify the data set name of the source program and SQL statements to precompile. If you reached this panel through the DB2 Program Preparation panel, this field contains the data set name specified there. You can override it on this panel if you wish. If you reached this panel directly from the DB2I Primary Option Menu, you must enter the data set name of the program you want to precompile. The data set name can include a member name. If you do not enclose the data set name with apostrophes, a standard TSO prefix (user ID) qualifies the data set name.
2 INCLUDE LIBRARY
   Lets you enter the name of a library containing members that the precompiler should include. These members can contain output from DCLGEN. If you do not enclose the name in apostrophes, a standard TSO prefix (user ID) qualifies the name. You can request additional INCLUDE libraries by entering DSNH CLIST parameters of the form PnLIB(dsname) (where n is 2, 3, or 4) on the OTHER OPTIONS field of this panel or on the OTHER DSNH OPTIONS field of the Program Preparation panel.
3 DSNAME QUALIFIER
   Lets you specify a character string that qualifies temporary data set names during precompile. Use any character string from 1 to 8 characters in length that conforms to normal TSO naming conventions. If you reached this panel through the DB2 Program Preparation panel, this field contains the data set name qualifier specified there. You can override it on this panel if you wish. If you reached this panel from the DB2I Primary Option Menu, you can either specify a DSNAME QUALIFIER or let the field take its default value, TEMP.
IMS and TSO For IMS and TSO programs, DB2 stores the precompiled source statements (to pass to the compile or assemble step) in a data set named tsoprefix.qualifier.suffix. A data set named tsoprefix.qualifier.PCLIST contains the precompiler print listing. For programs prepared in the background or that use the PREPARATION ENVIRONMENT option EDITJCL (on the DB2 Program Preparation panel), a data set named tsoprefix.qualifier.CNTL contains the program preparation JCL. In these examples, tsoprefix represents the prefix TSO assigns, often the same as the authorization ID. qualifier represents the value entered in the DSNAME QUALIFIER field. suffix represents the output name, which is one of the following: COBOL, FORTRAN, C, PLI, ASM, DECK, CICSIN, OBJ, or DATA. In the example in Figure 156 on page 529, the data set tsoprefix.TEMP.COBOL contains the precompiled source statements, and tsoprefix.TEMP.PCLIST contains the precompiler print listing. If data sets with these names already exist, then DB2 deletes them.
CICS
For CICS programs, the data set tsoprefix.qualifier.suffix receives the precompiled source statements in preparation for CICS command translation. If you do not plan to do CICS command translation, the source statements in tsoprefix.qualifier.suffix are ready to compile. The data set tsoprefix.qualifier.PCLIST contains the precompiler print listing.
When the precompiler completes its work, control passes to the CICS command translator. Because there is no panel for the translator, translation takes place automatically. The data set tsoprefix.qualifier.CXLIST contains the output from the command translator.
4 DBRM DATA SET
   Lets you name the DBRM library data set for the precompiler output. The data set can also include a member name. When you reach this panel, the field is blank. When you press ENTER, however, the value contained in the DSNAME QUALIFIER field of the panel, concatenated with DBRM, specifies the DBRM data set: qualifier.DBRM. You can enter another data set name in this field only if you allocate and catalog the data set before doing so. This is true even if the data set name that you enter corresponds to what is otherwise the default value of this field. The precompiler sends modified source code to the data set qualifier.host, where host is the language specified in the APPLICATION LANGUAGE field of DB2I Defaults panel 1.
5 WHERE TO PRECOMPILE
   Lets you indicate whether to precompile in the foreground or background. You can also specify EDITJCL, in which case you are able to edit and then submit the job. If you reached this panel from the DB2 Program Preparation panel, the field contains the preparation environment specified there. You can override that value if you wish. If you reached this panel directly from the DB2I Primary Option Menu, you can either specify a processing environment or allow this field to take its default value. Use:
   FOREGROUND to immediately precompile the program with the values you specify in these panels.
   BACKGROUND to create and immediately submit to run a file containing a DSNH CLIST using the JOB control statement from either DB2I Defaults Panel 2 or your site's SUBMIT exit. The file is saved.
   EDITJCL to create and open a file containing a DSNH CLIST in edit mode. You can then submit the CLIST or save it.
6 VERSION
   Lets you specify the version of the program and its DBRM. If the version contains the maximum number of characters permitted (64), you must enter each character with no intervening blanks from one line to the next. This field is optional. See Advantages of packages on page 385 for more information about this option.
7 OTHER OPTIONS
   Lets you enter any option that the DSNH CLIST accepts, which gives you greater control over your program. The DSNH options you specify in this field override options specified on other panels. The option list can continue to the next line, but the total length of the list can be no more than 70 bytes. For more information about DSNH options, see Part 3 of DB2 Command Reference.
BIND/REBIND/FREE
SSID: DSN
Select one of the following and press ENTER:

1  BIND PLAN               (Add or replace an application plan)
2  REBIND PLAN             (Rebind existing application plan or plans)
3  FREE PLAN               (Erase application plan or plans)
4  BIND PACKAGE            (Add or replace a package)
5  REBIND PACKAGE          (Rebind existing package or packages)
6  REBIND TRIGGER PACKAGE  (Rebind existing package or packages)
7  FREE PACKAGE
This panel lets you select the process you need. 1 BIND PLAN Lets you build an application plan. You must have an application plan to allocate DB2 resources and support SQL requests during run time. If you select this option, the Bind Plan panel displays. For more information, see Bind Plan panel on page 536. 2 REBIND PLAN Lets you rebuild an application plan when changes to it affect the plan but the SQL statements in the program are the same. For example, you should rebind when you change authorizations, create a new index that the plan uses, or use RUNSTATS. If you select this option, the Rebind Plan panel displays. For more information, see Rebind Plan panel on page 542. 3 FREE PLAN Lets you delete plans from DB2. If you select this option, the Free Plan panel displays. For more information, see Free Plan panel on page 545. 4 BIND PACKAGE Lets you build a package. If you select this option, the Bind Package panel displays. For more information, see Bind Package panel on page 533. 5 REBIND PACKAGE Lets you rebuild a package when changes to it affect the package but the SQL statements in the program are the same. For example, you should rebind when you change authorizations, create a new index that the package uses, or use RUNSTATS. If you select this option, the Rebind Package panel displays. For more information, see Rebind Package panel on page 539. 6 REBIND TRIGGER PACKAGE Lets you rebuild a trigger package when you need to change options for the package. When you execute CREATE TRIGGER, DB2 binds a trigger package using a set of default options. You can use REBIND TRIGGER PACKAGE to change those options. For example, you can use REBIND TRIGGER PACKAGE to change the isolation level for the trigger package. If you select this option, the Rebind Trigger Package panel displays. For more information, see Rebind Trigger Package panel on page 540. 7 FREE PACKAGE Lets you delete a specific version of a package, all versions of a package,
or whole collections of packages from DB2. If you select this option, the Free Package panel displays. For more information, see Free Package panel on page 544.
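For comparison with the panel options above, the DSN subcommands that correspond to them have forms like the following; the plan, collection, and package names are illustrative:

  REBIND PLAN(MYPLAN)
  REBIND PACKAGE(MYCOLL.MYPKG)
  REBIND TRIGGER PACKAGE(MYCOLL.MYTRIG) ISOLATION(CS)
  FREE PACKAGE(MYCOLL.MYPKG.(*))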
The following information explains the functions on the Bind Package panel and how to fill the necessary fields in order to bind your program. For more information, see the BIND PACKAGE command in Part 3 of DB2 Command Reference.
1 LOCATION NAME
   Lets you specify the system at which to bind the package. You can use from 1 to 16 characters to specify the location name. The location name must be defined in the catalog table SYSIBM.LOCATIONS. The default is the local DBMS.
2 COLLECTION-ID
   Lets you specify the collection the package is in. You can use from 1 to 18 characters to specify the collection, and the first character must be alphabetic.
3 DBRM: COPY:
   Lets you specify whether you are creating a new package (DBRM) or making a copy of a package that already exists (COPY). Use:
   DBRM To create a new package. You must specify values in the LIBRARY, PASSWORD, and MEMBER fields.
   COPY To copy an existing package. You must specify values in the COLLECTION-ID and PACKAGE-ID fields. (The VERSION field is optional.)
4 MEMBER or COLLECTION-ID
   MEMBER (for new packages): If you are creating a new package, this option lets you specify the DBRM to bind. You can specify a member name from 1 to 8 characters. The default name depends on the input data set name.
   v If the input data set is partitioned, the default name is the member name of the input data set specified in the INPUT DATA SET NAME field of the DB2 Program Preparation panel.
   v If the input data set is sequential, the default name is the second qualifier of this input data set.
   COLLECTION-ID (for copying a package): If you are copying a package, this option specifies the collection ID that contains the original package. You can specify a collection ID from 1 to 18 characters, which must be different from the collection ID specified on the PACKAGE ID field.
5 PASSWORD or PACKAGE-ID
   PASSWORD (for new packages): If you are creating a new package, this lets you enter the password for the library you list in the LIBRARY field. You can use this field only if you reached the Bind Package panel directly from the DB2 Primary Option Menu.
   PACKAGE-ID (for copying packages): If you are copying a package, this option lets you specify the name of the original package. You can enter a package ID from 1 to 8 characters.
6 LIBRARY or VERSION
   LIBRARY (for new packages): If you are creating a new package, this lets you specify the names of the libraries that contain the DBRMs specified on the MEMBER field for the bind process. Libraries are searched in the order specified and must be in the catalog tables.
   VERSION (for copying packages): If you are copying a package, this option lets you specify the version of the original package. You can specify a version ID from 1 to 64 characters. See Advantages of packages on page 385 for more information about this option.
7 OPTIONS
   Lets you specify which bind options DB2 uses when you issue BIND PACKAGE with the COPY option. Specify:
   COMPOSITE (default) to cause DB2 to use any options you specify in the BIND PACKAGE command. For all other options, DB2 uses the options of the copied package.
   COMMAND to cause DB2 to use the options you specify in the BIND PACKAGE command. For all other options, DB2 uses the following values:
   For a local copy of a package, DB2 uses the defaults for the local DB2 subsystem.
   For a remote copy of a package, DB2 uses the defaults for the server on which the package is bound.
8 CHANGE CURRENT DEFAULTS?
   Lets you specify whether to change the current defaults for binding
packages. If you enter YES in this field, you see the Defaults for Bind Package panel as your next step. You can enter your new preferences there; for instructions, see The Defaults for Bind or Rebind Package or Plan panels on page 546.
9 ENABLE/DISABLE CONNECTIONS? Lets you specify whether you want to enable and disable system connection types to use with this package. This is valid only if the LOCATION NAME field names your local DB2 system. Placing YES in this field displays a panel (shown in Figure 167 on page 551) that lets you specify whether various system connections are valid for this application. You can specify connection names to further identify enabled connections within a connection type. A connection name is valid only when you also specify its corresponding connection type. The default enables all connection types.
10 OWNER OF PACKAGE (AUTHID) Lets you specify the primary authorization ID of the owner of the new package. That ID is the name owning the package, and the name associated with all accounting and trace records produced by the package. The owner must have the privileges required to run SQL statements contained in the package. The default is the primary authorization ID of the bind process.
11 QUALIFIER Lets you specify the implicit qualifier for unqualified tables, views, indexes, and aliases. You can specify a qualifier from 1 to 8 characters. The default is the authorization ID of the package owner.
12 ACTION ON PACKAGE Lets you specify whether to replace an existing package or create a new one. Use:
REPLACE (default) to replace the package named in the PACKAGE-ID field if it already exists, and add it if it does not. (Use this option if you are changing the package because the SQL statements in the program changed. If only the SQL environment changes but not the SQL statements, you can use REBIND PACKAGE.)
ADD to add the package named in the PACKAGE-ID field, only if it does not already exist.
13 INCLUDE PATH? Indicates whether you will supply a list of schema names that DB2 searches when it resolves unqualified distinct type, user-defined function, and stored procedure names in SQL statements. The default is NO. If you specify YES, DB2 displays a panel in which you specify the names of schemas for DB2 to search.
14 REPLACE VERSION Lets you specify whether to replace a specific version of an existing package or create a new one. If the package and the version named in the PACKAGE-ID and VERSION fields already exist, you must specify REPLACE. You can specify a version ID from 1 to 64 characters. The default version ID is that specified in the VERSION field.
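Taken together, these fields correspond to options of the BIND PACKAGE subcommand. The following sketch is illustrative only; the collection, member, library, and version names are hypothetical, and your site's values will differ. Binding a new package from a DBRM might look like this:

  BIND PACKAGE(COLLA) MEMBER(SAMPLEPG) -
       LIBRARY('prefix.DBRMLIB.DATA') -
       ACTION(REPLACE) QUALIFIER(DSN8810)

Copying an existing package into another collection, taking most options from the copied package, might look like this:

  BIND PACKAGE(COLLB) COPY(COLLA.SAMPLEPG) COPYVER(V1) OPTIONS(COMPOSITE)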
Enter DBRM data set name(s):
 1 MEMBER .................. ===> SAMPLEPG
 2 PASSWORD ................ ===>
 3 LIBRARY ................. ===> TEMP.DBRM
 4 ADDITIONAL DBRMS? ....... ===> NO

Enter options as desired:
 5 PLAN NAME ................ ===> SAMPLEPG   (Required to create a plan)
 6 CHANGE CURRENT DEFAULTS? . ===> NO         (NO or YES)
 7 ENABLE/DISABLE CONNECTIONS?===> NO         (NO or YES)
 8 INCLUDE PACKAGE LIST?..... ===> NO         (NO or YES)
 9 OWNER OF PLAN (AUTHID) ... ===>            (Leave blank for your primary ID)
10 QUALIFIER ................ ===>            (For tables, views, and aliases)
11 CACHESIZE ................ ===> 0          (Blank, or value 0-4096)
12 ACTION ON PLAN ........... ===> REPLACE    (REPLACE or ADD)
13 RETAIN EXECUTION AUTHORITY ===> YES        (YES to retain user list)
14 CURRENT SERVER ........... ===>            (Location name)
15 INCLUDE PATH? ............ ===>            (NO or YES)
The following explains the functions on the Bind Plan panel and how to fill the necessary fields in order to bind your program. For more information, see the BIND PLAN command in Part 3 of DB2 Command Reference. 1 MEMBER Lets you specify the DBRMs to include in the plan. You can specify a name from 1 to 8 characters. You must specify MEMBER or INCLUDE PACKAGE LIST, or both. If you do not specify MEMBER, fields 2, 3, and 4 are ignored. The default member name depends on the input data set. v If the input data set is partitioned, the default name is the member name of the input data set specified in field 1 of the DB2 Program Preparation panel. v If the input data set is sequential, the default name is the second qualifier of this input data set. If you reached this panel directly from the DB2I Primary Option Menu, you must provide values for the MEMBER and LIBRARY fields. If you plan to use more than one DBRM, you can include the library name and member name of each DBRM in the MEMBER and LIBRARY fields, separating entries with commas. You can also specify more DBRMs by using the ADDITIONAL DBRMS? field on this panel.
2 PASSWORD Lets you enter passwords for the libraries you list in the LIBRARY field. You can use this field only if you reached the Bind Plan panel directly from the DB2 Primary Option Menu.
3 LIBRARY Lets you specify the name of the library or libraries that contain the DBRMs to use for the bind process. You can specify a name up to 44 characters long.
4 ADDITIONAL DBRMS? Lets you specify more DBRM entries if you need more room. Or, if you reached this panel as part of the program preparation process, you can include more DBRMs by entering YES in this field. A separate panel then displays, where you can enter more DBRM library and member names; see Panels for entering lists of values on page 552.
5 PLAN NAME Lets you name the application plan to create. You can specify a name from 1 to 8 characters, and the first character must be alphabetic. If there are no errors, the bind process prepares the plan and enters its description into the EXPLAIN table. If you reached this panel through the DB2 Program Preparation panel, the default for this field depends on the value you entered in the INPUT DATA SET NAME field of that panel. If you reached this panel directly from the DB2 Primary Option Menu, you must include a plan name if you want to create an application plan. The default name for this field depends on the input data set:
v If the input data set is partitioned, the default name is the member name.
v If the input data set is sequential, the default name is the second qualifier of the data set name.
6 CHANGE CURRENT DEFAULTS? Lets you specify whether to change the current defaults for binding plans. If you enter YES in this field, you see the Defaults for Bind Plan panel as your next step. You can enter your new preferences there; for instructions, see The Defaults for Bind or Rebind Package or Plan panels on page 546.
7 ENABLE/DISABLE CONNECTIONS? Lets you specify whether you want to enable and disable system connection types to use with this plan. This is valid only if the LOCATION NAME field names your local DB2 system. Placing YES in this field displays a panel (shown in Figure 167 on page 551) that lets you specify whether various system connections are valid for this application. You can specify connection names to further identify enabled connections within a connection type. A connection name is valid only when you also specify its corresponding connection type. The default enables all connection types.
8 INCLUDE PACKAGE LIST? Lets you include a list of packages in the plan. If you specify YES, a separate panel displays on which you must enter the package location, collection name, and package name for each package to include in the plan (see Panels for entering lists of values on page 552). This list is optional if you use the MEMBER field.
You can specify a location name from 1 to 16 characters, a collection ID from 1 to 18 characters, and a package ID from 1 to 8 characters. If you specify a location name, which is optional, it must be in the catalog table SYSIBM.LOCATIONS; the default location is the local DBMS. You must specify INCLUDE PACKAGE LIST? or MEMBER, or both, as input to the bind plan. 9 OWNER OF PLAN (AUTHID) Lets you specify the primary authorization ID of the owner of the new plan. That ID is the name owning the plan, and the name associated with all accounting and trace records produced by the plan. The owner must have the privileges required to run SQL statements contained in the plan.
10 QUALIFIER Lets you specify the implicit qualifier for unqualified tables, views and aliases. You can specify a name from 1 to 8 characters, which must conform to the rules for SQL identifiers. If you leave this field blank, the default qualifier is the authorization ID of the plan owner. 11 CACHESIZE Lets you specify the size (in bytes) of the authorization cache. Valid values are in the range 0 to 4096. Values that are not multiples of 256 round up to the next highest multiple of 256. A value of 0 indicates that DB2 does not use an authorization cache. The default is 1024. Each concurrent user of a plan requires 8 bytes of storage, with an additional 32 bytes for overhead. See Determining the optimal authorization cache size on page 504 for more information about this option. 12 ACTION ON PLAN Lets you specify whether this is a new or changed application plan. Use: REPLACE (default) to replace the plan named in the PLAN NAME field if it already exists, and add the plan if it does not exist. ADD to add the plan named in the PLAN NAME field, only if it does not already exist. 13 RETAIN EXECUTION AUTHORITY Lets you choose whether or not those users with the authority to bind or run the existing plan are to keep that authority over the changed plan. This applies only when you are replacing an existing plan. If the plan ownership changes and you specify YES, the new owner grants BIND and EXECUTE authority to the previous plan owner. If the plan ownership changes and you do not specify YES, then everyone but the new plan owner loses EXECUTE authority (but not BIND authority), and the new plan owner grants BIND authority to the previous plan owner. 14 CURRENT SERVER Lets you specify the initial server to receive and process SQL statements in this plan. You can specify a name from 1 to 16 characters, which you must previously define in the catalog table SYSIBM.LOCATIONS. If you specify a remote server, DB2 connects to that server when the first SQL statement executes. The default is the name of the local DB2 subsystem. For more information about this option, see the bind option CURRENTSERVER in Part 3 of DB2 Command Reference.
15 INCLUDE PATH? Indicates whether you will supply a list of schema names that DB2 searches when it resolves unqualified distinct type, user-defined function, and stored procedure names in SQL statements. The default is NO. If you specify YES, DB2 displays a panel in which you specify the names of schemas for DB2 to search. When you finish making changes to this panel, press ENTER to go to the second of the program preparation panels, Program Prep: Compile, Link, and Run.
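The Bind Plan panel fields map to options of the BIND PLAN subcommand in the same way. The following is only a sketch; the plan, member, library, collection, and location names are hypothetical:

  BIND PLAN(SAMPLEPG) MEMBER(SAMPLEPG) -
       LIBRARY('prefix.DBRMLIB.DATA') -
       PKLIST(*.COLLA.*) -
       ACTION(REPLACE) RETAIN -
       QUALIFIER(DSN8810) -
       CACHESIZE(1024) CURRENTSERVER(LOCATION1)

With CACHESIZE(1024), roughly (1024 - 32) / 8 = 124 concurrent users of the plan can be represented in the authorization cache, because each user requires 8 bytes of storage plus 32 bytes of overhead for the cache as a whole.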
Enter package name(s) to be rebound:
 1 Rebind all local packages . ===>
 2 LOCATION NAME ............. ===>
 3 COLLECTION-ID ............. ===>
 4 PACKAGE-ID ................ ===>
 5 VERSION-ID ................ ===>
 6 ADDITIONAL PACKAGES? ...... ===>

Enter options as desired:
 7 CHANGE CURRENT DEFAULTS?... ===>
 8 OWNER OF PACKAGE (AUTHID).. ===>
 9 QUALIFIER ................. ===>
10 ENABLE/DISABLE CONNECTIONS? ===>
11 INCLUDE PATH? ............. ===>
This panel lets you choose options for rebinding a package. For information about the rebind options that these fields represent, see the REBIND PACKAGE command in Part 3 of DB2 Command Reference.
1 Rebind all local packages Lets you rebind all packages on the local DBMS. To do so, place an asterisk (*) in this field; otherwise, leave it blank.
2 LOCATION NAME Lets you specify where to bind the package. If you specify a location name, you should use from 1 to 16 characters, and you must have defined it in the catalog table SYSIBM.LOCATIONS.
3 COLLECTION-ID Lets you specify the collection of the package to rebind. You must specify a collection ID from 1 to 8 characters, or an asterisk (*) to rebind all collections in the local DB2 system. You cannot use the asterisk to rebind a remote collection.
4 PACKAGE-ID Lets you specify the name of the package to rebind. You must specify a package ID from 1 to 8 characters, or an asterisk (*) to rebind all packages in the specified collections in the local DB2 system. You cannot use the asterisk to rebind a remote package.
5 VERSION-ID Lets you specify the version of the package to rebind. You must specify a version ID from 1 to 64 characters, or an asterisk (*) to rebind all versions in the specified collections and packages in the local DB2 system. You cannot use the asterisk to rebind a remote version.
6 ADDITIONAL PACKAGES? Lets you indicate whether to name more packages to rebind. Use YES to specify more packages on an additional panel, described on Panels for entering lists of values on page 552. The default is NO.
7 CHANGE CURRENT DEFAULTS? Lets you indicate whether to change the binding defaults. Use:
NO (default) to retain the binding defaults of the previous package.
YES to change the binding defaults from the previous package.
For information about the defaults for binding packages, see The Defaults for Bind or Rebind Package or Plan panels on page 546.
8 OWNER OF PACKAGE (AUTHID) Lets you change the authorization ID for the package owner. The owner must have the required privileges to execute the SQL statements in the package. The default is the existing package owner.
9 QUALIFIER Lets you specify the implicit qualifier for all unqualified table names, views, indexes, and aliases in the package. You can specify a qualifier name from 1 to 8 characters, which must conform to the rules for the SQL short identifier. The default is the existing qualifier name.
10 ENABLE/DISABLE CONNECTIONS? Lets you specify whether you want to enable and disable system connection types to use with this package. This is valid only if the LOCATION NAME field names your local DB2 system. Placing YES in this field displays a panel (shown in Figure 167 on page 551) that lets you specify whether various system connections are valid for this application. The default is the values used for the previous package.
11 INCLUDE PATH? Indicates which one of the following actions you want to perform:
v Request that DB2 uses the same schema names as when the package was bound for resolving unqualified distinct type, user-defined function, and stored procedure names in SQL statements. Choose SAME to perform this action. This is the default.
v Supply a list of schema names that DB2 searches when it resolves unqualified distinct type, user-defined function, and stored procedure names in SQL statements. Choose YES to perform this action.
v Request that DB2 resets the SQL path to SYSIBM, SYSFUN, SYSPROC, and the package owner. Choose DEFAULT to perform this action.
If you specify YES, DB2 displays a panel in which you specify the names of schemas for DB2 to search.
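As an illustration (the collection, package, and version names are hypothetical), rebinding a single package version and rebinding everything on the local DB2 subsystem might look like this:

  REBIND PACKAGE(COLLA.SAMPLEPG.(V1)) QUALIFIER(DSN8810)
  REBIND PACKAGE(*)

The second form corresponds to placing an asterisk in the Rebind all local packages field; it rebinds all local packages that you are authorized to rebind.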
Enter trigger package name(s) to be rebound:
 1 Rebind all trigger packages ===>
 2 LOCATION NAME ............. ===>
 3 COLLECTION-ID (SCHEMA NAME) ===>
 4 PACKAGE-ID (TRIGGER NAME).. ===>

Enter options as desired:
 5 ISOLATION LEVEL ........... ===>   (SAME, RR, RS, CS, UR, or NC)
 6 RESOURCE RELEASE TIME ..... ===>   (SAME, DEALLOCATE, or COMMIT)
 7 EXPLAIN PATH SELECTION .... ===>   (SAME, NO, or YES)
 8 DATA CURRENCY ............. ===>   (SAME, NO, or YES)
 9 IMMEDIATE WRITE OPTION .... ===>   (SAME, NO, YES)
This panel lets you choose options for rebinding a trigger package. For information about the rebind options that these fields represent, see the REBIND TRIGGER PACKAGE command in Part 3 of DB2 Command Reference.
1 Rebind all trigger packages Lets you rebind all packages on the local DBMS. To do so, place an asterisk (*) in this field; otherwise, leave it blank.
2 LOCATION NAME Lets you specify where to bind the trigger package. If you specify a location name, you should use from 1 to 16 characters, and you must have defined it in the catalog table SYSIBM.LOCATIONS.
3 COLLECTION-ID (SCHEMA NAME) Lets you specify the collection of the trigger package to rebind. You must specify a collection ID from 1 to 8 characters, or an asterisk (*) to rebind all collections in the local DB2 system. You cannot use the asterisk to rebind a remote collection.
4 PACKAGE-ID Lets you specify the name of the trigger package to rebind. You must specify a package ID from 1 to 8 characters, or an asterisk (*) to rebind all trigger packages in the specified collections in the local DB2 system. You cannot use the asterisk to rebind a remote trigger package.
5 ISOLATION LEVEL Lets you specify how far to isolate your application from the effects of other running applications. The default is the value used for the old trigger package. Use RR, RS, CS, UR, or NC. For a description of the effects of those values, see The ISOLATION option on page 412.
6 RESOURCE RELEASE TIME Lets you specify COMMIT or DEALLOCATE to tell when to release locks on resources. The default is that used for the old trigger package. For a description of the effects of those values, see The ACQUIRE and RELEASE options on page 408.
7 EXPLAIN PATH SELECTION Lets you specify YES or NO for whether to obtain EXPLAIN information about how SQL statements in the package execute. The default is the value used for the old trigger package.
The bind process inserts information into the table owner.PLAN_TABLE, where owner is the authorization ID of the plan or package owner. If you defined owner.DSN_STATEMNT_TABLE, DB2 also inserts information about the cost of statement execution into that table. If you specify YES in this field and BIND in the VALIDATION TIME field, and if you do not correctly define PLAN_TABLE, the bind fails. For information about EXPLAIN and creating a PLAN_TABLE, see Obtaining PLAN_TABLE information from EXPLAIN on page 790.
8 DATA CURRENCY Lets you specify YES or NO for whether you need data currency for ambiguous cursors opened at remote locations. The default is the value used for the old trigger package. Data is current if the data within the host structure is identical to the data within the base table. Data is always current for local processing. For more information about data currency, see Maintaining data currency by using cursors on page 467.
9 IMMEDIATE WRITE OPTION Specifies when DB2 writes the changes for updated group buffer pool-dependent pages. This field applies only to a data sharing environment. The values that you can specify are:
SAME Choose the value of IMMEDIATE WRITE that you specified when you bound the trigger package. SAME is the default.
NO Write the changes at or before phase 1 of the commit process. If the transaction is rolled back later, write the additional changes that are caused by the rollback at the end of the abort process. PH1 is equivalent to NO.
YES Write the changes immediately after group buffer pool-dependent pages are updated.
For more information about this option, see the bind option IMMEDWRITE in Part 3 of DB2 Command Reference.
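For illustration only (the schema and trigger names are hypothetical), a rebind that changes a few of these options on a local trigger package might look like this:

  REBIND TRIGGER PACKAGE(ADMF001.TRIG1) -
         ISOLATION(CS) RELEASE(COMMIT) EXPLAIN(NO) -
         CURRENTDATA(NO) IMMEDWRITE(NO)

Here ADMF001 is the schema (collection) and TRIG1 is the trigger (package) name; any option that you omit keeps the value of the old trigger package.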
Enter plan name(s) to be rebound:
 1 PLAN NAME ................. ===>
 2 ADDITIONAL PLANS? ......... ===> NO

Enter options as desired:
 3 CHANGE CURRENT DEFAULTS?... ===>   (SAME or YES)
 4 OWNER OF PLAN (AUTHID)..... ===>   (SAME or new OWNER)
 5 QUALIFIER ................. ===>   (SAME or new QUALIFIER)
 6 CACHESIZE ................. ===>   (SAME or value 0-4096)
 7 ENABLE/DISABLE CONNECTIONS? ===>   (SAME or YES)
 8 INCLUDE PACKAGE LIST?...... ===>   (SAME, NO, or YES)
 9 CURRENT SERVER ............ ===>   (Location name)
10 INCLUDE PATH? ............. ===>   (SAME, DEFAULT, or YES)
This panel lets you specify options for rebinding your plan. For information about the rebind options that these fields represent, see the REBIND PLAN command in Part 3 of DB2 Command Reference.
1 PLAN NAME Lets you name the application plan to rebind. You can specify a name from 1 to 8 characters, and the first character must be alphabetic. Do not begin the name with DSN, because it could create name conflicts with DB2. If there are no errors, the bind process prepares the plan and enters its description into the EXPLAIN table. If you leave this field blank, the bind process occurs but produces no plan.
2 ADDITIONAL PLANS? Lets you indicate whether to name more plans to rebind. Use YES to specify more plans on an additional panel, described at Panels for entering lists of values on page 552. The default is NO.
3 CHANGE CURRENT DEFAULTS? Lets you indicate whether to change the binding defaults. Use:
NO (default) to retain the binding defaults of the previous plan.
YES to change the binding defaults from the previous plan.
For information about the defaults for binding plans, see The Defaults for Bind or Rebind Package or Plan panels on page 546.
4 OWNER OF PLAN (AUTHID) Lets you change the authorization ID for the plan owner. The owner must have the required privileges to execute the SQL statements in the plan. The default is the existing plan owner.
5 QUALIFIER Lets you specify the implicit qualifier for all unqualified table names, views, indexes, and aliases in the plan. You can specify a qualifier name from 1 to 8 characters, which must conform to the rules for the SQL identifier. The default is the authorization ID.
6 CACHESIZE Lets you specify the size (in bytes) of the authorization cache. Valid values are in the range 0 to 4096. Values that are not multiples of 256 round up to the next highest multiple of 256. A value of 0 indicates that DB2 does not use an authorization cache. The default is the cache size specified for the previous plan. Each concurrent user of a plan requires 8 bytes of storage, with an additional 32 bytes for overhead. See Determining the optimal authorization cache size on page 504 for more information about this option.
7 ENABLE/DISABLE CONNECTIONS? Lets you specify whether you want to enable and disable system connection types to use with this plan. This is valid only for rebinding on your local DB2 system. Placing YES in this field displays a panel (shown in Figure 167 on page 551) that lets you specify whether various system connections are valid for this application. The default is the values used for the previous plan.
8 INCLUDE PACKAGE LIST? Lets you include a list of collections and packages in the plan. If you specify YES, a separate panel displays on which you must enter the
package location, collection name, and package name for each package to include in the plan (see Panels for entering lists of values on page 552). This field can either add a package list to a plan that did not have one, or replace an existing package list.
You can specify a location name from 1 to 16 characters, a collection ID from 1 to 18 characters, and a package ID from 1 to 8 characters. Separate two or more package list parameters with a comma. If you specify a location name, it must be in the catalog table SYSIBM.LOCATIONS. The default location is the package list used for the previous plan.
9 CURRENT SERVER Lets you specify the initial server to receive and process SQL statements in this plan. You can specify a name from 1 to 16 characters, which you must previously define in the catalog table SYSIBM.LOCATIONS. If you specify a remote server, DB2 connects to that server when the first SQL statement executes. The default is the name of the local DB2 subsystem. For more information about this option, see the bind option CURRENTSERVER in Part 3 of DB2 Command Reference.
10 INCLUDE PATH? Indicates which one of the following actions you want to perform:
v Request that DB2 uses the same schema names as when the plan was bound for resolving unqualified distinct type, user-defined function, and stored procedure names in SQL statements. Choose SAME to perform this action. This is the default.
v Supply a list of schema names that DB2 searches when it resolves unqualified distinct type, user-defined function, and stored procedure names in SQL statements. Choose YES to perform this action.
v Request that DB2 resets the SQL path to SYSIBM, SYSFUN, SYSPROC, and the plan owner. Choose DEFAULT to perform this action.
If you specify YES, DB2 displays a panel in which you specify the names of schemas for DB2 to search.
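As a hypothetical sketch (the plan, collection, and location names are made up), a rebind that replaces the package list and changes the current server might look like this:

  REBIND PLAN(SAMPLEPG) -
         PKLIST(*.COLLA.*, LOCB.COLLB.*) -
         CACHESIZE(1024) CURRENTSERVER(LOCB)

Options that you do not mention keep the values that were used when the plan was last bound.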
This panel lets you specify options for freeing packages.
1 Free ALL packages Lets you free (erase) all packages for which you have authorization or to which you have BINDAGENT authority. To do so, place an asterisk (*) in this field; otherwise, leave it blank.
2 LOCATION NAME Lets you specify the location name of the DBMS to free the package. You can specify a name from 1 to 16 characters.
3 COLLECTION-ID Lets you specify the collection from which you want to delete packages that you own or for which you have BINDAGENT privileges. You can specify a name from 1 to 18 characters, or an asterisk (*) to free all collections in the local DB2 system. You cannot use the asterisk to free a remote collection.
4 PACKAGE-ID Lets you specify the name of the package to free. You can specify a name from 1 to 8 characters, or an asterisk (*) to free all packages in the specified collections in the local DB2 system. You cannot use the asterisk to free a remote package. The name you specify must be in the DB2 catalog tables.
5 VERSION-ID Lets you specify the version of the package to free. You can specify an identifier from 1 to 64 characters, or an asterisk (*) to free all versions of the specified collections and packages in the local DB2 system. You cannot use the asterisk to free a remote version.
6 ADDITIONAL PACKAGES? Lets you indicate whether to name more packages to free. Use YES to specify more packages on an additional panel, described in Panels for entering lists of values on page 552. The default is NO.
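The corresponding FREE PACKAGE subcommands, using hypothetical names, look like this:

  FREE PACKAGE(COLLA.SAMPLEPG.(V1))
  FREE PACKAGE(COLLA.*)

The first form frees one version of one package; the second frees every package in collection COLLA on the local DB2 subsystem.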
Enter plan name(s) to be freed:
 1 PLAN NAME ............ ===>
 2 ADDITIONAL PLANS? .... ===>
This panel lets you specify options for freeing plans. 1 PLAN NAME Lets you name the application plan to delete from DB2. Use an asterisk to free all plans for which you have BIND authority. You can specify a name from 1 to 8 characters, and the first character must be alphabetic. If there are errors, the free process terminates for that plan and continues with the next plan. 2 ADDITIONAL PLANS? Lets you indicate whether to name more plans to free. Use YES to specify more plans on an additional panel, described in Panels for entering lists of values on page 552. The default is NO.
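The equivalent FREE PLAN subcommands, with a hypothetical plan name, are:

  FREE PLAN(SAMPLEPG)
  FREE PLAN(*)

The second form frees all plans for which you have the necessary authority, so use it with care.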
Change default options as necessary:

 1 ISOLATION LEVEL ......... ===>          (RR, RS, CS, UR, or NC)
 2 VALIDATION TIME ......... ===>          (RUN or BIND)
 3 RESOURCE RELEASE TIME ... ===>          (COMMIT or DEALLOCATE)
 4 EXPLAIN PATH SELECTION .. ===>          (NO or YES)
 5 DATA CURRENCY ........... ===>          (NO or YES)
 6 PARALLEL DEGREE ......... ===>          (1 or ANY)
 7 SQLERROR PROCESSING ..... ===>          (NOPACKAGE or CONTINUE)
 8 REOPTIMIZE FOR INPUT VARS ===>          (ALWAYS, NONE, or ONCE)
 9 DEFER PREPARE ........... ===>          (NO or YES)
10 KEEP DYN SQL PAST COMMIT  ===>          (NO or YES)
11 DBPROTOCOL .............. ===>          (DRDA or PRIVATE)
12 APPLICATION ENCODING .... ===>          (Blank, ASCII, EBCDIC, UNICODE, or ccsid)
13 OPTIMIZATION HINT ....... ===>          (Blank or hint-id)
14 IMMEDIATE WRITE ......... ===>          (YES, NO)
15 DYNAMIC RULES ........... ===>          (RUN, BIND, DEFINE, or INVOKE)
This panel lets you change your defaults for BIND PACKAGE options. With a few minor exceptions, the options on this panel are the same as the options for the defaults for rebinding a package. However, the defaults for REBIND PACKAGE are different from those shown in the preceding figure, and you can specify SAME in any field to specify the values used the last time the package was bound. For rebinding, the default value for all fields is SAME. On this panel, enter new defaults for binding your plan.
DSNEBP10                   DEFAULTS FOR BIND PLAN                   SSID: DSN
COMMAND ===>

Change default options as necessary:

 1 ISOLATION LEVEL ......... ===> RR       (RR, RS, CS, or UR)
 2 VALIDATION TIME ......... ===> RUN      (RUN or BIND)
 3 RESOURCE RELEASE TIME ... ===> COMMIT   (COMMIT or DEALLOCATE)
 4 EXPLAIN PATH SELECTION .. ===> NO       (NO or YES)
 5 DATA CURRENCY ........... ===> NO       (NO or YES)
 6 PARALLEL DEGREE ......... ===> 1        (1 or ANY)
 7 RESOURCE ACQUISITION TIME ===> USE      (USE or ALLOCATE)
 8 REOPTIMIZE FOR INPUT VARS ===> NONE     (ALWAYS, NONE, ONCE)
 9 DEFER PREPARE ........... ===> NO       (NO or YES)
10 KEEP DYN SQL PAST COMMIT. ===> NO       (NO or YES)
11 DBPROTOCOL .............. ===>          (Blank, DRDA, or PRIVATE)
12 APPLICATION ENCODING .... ===>          (Blank, ASCII, EBCDIC, UNICODE, or ccsid)
13 OPTIMIZATION HINT ....... ===>          (Blank or hint-id)
14 IMMEDIATE WRITE ......... ===>          (YES, NO)
15 DYNAMIC RULES ........... ===>          (RUN or BIND)
16 SQLRULES ................ ===>          (DB2 or STD)
17 DISCONNECT .............. ===>          (EXPLICIT, AUTOMATIC, or CONDITIONAL)
This panel lets you change your defaults for options of BIND PLAN. The options on this panel are mostly the same as the options for the defaults for rebinding a
package. However, for REBIND PLAN defaults, you can specify SAME in any field to specify the values used the last time the plan was bound. For rebinding, the default value for all fields is SAME. Explanations of panel fields: The fields in panels Defaults for Bind Package and Defaults for Bind Plan are: 1 ISOLATION LEVEL Lets you specify how far to isolate your application from the effects of other running applications. The default is the value used for the old plan or package if you are replacing an existing one. Use RR, RS, CS, or UR. For a description of the effects of those values, see The ISOLATION option on page 412. 2 VALIDATION TIME Lets you specify RUN or BIND to tell whether to check authorization at run time or at bind time. The default is that used for the old plan or package, if you are replacing it. For more information about this option, see the bind option VALIDATE in Part 3 of DB2 Command Reference. 3 RESOURCE RELEASE TIME Lets you specify COMMIT or DEALLOCATE to tell when to release locks on resources. The default is that used for the old plan or package, if you are replacing it. For a description of the effects of those values, see The ACQUIRE and RELEASE options on page 408. 4 EXPLAIN PATH SELECTION Lets you specify YES or NO for whether to obtain EXPLAIN information about how SQL statements in the package execute. The default is NO. The bind process inserts information into the table owner.PLAN_TABLE, where owner is the authorization ID of the plan or package owner. If you defined owner.DSN_STATEMNT_TABLE, DB2 also inserts information about the cost of statement execution into that table. If you specify YES in this field and BIND in the VALIDATION TIME field, and if you do not correctly define PLAN_TABLE, the bind fails. For information about EXPLAIN and creating a PLAN_TABLE, see Obtaining PLAN_TABLE information from EXPLAIN on page 790. 5 DATA CURRENCY Lets you specify YES or NO for whether you need data currency for ambiguous cursors opened at remote locations. Data is current if the data within the host structure is identical to the data within the base table. Data is always current for local processing. For more information about data currency, see Maintaining data currency by using cursors on page 467. 6 PARALLEL DEGREE Lets you specify ANY to run queries using parallel processing (when possible) or 1 to request that DB2 not execute queries in parallel. See Chapter 28, Parallel operations and query performance, on page 847 for more information about this option. 8 REOPTIMIZE FOR INPUT VARS Specifies whether DB2 determines access paths at bind time and again at execution time for statements that contain: v Input host variables v Parameter markers
v Special registers
If you specify ALWAYS, DB2 determines the access paths again at execution time. When you specify ALWAYS for this option, you must also specify YES for DEFER PREPARE, or you will receive a bind error.
If you specify ONCE, DB2 determines the access path at the first execution or open time. It saves and continues to use that access path for that specific statement until the statement is invalidated or removed from the dynamic statement cache or until the statement needs to be prepared again.
The default, NONE, specifies that DB2 does not determine the access path at bind time using input host variables, parameter markers, or special registers.
9 DEFER PREPARE Lets you defer preparation of dynamic SQL statements until DB2 encounters the first OPEN, DESCRIBE, or EXECUTE statement that refers to those statements. Specify YES to defer preparation of the statement. For information about using this option, see Using bind options to improve performance for distributed applications on page 456.
10 KEEP DYN SQL PAST COMMIT Specifies whether DB2 keeps dynamic SQL statements after commit points. YES causes DB2 to keep dynamic SQL statements after commit points. An application can execute a PREPARE statement for a dynamic SQL statement once and execute that statement after later commit points without executing PREPARE again. For more information, see Performance of static and dynamic SQL on page 595.
11 DBPROTOCOL Specifies whether DB2 uses DRDA protocol or DB2 private protocol to execute statements that contain 3-part names. For more information, see Chapter 20, Planning to access distributed data, on page 441.
12 APPLICATION ENCODING Specifies the application encoding scheme to be used:
blank Indicates that all host variables in static SQL statements are encoded using the encoding scheme in the DEF ENCODING SCHEME field of installation panel DSNTIPF.
ASCII Indicates that the CCSIDs for all host variables in static SQL statements are determined by the values in the ASCII CODED CHAR SET and MIXED DATA fields of installation panel DSNTIPF.
EBCDIC Indicates that the CCSIDs for all host variables in static SQL statements are determined by the values in the EBCDIC CODED CHAR SET and MIXED DATA fields of installation panel DSNTIPF.
UNICODE Indicates that the CCSIDs of all host variables in static SQL statements are determined by the value in the UNICODE CCSID field of installation panel DSNTIPF.
ccsid Specifies a CCSID that determines the set of CCSIDs that are used for all host variables in static SQL statements. If you specify ccsid, this value should be a mixed CCSID. For Unicode, the mixed CCSID is a UTF-8 CCSID. DB2 derives the SBCS and DBCS CCSIDs.
13 OPTIMIZATION HINT Specifies whether you want to use optimization hints to determine access paths. Specify hint-id to indicate that you want DB2 to use the optimization hints in owner.PLAN_TABLE, where owner is the authorization ID of the plan or package owner. hint-id is a delimited string of up to 8 characters that DB2 compares to the value of OPTHINT in owner.PLAN_TABLE to determine the rows to use for optimization hints. If you specify a nonblank value for hint-id, DB2 uses optimization hints only if the value of field OPTIMIZATION HINTS on installation panel DSNTIP8 is YES.
Blank means that you do not want DB2 to use optimization hints. This is the default.
For more information, see Part 5 (Volume 2) of DB2 Administration Guide.
14 IMMEDIATE WRITE Specifies when DB2 writes the changes for updated group buffer pool-dependent pages. This field applies only to a data sharing environment. The values that you can specify are:
NO Write the changes at or before phase 1 of the commit process. If the transaction is rolled back later, write the additional changes that are caused by the rollback at the end of the abort process. NO is the default. PH1 is equivalent to NO.
YES Write the changes immediately after group buffer pool-dependent pages are updated.
For more information about this option, see the bind option IMMEDWRITE in Part 3 of DB2 Command Reference. 15 DYNAMIC RULES For plans, lets you specify whether run-time (RUN) or bind-time (BIND) rules apply to dynamic SQL statements at run time. For packages, lets you specify whether run-time (RUN) or bind-time (BIND) rules apply to dynamic SQL statements at run time. For packages that run under an active user-defined function or stored procedure environment, the INVOKEBIND, INVOKERUN, DEFINEBIND, and DEFINERUN options indicate who must have authority to execute dynamic SQL statements in the package. For packages, the default rules for a package on the local server are the same as the rules for the plan to which the package appends at run time. For a package on the remote server, the default is RUN. If you specify rules for a package that are different from the rules for the plan, the SQL statements for the package use the rules you specify for that package. If a package that is bound with DEFINEBIND or INVOKEBIND is not executing under an active stored procedure or user-defined function environment, SQL statements for that package use BIND rules. If a package that is bound with DEFINERUN or INVOKERUN is not executing under an active stored procedure or user-defined function environment, SQL statements for that package use RUN rules. For more information, see Using DYNAMICRULES to specify behavior of dynamic SQL statements on page 502. For packages:
7 SQLERROR PROCESSING Lets you specify CONTINUE to continue to create a package after finding SQL errors, or NOPACKAGE to avoid creating a package after finding SQL errors.
For plans:
7 RESOURCE ACQUISITION TIME Lets you specify when to acquire locks on resources. Use:
USE (default) to open table spaces and acquire locks only when the program bound to the plan first uses them.
ALLOCATE to open all table spaces and acquire all locks when you allocate the plan. This value has no effect on dynamic SQL.
For a description of the effects of those values, see The ACQUIRE and RELEASE options on page 408.
16 SQLRULES Lets you specify whether a CONNECT (Type 2) statement executes according to DB2 rules (DB2) or the SQL standard (STD). For information, see Specifying the SQL rules on page 505.
17 DISCONNECT Lets you specify which remote connections end during a commit or a rollback. Regardless of what you specify, all connections in the release-pending state end during commit. Use:
EXPLICIT to end only connections in the release-pending state at COMMIT or ROLLBACK.
AUTOMATIC to end all remote connections.
CONDITIONAL to end remote connections that have no open cursors WITH HOLD associated with them.
See the DISCONNECT option of the BIND PLAN subcommand in Part 3 of DB2 Command Reference for more information about these values.
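Each of these panel fields corresponds to a keyword of the BIND or REBIND subcommands: for example, VALIDATION TIME is the VALIDATE option, RESOURCE RELEASE TIME is RELEASE, DATA CURRENCY is CURRENTDATA, and REOPTIMIZE FOR INPUT VARS is REOPT. As a sketch only (the plan and member names are hypothetical), a BIND PLAN subcommand that spells out the same defaults shown in the Defaults for Bind Plan panel might look like this:

  BIND PLAN(SAMPLEPG) MEMBER(SAMPLEPG) -
       ISOLATION(RR) VALIDATE(RUN) RELEASE(COMMIT) -
       EXPLAIN(NO) CURRENTDATA(NO) DEGREE(1) ACQUIRE(USE) -
       REOPT(NONE) NODEFER(PREPARE) KEEPDYNAMIC(NO) -
       DBPROTOCOL(DRDA) ENCODING(EBCDIC) IMMEDWRITE(NO) -
       DYNAMICRULES(RUN) SQLRULES(DB2) DISCONNECT(EXPLICIT)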
DSNEBP13            SYSTEM CONNECTION TYPES FOR BIND ...            SSID: DSN
COMMAND ===>

Select system connection types to be Enabled/Disabled: 1 or 2

1 ENABLE ALL CONNECTION TYPES? .............. ===>

2 ENABLE/DISABLE SPECIFIC CONNECTION TYPES .. ===>
      BATCH ....... ===>   (Y/N)
      DB2CALL ..... ===>   (Y/N)
      RRSAF ....... ===>   (Y/N)
      CICS ........ ===>   (Y/N)
      IMS ......... ===>   (Y/N)
      DLIBATCH .... ===>   (Y/N)
      IMSBMP ...... ===>   (Y/N)
      IMSMPP ...... ===>   (Y/N)
      REMOTE ...... ===>   (Y/N)
To enable or disable connection types (that is, allow or prevent the connection from running the package or plan), enter the following information. 1 ENABLE ALL CONNECTION TYPES? Lets you enter an asterisk (*) to enable all connections. After that entry, you can ignore the rest of the panel. 2 ENABLE/DISABLE SPECIFIC CONNECTION TYPES Lets you specify a list of types to enable or disable; you cannot enable some types and disable others in the same operation. If you list types to enable, enter E; that disables all other connection types. If you list types to disable, enter D; that enables all other connection types. For more information about this option, see the bind options ENABLE and DISABLE in Part 3 of DB2 Command Reference. For each connection type that follows, enter Y (yes) if it is on your list, N (no) if it is not. The connection types are: v BATCH for a TSO connection v DB2CALL for a CAF connection v RRSAF for an RRSAF connection v CICS for a CICS connection v IMS for all IMS connections: DLIBATCH, IMSBMP, and IMSMPP v DLIBATCH for a DL/I Batch Support Facility connection v IMSBMP for an IMS connection to a BMP region v IMSMPP for an IMS connection to an MPP or IFP region v REMOTE for remote location names and LU names For each connection type that has a second arrow, under SPECIFY CONNECTION NAMES?, enter Y if you want to list specific connection names of that type. Leave N (the default) if you do not. If you use Y in any of those fields, you see another panel on which you can enter the connection names. For more information, see Panels for entering lists of values on page 552. If you use the DISPLAY command under TSO on this panel, you can determine what you have currently defined as enabled or disabled in your ISPF DSNSPFT library (member DSNCONNS). The information does not reflect the current state of the DB2 Catalog. If you type DISPLAY ENABLED on the command line, you get the connection names that are currently enabled for your TSO connection types. For example:
Display OF ALL connection name(s) to be ENABLED

CONNECTION   SUBSYSTEM
  CICS1      ENABLED
  CICS2      ENABLED
  CICS3      ENABLED
  CICS4      ENABLED
  DLI1       ENABLED
  DLI2       ENABLED
  DLI3       ENABLED
  DLI4       ENABLED
  DLI5       ENABLED
  ...        ...
For the syntax of specifying names on a list panel, see Part 3 of DB2 Command Reference for the type of name you need to specify.
All of the list panels let you enter limited commands in two places:
v On the system command line, prefixed by ====>
v In a special command area on each list line
On the system command line, you can use:
END Saves all entered variables, exits the table, and continues to process.
CANCEL Discards all entered variables, terminates processing, and returns to the previous panel.
SAVE Saves all entered variables and remains in the table.
In the special command area, you can use:
Inn Insert nn lines after this one.
Dnn Delete this line and the following lines, for a total of nn lines.
Rnn Repeat this line nn times.
The default for nn is 1. When you finish with a list panel, specify END to save the current panel values and continue processing.
Figure 169. The Program Preparation: Compile, Link, and Run panel
1,2 INCLUDE LIBRARY Lets you specify up to two libraries containing members for the compiler to include. The members can also be output from DCLGEN. You can leave these fields blank if you wish. There is no default. 3 OPTIONS Lets you specify compiler, assembler, or PL/I macro processor options. You can also enter a list of compiler or assembler options by separating entries with commas, blanks, or both. You can leave these fields blank if you wish. There is no default. 4,5,6 INCLUDE LIBRARY Lets you enter the names of up to three libraries containing members for the linkage editor to include. You can leave these fields blank if you wish. There is no default. 7 LOAD LIBRARY Lets you specify the name of the library to hold the load module. The default value is RUNLIB.LOAD. If the load library specified is a PDS, and the input data set is a PDS, the member name specified in INPUT DATA SET NAME field of the Program Preparation panel is the load module name. If the input data set is sequential, the second qualifier of the input data set is the load module name. You must fill in this field if you request LINK or RUN on the Program Preparation panel.
8 PRELINK OPTIONS Lets you enter a list of prelinker options. Separate items in the list with commas, blanks, or both. You can leave this field blank if you wish. There is no default. The prelink utility applies only to programs using C, C++, and Enterprise COBOL for z/OS. See z/OS Language Environment Programming Guide for more information about prelinker options. 9 LINK OPTIONS Lets you enter a list of link-edit options. Separate items in the list with commas, blanks, or both. To prepare a program that uses 31-bit addressing and runs above the 16-megabyte line, specify the following link-edit options: AMODE=31, RMODE=ANY. 10 PARAMETERS Lets you specify a list of parameters you want to pass either to your host language run-time processor, or to your application. Separate items in the list with commas, blanks, or both. You can leave this field blank. If you are preparing an IMS or CICS program, you must leave this field blank; you cannot use DB2I to run IMS and CICS programs. Use a slash (/) to separate the options for your run-time processor from those for your program. v For PL/I and Fortran, run-time processor parameters must appear on the left of the slash, and the application parameters must appear on the right.
run-time processor parameters / application parameters
v For COBOL, reverse this order. Run-time processor parameters must appear on the right of the slash, and the application parameters must appear on the left.
v For assembler and C, there is no supported run-time environment, and you need not use a slash to pass parameters to the application program.
11 SYSIN DATA SET Lets you specify the name of a SYSIN (or in Fortran, FT05F001) data set for your application program, if it needs one. If you do not enclose the data set name in apostrophes, a standard TSO prefix (user ID) and suffix is added to it. The default for this field is TERM. If you are preparing an IMS or CICS program, you must leave this field blank; you cannot use DB2I to run IMS and CICS programs.
12 SYSPRINT DS Lets you specify the name of a SYSPRINT (or in Fortran, FT06F001) data set for your application program, if it needs one. If you do not enclose the data set name in apostrophes, a standard TSO prefix (user ID) and suffix is added to it. The default for this field is TERM. If you are preparing an IMS or CICS program, you must leave this field blank; you cannot use DB2I to run IMS and CICS programs.
Your application could need other data sets besides SYSIN and SYSPRINT. If so, remember to catalog and allocate them before you run your program.
When you press ENTER after entering values in this panel, DB2 compiles and link-edits the application. If you specified in the DB2 Program Preparation panel that you want to run the application, DB2 also runs the application.
Enter the name of the program you want to run:
 1 DATA SET NAME ===>
 2 PASSWORD..... ===>            (Required if data set is password protected)

Enter the following as desired:
 3 PARAMETERS .. ===>
 4 PLAN NAME ... ===>            (Required if different from program name)
 5 WHERE TO RUN  ===>            (FOREGROUND, BACKGROUND, or EDITJCL)
This panel lets you run existing application programs. 1 DATA SET NAME Lets you specify the name of the partitioned data set that contains the load module. If the module is in a data set that the operating system can find, you can specify the member name only. There is no default. If you do not enclose the name in apostrophes, a standard TSO prefix (user ID) and suffix (.LOAD) is added. 2 PASSWORD Lets you specify the data set password if needed. The RUN processor does not check whether you need a password. If you do not enter a required password, your program does not run. 3 PARAMETERS Lets you specify a list of parameters you want to pass either to your host language run-time processor, or to your application. You should separate items in the list with commas, blanks, or both. You can leave this field blank. Use a slash (/) to separate the options for your run-time processor from those for your program. v For PL/I and Fortran, run-time processor parameters must appear on the left of the slash, and the application parameters must appear on the right.
run-time processor parameters / application parameters
v For COBOL, reverse this order. Run-time processor parameters must appear on the right of the slash, and the application parameters must appear on the left. v For assembler and C, there is no supported run-time environment, and you need not use the slash to pass parameters to the application program.
4 PLAN NAME Lets you specify the name of the plan to which the program is bound. The default is the member name of the program.
5 WHERE TO RUN Lets you indicate whether to run in the foreground or background. You can also specify EDITJCL, in which case you can edit the job control statement before you run the program. Use:
FOREGROUND to immediately run the program in the foreground with the specified values.
BACKGROUND to create a file containing a DSNH CLIST and immediately submit it, using the JOB control statement from either DB2I Defaults Panel 2 or your site's SUBMIT exit. The program runs in the background.
EDITJCL to create and open a file containing a DSNH CLIST in edit mode. You can then submit the CLIST or save it. The program runs in the background.
Running Command Processors: To run a command processor (CP), use the following commands from the TSO ready prompt or as a TSO TMP:
DSN SYSTEM (DB2-subsystem-name)
RUN CP PLAN (plan-name)
The RUN subcommand prompts you for more input. To end the DSN processor, use the END command.
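The same DSN session can also run an application program load module directly. For example, the following sketch (the program, plan, and library names are hypothetical) starts the DSN processor, runs the program, and ends the session:

  DSN SYSTEM (DSN)
  RUN PROGRAM (SAMPLEPG) PLAN (SAMPLEPG) LIBRARY ('prefix.RUNLIB.LOAD')
  END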
CICS Before you run an application, ensure that the corresponding entries in the SNT and RACF control areas authorize your application to run. The system administrator is responsible for these functions; see Part 3 (Volume 1) of DB2 Administration Guide for more information on the functions. In addition, ensure that the program and its transaction code are defined in the CICS CSD.
1. List the data your application accesses and describe how it accesses each data item. For example, suppose you are testing an application that accesses the DSN8810.EMP, DSN8810.DEPT, and DSN8810.PROJ tables. You might record the information about the data as shown in Table 74.
Table 74. Description of the application's data

Table or view name   Insert rows?   Delete rows?   Column name   Data type      Update access?
DSN8810.EMP          No             No             EMPNO         CHAR(6)        No
                                                   LASTNAME      VARCHAR(15)    No
                                                   WORKDEPT      CHAR(3)        Yes
                                                   PHONENO       CHAR(4)        Yes
                                                   JOB           DECIMAL(3)     Yes
DSN8810.DEPT         No             No             DEPTNO        CHAR(3)        No
                                                   MGRNO         CHAR(6)        No
DSN8810.PROJ         Yes            Yes            PROJNO        CHAR(6)        No
                                                   DEPTNO        CHAR(3)        Yes
                                                   RESPEMP       CHAR(6)        Yes
                                                   PRSTAFF       DECIMAL(5,2)   Yes
                                                   PRSTDATE      DECIMAL(6)     Yes
                                                   PRENDATE      DECIMAL(6)     Yes
2. Determine the test tables and views you need to test your application. Create a test table for a table on your list when either:
v The application modifies data in the table, or
v You need to create a view based on a test table because your application modifies the view's data.
To continue the example, create these test tables:
v TEST.EMP, with the following format:
EMPNO . . . LASTNAME . . . WORKDEPT . . . PHONENO . . . JOB . . .
v TEST.PROJ, with the same columns and format as DSN8810.PROJ, because the application inserts rows into the DSN8810.PROJ table. To support the example, create a test view of the DSN8810.DEPT table. v TEST.DEPT view, with the following format:
DEPTNO . . . MGRNO . . .
Because the application does not change any data in the DSN8810.DEPT table, you can base the view on the table itself (rather than on a test table). However, a safer approach is to have a complete set of test tables and to test the program thoroughly using only test data.
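The following SQL is only a sketch of how you might create the test objects just described. The column definitions follow Table 74; the NOT NULL clauses and any database or table space placement are illustrative and depend on your site's standards:

  CREATE TABLE TEST.EMP
    (EMPNO     CHAR(6)      NOT NULL,
     LASTNAME  VARCHAR(15)  NOT NULL,
     WORKDEPT  CHAR(3),
     PHONENO   CHAR(4),
     JOB       DECIMAL(3));

  CREATE TABLE TEST.PROJ LIKE DSN8810.PROJ;

  CREATE VIEW TEST.DEPT (DEPTNO, MGRNO) AS
    SELECT DEPTNO, MGRNO
    FROM DSN8810.DEPT;

CREATE TABLE ... LIKE copies the column definitions of the named table, which is convenient when a test table must match the original exactly.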
Obtaining authorization
Before you can create a table, you need to be authorized to create tables and to use the table space in which the table is to reside. You must also have authority to bind and run programs you want to test. Your DBA can grant you the authorization needed to create and access tables and to bind and run programs. If you intend to use existing tables and views (either directly or as the basis for a view), you need privileges to access those tables and views. Your DBA can grant those privileges. To create a view, you must have authorization for each table and view on which you base the view. You then have the same privileges over the view that you have over the tables and views on which you based the view. Before trying the examples, have your DBA grant you the privileges to create new tables and views and to access existing tables. Obtain the names of tables and views you are authorized to access (as well as the privileges you have for each table) from your DBA. See Chapter 2, Working with tables and modifying data, on page 19 for more information about creating tables and views.
For details about each CREATE statement, see DB2 SQL Reference.
v INSERT ... SELECT (an SQL statement) obtains data from an existing table (based on a SELECT clause) and puts it into the table identified with the INSERT statement (see the example after this list). For information about this technique, see Inserting rows into a table from another table on page 29.
v The LOAD utility obtains data from a sequential file (a non-DB2 file), formats it for a table, and puts it into a table. For more details about the LOAD utility, see DB2 Utility Guide and Reference.
v The DB2 sample UNLOAD program (DSNTIAUL) can unload data from a table or view and build control statements for the LOAD utility. See Appendix C, Running the productivity-aid sample programs, on page 1019 for more information about the sample UNLOAD program.
v The UNLOAD utility can unload data from a table and build control statements for the LOAD utility. See Part 2 of DB2 Utility Guide and Reference for more information about the UNLOAD utility.
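For example, the following INSERT ... SELECT statement (the WHERE clause is illustrative) copies a subset of the sample employee table into the test table created earlier:

  INSERT INTO TEST.EMP (EMPNO, LASTNAME, WORKDEPT, PHONENO, JOB)
    SELECT EMPNO, LASTNAME, WORKDEPT, PHONENO, JOB
    FROM DSN8810.EMP
    WHERE WORKDEPT = 'D11';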
v The abend code and any error messages When your program encounters an error that does not result in an abend, it can pass all the required error information to a standard error routine. Online programs might also send an error message to the terminal.
For more information about the TEST command, see z/OS TSO/E Command Reference. ISPF Dialog Test is another option to help you in the task of debugging.
An interactive program also can send a message to the master terminal operator giving information about the program's termination. To do that, the program places the logical terminal name of the master terminal in an express PCB and issues one or more ISRT calls. Some sites run a BMP at the end of the day to list all the errors that occurred during the day. If your location does this, you can send a message using an express PCB that has its destination set for that BMP.
Batch Terminal Simulator (BTS): The Batch Terminal Simulator (BTS) allows you to test IMS application programs. BTS traces application program DL/I calls and SQL statements, and simulates data communication functions. It can make a TSO terminal appear as an IMS terminal to the terminal operator, allowing the end user to interact with the application as though it were online. The user can use any application program under the user's control to access any database (whether DL/I or DB2) under the user's control. Access to DB2 databases requires BTS to operate in batch BMP or TSO BMP mode.
Using CICS facilities, you can have a printed error record; you can also print the SQLCA (and SQLDA) contents.
Journals. For statistical or monitoring purposes, facilities can create entries in special data sets called journals. The system log is a journal. Recovery. When an abend occurs, CICS restores certain resources to their original state so that the operator can easily resubmit a transaction for restart. You can use the SYNCPOINT command to subdivide a program so that you only need to resubmit the uncompleted part of a transaction. For more details about each of these topics, see CICS Transaction Server for z/OS Application Programming Reference.
[Example EDF display before an SQL statement executes; PF key options include ENTER: CONTINUE, PF4: SUPPRESS DISPLAYS, PF7: SCROLL BACK, and PF10: PREVIOUS DISPLAY]
v EXEC SQL statement type
This is the type of SQL statement to execute. The SQL statement can be any valid SQL statement, such as COMMIT, DROP TABLE, EXPLAIN, FETCH, or OPEN.
v DBRM=dbrm name
The name of the database request module (DBRM) that is currently processing. The DBRM, created by the DB2 precompiler, contains information about an SQL statement.
v STMT=statement number
This is the DB2 precompiler-generated statement number. The source and error message listings from the precompiler use this statement number, and you can use it to determine which statement is being processed. This number is a source line counter that includes host language statements. A statement number greater than 32,767 displays as 0.
v SECT=section number
The section number of the plan that the SQL statement uses.
SQL statements containing input host variables: The IVAR (input host variables) section and its attendant fields appear only when the executing statement contains input host variables. The host variables section includes the variables from predicates, the values used for inserting or updating, and the text of dynamic SQL statements being prepared. The address of the input variable is AT nnnnnnnn.
Additional host variable information:
v TYPE=data type
Specifies the data type for this host variable. The basic data types include character string, graphic string, binary integer, floating-point, decimal, date, time, and timestamp. For additional information, see Data types on page 4.
v LEN=length
The length of the host variable.
v IND=indicator variable status number
Represents the indicator variable associated with this particular host variable. A value of zero indicates that no indicator variable exists. If the value for the selected column is null, DB2 puts a negative value in the indicator variable for this host variable. For additional information, see Using indicator variables with host variables on page 83.
v DATA=host variable data
The data, displayed in hexadecimal format, associated with this host variable. If the data exceeds what can display on a single line, three periods (...) appear at the far right to indicate that more data is present.
EDF after execution: Figure 172 on page 565 shows an example of the first EDF screen that is displayed after an SQL statement executes. The names of the key information fields on this panel are in boldface.
  TRANSACTION: XC05   PROGRAM: TESTC05   TASK NUMBER: 0000698   DISPLAY: 00
  STATUS: COMMAND EXECUTION COMPLETE        CALL TO RESOURCE MANAGER DSNCSQL
  EXEC SQL FETCH             P.AUTH=SYSADM  , S.AUTH=
  PLAN=TESTC05, DBRM=TESTC05, STMT=00346, SECT=00001
  SQL COMMUNICATION AREA:
    SQLCABC      = 136                                   AT X'03C92789'
    SQLCODE      = 000                                   AT X'03C9278D'
    SQLERRML     = 000                                   AT X'03C92791'
    SQLERRMC     =                                       AT X'03C92793'
    SQLERRP      = DSN                                   AT X'03C927D9'
    SQLERRD(1-6) = 000, 000, 00000, -1, 00000, 000       AT X'03C927E1'
    SQLWARN(0-A) = _ _ _ _ _ _ _ _ _ _ _                 AT X'03C927F9'
    SQLSTATE     = 00000                                 AT X'03C92804'
  + OVAR 001: TYPE=INTEGER, LEN=00004, IND=000           AT X'03C920A0'
        DATA=X'00000001'
  OFFSET:X'001D14'   LINE:UNKNOWN   EIBFN=X'1802'
  ENTER: CONTINUE
  PF1 : UNDEFINED            PF2 : UNDEFINED           PF3 : END EDF SESSION
  PF4 : SUPPRESS DISPLAYS    PF5 : WORKING STORAGE     PF6 : USER DISPLAY
  PF7 : SCROLL BACK          PF8 : SCROLL FORWARD      PF9 : STOP CONDITIONS
  PF10: PREVIOUS DISPLAY     PF11: UNDEFINED           PF12: ABEND USER TASK
The DB2 SQL information in this screen is as follows:
v P.AUTH=primary authorization ID
  The primary DB2 authorization ID.
v S.AUTH=secondary authorization ID
  If the RACF list of group options is not active, DB2 uses the connected group name that the CICS attachment facility supplies as the secondary authorization ID. If the RACF list of group options is active, DB2 ignores the connected group name that the CICS attachment facility supplies, but the value appears in the DB2 list of secondary authorization IDs.
v PLAN=plan name
  The name of the plan that is currently running. The PLAN represents the control structure that is produced during the bind process and used by DB2 to process SQL statements encountered while the application is running.
v SQL Communication Area (SQLCA)
  The SQLCA contains information about errors, if any occur. The information is available after returning from DB2. DB2 uses the SQLCA to give an application program information about the executing SQL statements.
Plus signs (+) on the left of the screen indicate that you can see additional EDF output by using PF keys to scroll the screen forward or back. The OVAR (output host variables) section and its attendant fields appear only when the executing statement returns output host variables.
Figure 173 on page 566 contains the rest of the EDF output for our example.
  TRANSACTION: XC05   PROGRAM: TESTC05   TASK NUMBER: 0000698   DISPLAY: 00
  STATUS: COMMAND EXECUTION COMPLETE        CALL TO RESOURCE MANAGER DSNCSQL
  + OVAR 002: TYPE=CHAR, LEN=00008, IND=000              AT X'03C920B0'
        DATA=X'C8F3E3E3C1C2D3C5'
    OVAR 003: TYPE=CHAR, LEN=00040, IND=000              AT X'03C920B8'
        DATA=X'C9D5C9E3C9C1D340D3D6C1C440404040404040404040404040404040'...
  OFFSET:X'001D14'   LINE:UNKNOWN   EIBFN=X'1802'
  ENTER: CONTINUE
  PF1 : UNDEFINED            PF3 : END EDF SESSION
  PF4 : SUPPRESS DISPLAYS    PF6 : USER DISPLAY
  PF7 : SCROLL BACK          PF9 : STOP CONDITIONS
  PF10: PREVIOUS DISPLAY     PF12: ABEND USER TASK
The attachment facility automatically displays SQL information while in EDF mode. (You can start EDF as outlined in the appropriate CICS application programmer's reference manual.) If this is not the case, contact your installer and see Part 2 of DB2 Installation Guide.
v Your JCL.
  If you are using IMS, have you included the DL/I option statement in the correct format?
  Have you included the region size parameter in the EXEC statement? Does it specify a region size large enough for the storage required for the DB2 interface, the TSO, IMS, or CICS system, and your program?
  Have you included the names of all data sets (DB2 and non-DB2) that the program requires?
v Your program.
  You can also use dumps to help localize problems in your program. For example, one of the more common error situations occurs when your program is running and you receive a message that it abended. In this instance, your test procedure might be to capture a TSO dump. To do so, you must allocate a SYSUDUMP or SYSABEND dump data set before calling DB2. When you press the ENTER key (after the error message and READY message), the system requests a dump. You then need to FREE the dump data set.
MESSAGES

DSNH104I E DSNHPARS LINE 32 COL 26 ILLEGAL SYMBOL "X" VALID SYMBOLS ARE:, FROM (1)
SELECT VALUE INTO HIPPO X; (2)

DB2 SQL PRECOMPILER STATISTICS
SOURCE STATISTICS (3)
   SOURCE LINES READ: 36
   NUMBER OF SYMBOLS: 15
   SYMBOL TABLE BYTES EXCLUDING ATTRIBUTES: 1848
THERE WERE 1 MESSAGES FOR THIS PROGRAM. (4)
THERE WERE 0 MESSAGES SUPPRESSED BY THE FLAG OPTION. (5)
111664 BYTES OF STORAGE WERE USED BY THE PRECOMPILER. (6)
RETURN CODE IS 8 (7)
Notes for Figure 174:
1. Error message.
2. Source SQL statement.
3. Summary statements of source statistics.
4. Summary statement of the number of errors detected.
5. Summary statement indicating the number of errors detected but not printed. That value might occur if you specify a FLAG option other than I.
6. Storage requirement statement telling you how many bytes of working storage the DB2 precompiler actually used to process your source statements. That value helps you determine the storage allocation requirements for your program.
7. Return code: 0 = success, 4 = warning, 8 = error, 12 = severe error, and 16 = unrecoverable error.
OPTIONS SPECIFIED: HOST(PLI),XREF,SOURCE (1)
OPTIONS USED - SPECIFIED OR DEFAULTED (2)
   APOST          APOSTSQL        CONNECT(2)      DEC(15)
   FLAG(I)        NOGRAPHIC       HOST(PLI)       NOT KATAKANA
   LINECOUNT(60)  MARGINS(2,72)   ONEPASS         OPTIONS
   PERIOD         SOURCE          STDSQL(NO)      SQL(DB2)
   XREF
Notes for Figure 175:
1. This section lists the options that were specified at precompilation time. This list does not appear if one of the precompiler options is NOOPTIONS.
2. This section lists the options that are in effect, including defaults, forced values, and options you specified. The DB2 precompiler overrides or ignores any options you specify that are inappropriate for the host language.
v A listing (Figure 176) of your source statements (only if you specified the SOURCE option).
DB2 SQL PRECOMPILER            TMN5P40:PROCEDURE OPTIONS (MAIN):            PAGE 2

     1
     2
     3
   . . .
  1324-1338                                                   00132400-00133800

       /*************************************************/
       /* GET INFORMATION ABOUT THE PROJECT FROM THE     */
       /* PROJECT TABLE.                                 */
       /*************************************************/
         EXEC SQL SELECT ACTNO, PREQPROJ, PREQACT
                  INTO PROJ_DATA
                  FROM TPREREQ
                  WHERE PROJNO = :PROJ_NO;

       /*************************************************/
       /* PROJECT IS FINISHED. DELETE IT.                */
       /*************************************************/
         EXEC SQL DELETE FROM PROJ
                  WHERE PROJNO = :PROJ_NO;
   . . .
  1523   END;                                                 00152300
Notes for Figure 176: The left column of sequence numbers, which the DB2 precompiler generates, is for use with the symbol cross-reference listing, the precompiler error messages, and the BIND error messages.
The right column of sequence numbers comes from the sequence numbers that are supplied with your source statements.
v A list (Figure 177) of the symbolic names used in SQL statements (this listing appears only if you specify the XREF option).
DB2 SQL PRECOMPILER          SYMBOL CROSS-REFERENCE LISTING          PAGE 29

DATA NAMES         DEFN    REFERENCE
"ACTNO"            ****    FIELD           1328
"PREQACT"          ****    FIELD           1328
"PREQPROJ"         ****    FIELD           1328
"PROJNO"           ****    FIELD           1331 1338
...
PROJ_DATA           495    CHARACTER(35)   1329
PROJ_NO             496    CHARACTER(3)    1331 1338
"TPREREQ"          ****    TABLE           1330 1337
Notes for Figure 177:
DATA NAMES
  Identifies the symbolic names used in source statements. Names enclosed in quotation marks (") or apostrophes (') are names of SQL entities such as tables, columns, and authorization IDs. Other names are host variables.
DEFN
  Is the number of the line that the precompiler generates to define the name. **** means that the object was not defined or the precompiler did not recognize the declarations.
REFERENCE
  Contains two kinds of information: what the source program defines the symbolic name to be, and which lines refer to the symbolic name. If the symbolic name refers to a valid host variable, the list also identifies the data type or STRUCTURE.
v A summary (Figure 178) of the errors detected by the DB2 precompiler and a list of the error messages generated by the precompiler.
DB2 SQL PRECOMPILER STATISTICS

SOURCE STATISTICS
   SOURCE LINES READ: 1523 (1)
   NUMBER OF SYMBOLS: 128 (2)
   SYMBOL TABLE BYTES EXCLUDING ATTRIBUTES: 6432 (3)
THERE WERE 1 MESSAGES FOR THIS PROGRAM. (4)
THERE WERE 0 MESSAGES SUPPRESSED. (5)
65536 BYTES OF STORAGE WERE USED BY THE PRECOMPILER. (6)
RETURN CODE IS 8. (7)
DSNH104I E LINE 590 COL 64 ILLEGAL SYMBOL: X; VALID SYMBOLS ARE:,FROM (8)
1. Summary statement indicating the number of source lines.
2. Summary statement indicating the number of symbolic names in the symbol table (SQL names and host names).
3. Storage requirement statement indicating the number of bytes for the symbol table.
4. Summary statement indicating the number of messages printed.
5. Summary statement indicating the number of errors detected but not printed. You might get this statement if you specify the option FLAG.
6. Storage requirement statement indicating the number of bytes of working storage actually used by the DB2 precompiler to process your source statements.
7. Return code: 0 = success, 4 = warning, 8 = error, 12 = severe error, and 16 = unrecoverable error.
8. Error messages (this example detects only one error).
In a data sharing environment, DL/I batch supports group attachment. You can specify a group attachment name instead of a subsystem name in the SSN parameter of the DDITV02 data set for the DL/I batch job. See DB2 DL/I batch input on page 576 for information about the SSN parameter and the DDITV02 data set.
Authorization
When the batch application tries to run the first SQL statement, DB2 checks whether the authorization ID has the EXECUTE privilege for the plan. DB2 uses the same ID for later authorization checks, and the ID also identifies records from the accounting and performance traces.
The primary authorization ID is the value of the USER parameter on the job statement, if that is available. It is the TSO logon name if the job is submitted. Otherwise, it is the IMS PSB name. In that case, however, the ID must not begin with the string 'SYSADM' because this string causes the job to abend. The batch job is rejected if you try to change the authorization ID in an exit routine.
Address spaces
A DL/I batch region is independent of both the IMS control region and the CICS address space. The DL/I batch region loads the DL/I code into the application region along with the application program.
Commits
Commit work in IMS batch applications frequently so that you do not hold resources for an extended time. If you need coordinated commits for recovery, see Part 4 (Volume 1) of DB2 Administration Guide.
Checkpoint calls
Write your program with SQL statements and DL/I calls, and use checkpoint calls. All checkpoints issued by a batch application program must be unique. The frequency of checkpoints depends on the application design. At a checkpoint, DL/I positioning is lost, DB2 cursors are closed (with the possible exception of cursors defined as WITH HOLD), commit duration locks are freed (again with some exceptions), and database changes are considered permanent to both IMS and DB2.
You can specify values for the following parameters only in a DDITV02 data set:
CONNECTION_NAME,PLAN,PROG
If you use the DDITV02 data set and specify a subsystem member, the values in the DDITV02 DD statement override the values in the specified subsystem member. If you provide neither, DB2 abends the application program with system abend code X'04E' and a unique reason code in register 15.
DDITV02 is the DD name for a data set that has DCB options of LRECL=80 and RECFM=F or FB.
A subsystem member is a member in the IMS procedure library. Its name is derived by concatenating the value of the SSM parameter to the value of the IMSID parameter. You specify the SSM parameter and the IMSID parameter when you invoke the DLIBATCH procedure, which starts the DL/I batch processing environment.
The meanings of the input parameters are:

SSN
   The name of the DB2 subsystem is required. You must specify a name in order to make a connection to DB2. The SSN value can be from one to four characters long.
   If the value in the SSN parameter is the name of an active subsystem in the data sharing group, the application attaches to that subsystem. If the SSN parameter value is not the name of an active subsystem, but the value is a group attachment name, the application attaches to an active DB2 subsystem in the data sharing group. See Chapter 2 of DB2 Data Sharing: Planning and Administration for more information about group attachment.
LIT
   DB2 requires a language interface token to route SQL statements when operating in the online IMS environment. Because a batch application program can only connect to one DB2 system, DB2 does not use the LIT value. It is recommended that you specify the value as SYS1; however, you can omit it (enter SSN,,ESMT). The LIT value can be from zero to four characters long.
ESMT
   The name of the DB2 initialization module, DSNMIN10, is required. The ESMT value must be eight characters long.
RTT
   Specifying the resource translation table is optional. The RTT can be from zero to eight characters long.
REO
   The region error option determines what to do if DB2 is not operational or the plan is not available. There are three options:
   v R, the default, results in returning an SQL return code to the application program. The most common SQLCODE issued in this case is -923 (SQLSTATE 57015).
   v Q results in an abend in the batch environment; however, in the online environment, it places the input message in the queue again.
   v A results in an abend in both the batch environment and the online environment.
   If the application program uses the XRST call, and if coordinated recovery is required on the XRST call, then REO is ignored. In that case, the application program terminates abnormally if DB2 is not operational. The REO value can be from zero to one character long.
CRC
   Because DB2 commands are not supported in the DL/I batch environment, the command recognition character is not used at this time. The CRC value can be from zero to one character long.
CONNECTION_NAME
   The connection name is optional. It represents the name of the job step that coordinates DB2 activities. If you do not specify this option, the connection name defaults are:

   Type of Application    Default Connection Name
   Batch job              Job name
   Started task           Started task name
   TSO user               TSO authorization ID

   If a batch update job fails, you must use a separate job to restart the batch job. The connection name used in the restart job must be the same as the name used in the batch job that failed. Alternatively, if the default connection name is used, the restart job must have the same job name as the batch update job that failed.
   DB2 requires unique connection names. If two applications try to connect with the same connection name, the second application program fails to connect to DB2. The CONNECTION_NAME value can be from 1 to 8 characters long.
PLAN
   The DB2 plan name is optional. If you do not specify the plan name, then the application program module name is checked against the optional resource translation table. If there is a match in the resource translation table, the translated name is used as the DB2 plan name. If there is no match, then the application program module name is used as the plan name. The PLAN value can be from 0 to 8 characters long.
PROG
   The application program name is required. It identifies the application program that is to be loaded and to receive control. The PROG value can be from 1 to 8 characters long.

Example: An example of the fields in the record is shown below:
DSN,SYS1,DSNMIN10,,R,-,BATCH001,DB2PLAN,PROGA
Precompiling
When you add SQL statements to an application program, you must precompile the application program and bind the resulting DBRM into a plan or package, as described in Chapter 21, Preparing an application program to run, on page 471.
Binding
The owner of the plan or package must have all the privileges that are required to execute the SQL statements embedded in it. Before a batch program can issue SQL statements, a DB2 plan must exist.
You can specify the plan name to DB2 in one of the following ways:
v In the DDITV02 input data set.
v In the subsystem member specification.
v By default; the plan name is then the application load module name specified in DDITV02.
DB2 passes the plan name to the IMS attach package. If you do not specify a plan name in DDITV02, and a resource translation table (RTT) does not exist or the name is not in the RTT, then DB2 uses the passed name as the plan name. If the name exists in the RTT, then the name translates to the plan specified for the RTT.
Recommendation: Give the DB2 plan the same name as that of the application load module, which is the IMS attach default. The plan name must be the same as the program name.
Link-editing
DB2 has language interface routines for each unique supported environment. DB2 requires the IMS language interface routine for DL/I batch. You must also link-edit DFSLI000 with the application program.
//G.STEPCAT  DD DSN=IMSCAT,DISP=SHR
//G.DDOTV02  DD DSN=&TEMP1,DISP=(NEW,PASS,DELETE),
//              SPACE=(TRK,(1,1),RLSE),UNIT=SYSDA,
//              DCB=(RECFM=VB,BLKSIZE=4096,LRECL=4092)
//G.DDITV02  DD *
SSDQ,SYS1,DSNMIN10,,A,-,BATCH001,,IVP8CP22
/*
//***************************************************************
//***    ALWAYS ATTEMPT TO PRINT OUT THE DDOTV02 DATA SET      ***
//***************************************************************
//STEP3    EXEC PGM=DFSERA10,COND=EVEN
//STEPLIB  DD DSN=IMS.RESLIB,DISP=SHR
//SYSPRINT DD SYSOUT=A
//SYSUT1   DD DSNAME=&TEMP1,DISP=(OLD,DELETE)
//SYSIN    DD *
CONTROL CNTL K=000,H=8000
OPTION PRINT
/*
//
//*  other program libraries
//*  G.IEFRDER data set required
//G.STEPCAT  DD DSN=IMSCAT,DISP=SHR
//*  G.IMSLOGR data set required
//G.DDOTV02  DD DSN=&TEMP2,DISP=(NEW,PASS,DELETE),
//              SPACE=(TRK,(1,1),RLSE),UNIT=SYSDA,
//              DCB=(RECFM=VB,BLKSIZE=4096,LRECL=4092)
//G.DDITV02  DD *
DB2X,SYS1,DSNMIN10,,A,-,BATCH001,,IVP8CP22
/*
//***************************************************************
//***    ALWAYS ATTEMPT TO PRINT OUT THE DDOTV02 DATA SET      ***
//***************************************************************
//STEP8    EXEC PGM=DFSERA10,COND=EVEN
//STEPLIB  DD DSN=IMS.RESLIB,DISP=SHR
//SYSPRINT DD SYSOUT=A
//SYSUT1   DD DSNAME=&TEMP2,DISP=(OLD,DELETE)
//SYSIN    DD *
CONTROL CNTL K=000,H=8000
OPTION PRINT
/*
//
Describing tables with LOB and distinct type columns . . . . . Executing a varying-list SELECT statement dynamically . . . . . . Open the cursor . . . . . . . . . . . . . . . . . . Fetch rows from the result table . . . . . . . . . . . . . Close the cursor . . . . . . . . . . . . . . . . . . Executing arbitrary statements with parameter markers . . . . . . When the number and types of parameters are known . . . . . When the number and types of parameters are not known . . . . Using the SQLDA with EXECUTE or OPEN . . . . . . . . . How bind options REOPT(ALWAYS) and REOPT(ONCE) affect dynamic Using dynamic SQL in COBOL . . . . . . . . . . . . . . .
Chapter 25. Using stored procedures for client/server processing . . . . . . . . . . . . . . . 629 Introduction to stored procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . 629 An example of a simple stored procedure . . . . . . . . . . . . . . . . . . . . . . . . . 630 Setting up the stored procedures environment . . . . . . . . . . . . . . . . . . . . . . . 634 Defining your stored procedure to DB2 . . . . . . . . . . . . . . . . . . . . . . . . . 635 Passing environment information to the stored procedure . . . . . . . . . . . . . . . . . 636 Example of a stored procedure definition . . . . . . . . . . . . . . . . . . . . . . . 638 Refreshing the stored procedures environment (for system administrators) . . . . . . . . . . . . . 639 Moving stored procedures to a WLM-established environment (for system administrators) . . . . . . . 640 Writing and preparing an external stored procedure . . . . . . . . . . . . . . . . . . . . . 641 Language requirements for the stored procedure and its caller . . . . . . . . . . . . . . . . . 641 Calling other programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642 Using reentrant code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642 Writing a stored procedure as a main program or subprogram . . . . . . . . . . . . . . . . . 643 Restrictions on a stored procedure . . . . . . . . . . . . . . . . . . . . . . . . . . 646 Using COMMIT and ROLLBACK statements in a stored procedure . . . . . . . . . . . . . . . 646 Using special registers in a stored procedure . . . . . . . . . . . . . . . . . . . . . . . 647 Accessing other sites in a stored procedure . . . . . . . . . . . . . . . . . . . . . . . 649 Writing a stored procedure to access IMS databases . . . . . . . . . . . . . . . . . . . . 650 Writing a stored procedure to return result sets to a DRDA client . . . . . . . . . . . . . . . . 650 Preparing a stored procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652 Binding the stored procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653 Writing a REXX stored procedure . . . . . . . . . . . . . . . . . . . . . . . . . . 654 Writing and preparing an SQL procedure . . . . . . . . . . . . . . . . . . . . . . . . . 657 Comparison of an SQL procedure and an external procedure . . . . . . . . . . . . . . . . . 658 Statements that you can include in a procedure body . . . . . . . . . . . . . . . . . . . . 659 Declaring and using variables, parameters, and conditions in an SQL procedure . . . . . . . . . . . 661 # Parameter style for an SQL procedure . . . . . . . . . . . . . . . . . . . . . . . . . 662 Terminating statements in an SQL procedure . . . . . . . . . . . . . . . . . . . . . . . 662 Handling SQL conditions in an SQL procedure . . . . . . . . . . . . . . . . . . . . . . 663 Using handlers in an SQL procedure . . . . . . . . . . . . . . . . . . . . . . . . 663 Using the RETURN statement for the procedure status . . . . . . . . . . . . . . . . . . 665 | Using SIGNAL or RESIGNAL to raise a condition . . . . . . . . . . . . . . . . . . . . 665 | Forcing errors in an SQL procedure when called by a trigger . . . . . . . . . . . . . . . . 667 Examples of SQL procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667 Preparing an SQL procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669 Using the DB2 UDB for z/OS SQL procedure processor to prepare an SQL procedure . . . . . . . . 670 Using JCL to prepare an SQL procedure . . . . . . . . . . . . . . . . . . . . . 
. . 680 Sample programs to help you prepare and run SQL procedures . . . . . . . . . . . . . . . 680 Writing and preparing an application to use stored procedures . . . . . . . . . . . . . . . . . . 681 Forms of the CALL statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682 Authorization for executing stored procedures . . . . . . . . . . . . . . . . . . . . . . 683 Linkage conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684 Example of stored procedure linkage convention GENERAL . . . . . . . . . . . . . . . . 686 Example of stored procedure linkage convention GENERAL WITH NULLS . . . . . . . . . . . 689 Example of stored procedure linkage convention SQL . . . . . . . . . . . . . . . . . . . 694 | Special considerations for C . . . . . . . . . . . . . . . . . . . . . . . . . . . 702 Special considerations for PL/I . . . . . . . . . . . . . . . . . . . . . . . . . . 703 Using indicator variables to speed processing . . . . . . . . . . . . . . . . . . . . . . 703
Declaring data types for passed parameters . . . . . . . . . . . . . . . Writing a DB2 UDB for z/OS client program or SQL procedure to receive result sets . Accessing transition tables in a stored procedure . . . . . . . . . . . . . Calling a stored procedure from a REXX procedure . . . . . . . . . . . . Preparing a client program . . . . . . . . . . . . . . . . . . . . Running a stored procedure . . . . . . . . . . . . . . . . . . . . . How DB2 determines which version of a stored procedure to run . . . . . . . . Using a single application program to call different versions of a stored procedure . . Running multiple stored procedures concurrently . . . . . . . . . . . . . # Multiple instances of a stored procedure . . . . . . . . . . . . . . . . Accessing non-DB2 resources . . . . . . . . . . . . . . . . . . . . Testing a stored procedure . . . . . . . . . . . . . . . . . . . . . . Debugging the stored procedure as a stand-alone program on a workstation . . . . Debugging with the Debug Tool and IBM VisualAge COBOL . . . . . . . . . Debugging an SQL procedure or C language stored procedure with the Debug Tool and Tools for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . Debugging with Debug Tool for z/OS interactively and in batch mode . . . . . . | Using the MSGFILE run-time option . . . . . . . . . . . . . . . . . Using driver applications . . . . . . . . . . . . . . . . . . . . . Using SQL INSERT statements . . . . . . . . . . . . . . . . . . .
Chapter 26. Tuning your queries . . . . . . . . . . . . . . . . . . . . . . . . . . . 731 General tips and questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731 Is the query coded as simply as possible? . . . . . . . . . . . . . . . . . . . . . . . . 731 Are all predicates coded correctly? . . . . . . . . . . . . . . . . . . . . . . . . . . 731 Are there subqueries in your query? . . . . . . . . . . . . . . . . . . . . . . . . . . 732 Does your query involve aggregate functions? . . . . . . . . . . . . . . . . . . . . . . 733 Do you have an input variable in the predicate of an SQL query? . . . . . . . . . . . . . . . . 734 Do you have a problem with column correlation? . . . . . . . . . . . . . . . . . . . . . 734 Can your query be written to use a noncolumn expression? . . . . . . . . . . . . . . . . . . 734 Can materialized query tables help your query performance? . . . . . . . . . . . . . . . . . 734 Does the query contain encrypted data? . . . . . . . . . . . . . . . . . . . . . . . . 735 Writing efficient predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735 Properties of predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735 Predicate types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736 Indexable and nonindexable predicates . . . . . . . . . . . . . . . . . . . . . . . . 737 Stage 1 and stage 2 predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . 737 Boolean term (BT) predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . 738 Predicates in the ON clause . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738 General rules about predicate evaluation . . . . . . . . . . . . . . . . . . . . . . . . . 739 Order of evaluating predicates. . . . . . . . . . . . . . . . . . . . . . . . . . . . 739 Summary of predicate processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 740 Examples of predicate properties . . . . . . . . . . . . . . . . . . . . . . . . . . . 745 Predicate filter factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746 Default filter factors for simple predicates . . . . . . . . . . . . . . . . . . . . . . . 747 Filter factors for uniform distributions . . . . . . . . . . . . . . . . . . . . . . . . 747 Interpolation formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748 Filter factors for all distributions . . . . . . . . . . . . . . . . . . . . . . . . . . 749 Using multiple filter factors to determine the cost of a query . . . . . . . . . . . . . . . . 751 Column correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752 How to detect column correlation . . . . . . . . . . . . . . . . . . . . . . . . . 752 Impacts of column correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . 753 What to do about column correlation . . . . . . . . . . . . . . . . . . . . . . . . 755 DB2 predicate manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755 Predicate modifications for IN-list predicates . . . . . . . . . . . . . . . . . . . . . . 756 When DB2 simplifies join operations . . . . . . . . . . . . . . . . . . . . . . . . 756 Predicates generated through transitive closure . . . . . . . . . . . . . . . . . . . . . 757 Predicates with encrypted data . . . . . . . . . . . . . . . . . . . . . . . . . . . 759 Using host variables efficiently . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759 Changing the access path at run time . . . . . . . . . . . . . . . . . . . . . . . . . 
760 The REOPT(ALWAYS) bind option . . . . . . . . . . . . . . . . . . . . . . . . . 760
Part 6. Additional programming techniques
The REOPT(ONCE) bind option . . . . . . . . . . . . . . . . . . . The REOPT(NONE) bind option . . . . . . . . . . . . . . . . . . . Rewriting queries to influence access path selection . . . . . . . . . . . . . Writing efficient subqueries . . . . . . . . . . . . . . . . . . . . . . . Correlated subqueries . . . . . . . . . . . . . . . . . . . . . . . Noncorrelated subqueries . . . . . . . . . . . . . . . . . . . . . . Single-value subqueries . . . . . . . . . . . . . . . . . . . . . . Multiple-value subqueries . . . . . . . . . . . . . . . . . . . . . Conditions for DB2 to transform a subquery into a join . . . . . . . . . . . . Subquery tuning . . . . . . . . . . . . . . . . . . . . . . . . . Using scrollable cursors efficiently . . . . . . . . . . . . . . . . . . . . | Writing efficient queries on tables with data-partitioned secondary indexes . . . . . . . Special techniques to influence access path selection . . . . . . . . . . . . . . Obtaining information about access paths . . . . . . . . . . . . . . . . . Fetching a limited number of rows: FETCH FIRST n ROWS ONLY . . . . . . . . Minimizing overhead for retrieving few rows: OPTIMIZE FOR n ROWS . . . . . . Favoring index access . . . . . . . . . . . . . . . . . . . . . . . | Using a subsystem parameter to control outer join processing . . . . . . . . . . # Using the CARDINALITY clause to improve the performance of queries with user-defined | references . . . . . . . . . . . . . . . . . . . . . . . . . . . | Reducing the number of matching columns . . . . . . . . . . . . . . . . Creating indexes for efficient star-join processing . . . . . . . . . . . . . . Recommendations for creating indexes for star-join queries . . . . . . . . . . Determining the order of columns in an index for a star schema design . . . . . . Rearranging the order of tables in a FROM clause . . . . . . . . . . . . . . Updating catalog statistics . . . . . . . . . . . . . . . . . . . . . . Using a subsystem parameter . . . . . . . . . . . . . . . . . . . . . Using a subsystem parameter to favor matching index access . . . . . . . . . Using a subsystem parameter to optimize queries with IN-list predicates . . . . . | Chapter 27. Using EXPLAIN to improve SQL performance . . . . . . Obtaining PLAN_TABLE information from EXPLAIN . . . . . . . . . Creating PLAN_TABLE . . . . . . . . . . . . . . . . . . Populating and maintaining a plan table . . . . . . . . . . . . Executing the SQL statement EXPLAIN . . . . . . . . . . . Binding with the option EXPLAIN(YES) . . . . . . . . . . . Maintaining a plan table. . . . . . . . . . . . . . . . . Reordering rows from a plan table . . . . . . . . . . . . . . Retrieving rows for a plan . . . . . . . . . . . . . . . . Retrieving rows for a package . . . . . . . . . . . . . . . Asking questions about data access . . . . . . . . . . . . . . . Is access through an index? (ACCESSTYPE is I, I1, N or MX) . . . . . Is access through more than one index? (ACCESSTYPE=M) . . . . . . How many columns of the index are used in matching? (MATCHCOLS=n) Is the query satisfied using only the index? (INDEXONLY=Y) . . . . . Is direct row access possible? (PRIMARY_ACCESSTYPE = D) . . . . . Which predicates qualify for direct row access? . . . . . . . . . Reverting to ACCESSTYPE . . . . . . . . . . . . . . . . Using direct row access and other access methods . . . . . . . . Example: Coding with row IDs for direct row access . . . . . . . Is a view or nested table expression materialized? . . . . . . . . . Was a scan limited to certain partitions? 
(PAGE_RANGE=Y) . . . . . What kind of prefetching is expected? (PREFETCH = L, S, D, or blank) . . Is data accessed or processed in parallel? (PARALLELISM_MODE is I, C, or Are sorts performed? . . . . . . . . . . . . . . . . . . . Is a subquery transformed into a join? . . . . . . . . . . . . . When are aggregate functions evaluated? (COLUMN_FN_EVAL) . . . . How many index screening columns are used? . . . . . . . . . . Is a complex trigger WHEN clause used? (QBLOCKTYPE=TRIGGR) . . . Interpreting access to a single table . . . . . . . . . . . . . . . Table space scans (ACCESSTYPE=R PREFETCH=S) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X) . . . . . . .
Table space scans of nonsegmented table spaces . . . . . . Table space scans of segmented table spaces . . . . . . . Table space scans of partitioned table spaces . . . . . . . Table space scans and sequential prefetch . . . . . . . . Index access paths . . . . . . . . . . . . . . . . . Matching index scan (MATCHCOLS>0) . . . . . . . . Index screening . . . . . . . . . . . . . . . . . Nonmatching index scan (ACCESSTYPE=I and MATCHCOLS=0) IN-list index scan (ACCESSTYPE=N) . . . . . . . . . Multiple index access (ACCESSTYPE is M, MX, MI, or MU) . . One-fetch access (ACCESSTYPE=I1) . . . . . . . . . . Index-only access (INDEXONLY=Y) . . . . . . . . . . Equal unique index (MATCHCOLS=number of index columns) UPDATE using an index . . . . . . . . . . . . . . Interpreting access to two or more tables (join) . . . . . . . . Definitions and examples of join operations . . . . . . . . Nested loop join (METHOD=1) . . . . . . . . . . . . Method of joining . . . . . . . . . . . . . . . . Performance considerations. . . . . . . . . . . . . When nested loop join is used . . . . . . . . . . . . Merge scan join (METHOD=2) . . . . . . . . . . . . Method of joining . . . . . . . . . . . . . . . . Performance considerations. . . . . . . . . . . . . When merge scan join is used . . . . . . . . . . . . Hybrid join (METHOD=4) . . . . . . . . . . . . . . Method of joining . . . . . . . . . . . . . . . . Possible results from EXPLAIN for hybrid join . . . . . . Performance considerations. . . . . . . . . . . . . When hybrid join is used . . . . . . . . . . . . . Star join (JOIN_TYPE=S) . . . . . . . . . . . . . . Example of a star schema . . . . . . . . . . . . . When star join is used . . . . . . . . . . . . . . Dedicated virtual memory pool for star join operations . . . Interpreting data prefetch . . . . . . . . . . . . . . . Sequential prefetch (PREFETCH=S) . . . . . . . . . . . Dynamic prefetch (PREFETCH=D) . . . . . . . . . . . List prefetch (PREFETCH=L) . . . . . . . . . . . . . The access method . . . . . . . . . . . . . . . When list prefetch is used . . . . . . . . . . . . . Bind time and execution time thresholds . . . . . . . . Sequential detection at execution time . . . . . . . . . . When sequential detection is used . . . . . . . . . . How to tell whether sequential detection was used . . . . . How to tell if sequential detection might be used . . . . . Determining sort activity . . . . . . . . . . . . . . . Sorts of data . . . . . . . . . . . . . . . . . . . Sorts for group by and order by . . . . . . . . . . . Sorts to remove duplicates . . . . . . . . . . . . . Sorts used in join processing . . . . . . . . . . . . Sorts needed for subquery processing . . . . . . . . . Sorts of RIDs . . . . . . . . . . . . . . . . . . The effect of sorts on OPEN CURSOR . . . . . . . . . . Processing for views and nested table expressions . . . . . . . Merge . . . . . . . . . . . . . . . . . . . . . Materialization . . . . . . . . . . . . . . . . . . Two steps of materialization . . . . . . . . . . . . When views or table expressions are materialized . . . . . Using EXPLAIN to determine when materialization occurs . . . Using EXPLAIN to determine UNION activity and query rewrite . Performance of merge versus materialization . . . . . . . . Estimating a statements cost . . . . . . . . . . . . . .
Creating a statement table . . . Populating and maintaining a statement table . . . Retrieving rows from a statement table . . . The implications of cost categories . . .
Chapter 28. Parallel operations and query performance . . . Comparing the methods of parallelism . . . . . . . . . . Enabling parallel processing . . . . . . . . . . . . . When parallelism is not used . . . . . . . . . . . . . Interpreting EXPLAIN output . . . . . . . . . . . . . A method for examining PLAN_TABLE columns for parallelism PLAN_TABLE examples showing parallelism. . . . . . . Tuning parallel processing . . . . . . . . . . . . . . Disabling query parallelism . . . . . . . . . . . . .
Chapter 29. Programming for the Interactive System Productivity Facility . . . . . . . . . . . . 857 Using ISPF and the DSN command processor . . . . . . . . . . . . . . . . . . . . . . . 857 Invoking a single SQL program through ISPF and DSN . . . . . . . . . . . . . . . . . . . . 858 Invoking multiple SQL programs through ISPF and DSN . . . . . . . . . . . . . . . . . . . . 859 Invoking multiple SQL programs through ISPF and CAF . . . . . . . . . . . . . . . . . . . . 859 Chapter 30. Programming for the call attachment CAF capabilities and requirements . . . . . . CAF capabilities . . . . . . . . . . . Task capabilities . . . . . . . . . . Programming language . . . . . . . . Tracing facility . . . . . . . . . . . Program preparation . . . . . . . . . CAF requirements . . . . . . . . . . . Program size . . . . . . . . . . . Use of LOAD . . . . . . . . . . . Using CAF in IMS batch . . . . . . . Run environment . . . . . . . . . . Running DSN applications under CAF . . . How to use CAF . . . . . . . . . . . . Summary of connection functions . . . . . Implicit connections . . . . . . . . . Accessing the CAF language interface . . . . Explicit load of DSNALI . . . . . . . . Link-editing DSNALI . . . . . . . . . General properties of CAF connections . . . . Task termination . . . . . . . . . . DB2 abend . . . . . . . . . . . . CAF function descriptions . . . . . . . . Register conventions . . . . . . . . . Call DSNALI parameter list . . . . . . CONNECT: Syntax and usage . . . . . . . OPEN: Syntax and usage . . . . . . . . CLOSE: Syntax and usage . . . . . . . . DISCONNECT: Syntax and usage . . . . . TRANSLATE: Syntax and usage . . . . . . Summary of CAF behavior . . . . . . . . Sample scenarios . . . . . . . . . . . . A single task with implicit connections . . . . A single task with explicit connections . . . . Several tasks . . . . . . . . . . . . Exit routines from your application . . . . . . Attention exit routines . . . . . . . . . Recovery routines . . . . . . . . . . . Error messages and dsntrace . . . . . . . . CAF return codes and reason codes . . . . . . facility . . . . . . . . . . . . . . . . . . 861 . . . . . . . . . . . . . . . . . . . . . 861 . . . . . . . . . . . . . . . . . . . . . 861 . . . . . . . . . . . . . . . . . . . . . 862 . . . . . . . . . . . . . . . . . . . . . 862 . . . . . . . . . . . . . . . . . . . . . 862 . . . . . . . . . . . . . . . . . . . . . 862 . . . . . . . . . . . . . . . . . . . . . 863 . . . . . . . . . . . . . . . . . . . . . 863 . . . . . . . . . . . . . . . . . . . . . 863 . . . . . . . . . . . . . . . . . . . . . 863 . . . . . . . . . . . . . . . . . . . . . 863 . . . . . . . . . . . . . . . . . . . . . 863 . . . . . . . . . . . . . . . . . . . . . 864 . . . . . . . . . . . . . . . . . . . . . 866 . . . . . . . . . . . . . . . . . . . . . 866 . . . . . . . . . . . . . . . . . . . . . 867 . . . . . . . . . . . . . . . . . . . . . 867 . . . . . . . . . . . . . . . . . . . . . 868 . . . . . . . . . . . . . . . . . . . . . 868 . . . . . . . . . . . . . . . . . . . . . 868 . . . . . . . . . . . . . . . . . . . . . 869 . . . . . . . . . . . . . . . . . . . . . 869 . . . . . . . . . . . . . . . . . . . . . 869 . . . . . . . . . . . . . . . . . . . . . 869 . . . . . . . . . . . . . . . . . . . . . 871 . . . . . . . . . . . . . . . . . . . . . 875 . . . . . . . . . . . . . . . . . . . . . 877 . . . . . . . . . . . . . . . . . . . . . 878 . . . . . . . . . . . . . . . . . . . . . 880 . . . . . . . . . . . . . . . . . . . . . 881 . . . . . . . . . . . . . 
. . . . . . . . 882 . . . . . . . . . . . . . . . . . . . . . 882 . . . . . . . . . . . . . . . . . . . . . 883 . . . . . . . . . . . . . . . . . . . . . 883 . . . . . . . . . . . . . . . . . . . . . 883 . . . . . . . . . . . . . . . . . . . . . 883 . . . . . . . . . . . . . . . . . . . . . 884 . . . . . . . . . . . . . . . . . . . . . 884 . . . . . . . . . . . . . . . . . . . . . 884
Program examples for CAF . . . . . . . . . Sample JCL for using CAF . . . . . . . . Sample assembler code for using CAF . . . . Loading and deleting the CAF language interface Connecting to DB2 for CAF . . . . . . . Checking return codes and reason codes for CAF Using dummy entry point DSNHLI for CAF . . Variable declarations for CAF . . . . . . .
Chapter 31. Programming for the Resource Recovery Services attachment RRSAF capabilities and requirements . . . . . . . . . . . . . . RRSAF capabilities . . . . . . . . . . . . . . . . . . . Task capabilities . . . . . . . . . . . . . . . . . . . Programming language . . . . . . . . . . . . . . . . . Tracing facility . . . . . . . . . . . . . . . . . . . . Program preparation . . . . . . . . . . . . . . . . . . RRSAF requirements . . . . . . . . . . . . . . . . . . . Program size . . . . . . . . . . . . . . . . . . . . Use of LOAD . . . . . . . . . . . . . . . . . . . . Commit and rollback operations . . . . . . . . . . . . . . Run environment . . . . . . . . . . . . . . . . . . . How to use RRSAF . . . . . . . . . . . . . . . . . . . . Summary of connection functions . . . . . . . . . . . . . . Implicit connections . . . . . . . . . . . . . . . . . . . Accessing the RRSAF language interface . . . . . . . . . . . . Explicit Load of DSNRLI . . . . . . . . . . . . . . . . Link-editing DSNRLI . . . . . . . . . . . . . . . . . . General properties of RRSAF connections . . . . . . . . . . . . Task termination . . . . . . . . . . . . . . . . . . . DB2 abend . . . . . . . . . . . . . . . . . . . . . Summary of RRSAF behavior . . . . . . . . . . . . . . . . RRSAF function descriptions . . . . . . . . . . . . . . . . . Register conventions . . . . . . . . . . . . . . . . . . . Parameter conventions for function calls . . . . . . . . . . . . IDENTIFY: Syntax and usage . . . . . . . . . . . . . . . . Usage . . . . . . . . . . . . . . . . . . . . . . . SWITCH TO: Syntax and usage . . . . . . . . . . . . . . . Usage . . . . . . . . . . . . . . . . . . . . . . . SIGNON: Syntax and usage . . . . . . . . . . . . . . . . Usage . . . . . . . . . . . . . . . . . . . . . . . AUTH SIGNON: Syntax and usage . . . . . . . . . . . . . . Usage . . . . . . . . . . . . . . . . . . . . . . . CONTEXT SIGNON: Syntax and usage. . . . . . . . . . . . . Usage . . . . . . . . . . . . . . . . . . . . . . . SET_ID: Syntax and usage . . . . . . . . . . . . . . . . . Usage . . . . . . . . . . . . . . . . . . . . . . . SET_CLIENT_ID: Syntax and usage . . . . . . . . . . . . . . Usage . . . . . . . . . . . . . . . . . . . . . . . CREATE THREAD: Syntax and usage . . . . . . . . . . . . . Usage . . . . . . . . . . . . . . . . . . . . . . . TERMINATE THREAD: Syntax and usage . . . . . . . . . . . . Usage . . . . . . . . . . . . . . . . . . . . . . . TERMINATE IDENTIFY: Syntax and usage . . . . . . . . . . . Usage . . . . . . . . . . . . . . . . . . . . . . . TRANSLATE: Syntax and usage . . . . . . . . . . . . . . . Usage . . . . . . . . . . . . . . . . . . . . . . . RRSAF connection examples . . . . . . . . . . . . . . . . . Example of a single task . . . . . . . . . . . . . . . . . . Example of multiple tasks . . . . . . . . . . . . . . . . . Example of calling SIGNON to reuse a DB2 thread . . . . . . . . . Example of switching DB2 threads between tasks . . . . . . . . .
RRSAF return codes and reason codes . . . . . . Program examples for RRSAF . . . . . . . . . Sample JCL for using RRSAF . . . . . . . . Loading and deleting the RRSAF language interface Using dummy entry point DSNHLI for RRSAF . . Connecting to DB2 for RRSAF . . . . . . . .
Chapter 32. CICS-specific programming techniques . . Controlling the CICS attachment facility from an application Improving thread reuse . . . . . . . . . . . . . Detecting whether the CICS attachment facility is operational
Chapter 33. WebSphere MQ with DB2 . . . . . . . . . . . . . . WebSphere MQ messages . . . . . . . . . . . . . . . . . . . WebSphere MQ message handling . . . . . . . . . . . . . . . WebSphere MQ message handling with the MQI . . . . . . . . . WebSphere MQ message handling with the AMI . . . . . . . . . WebSphere MQ functions and stored procedures . . . . . . . . . . . Commit environment for AMI-based DB2 MQ functions and stored procedures Single-phase commit in WebSphere MQ . . . . . . . . . . . . Two-phase commit in WebSphere MQ . . . . . . . . . . . . . DB2 MQ tables . . . . . . . . . . . . . . . . . . . . . . Converting applications to use the MQI functions . . . . . . . . . . How to use WebSphere MQ functions . . . . . . . . . . . . . . Basic messaging . . . . . . . . . . . . . . . . . . . . Sending messages with WebSphere MQ . . . . . . . . . . . . Retrieving messages . . . . . . . . . . . . . . . . . . . Application-to-application connectivity . . . . . . . . . . . . Asynchronous messaging in DB2 UDB for z/OS and OS/390 . . . . . . . MQListener in DB2 for OS/390 and z/OS . . . . . . . . . . . . . Configuring and running MQListener in DB2 UDB for OS/390 and z/OS . . Configuring MQListener to run in the DB2 environment . . . . . . . Configuring Websphere MQ for MQListener . . . . . . . . . . . Configuring MQListener tasks . . . . . . . . . . . . . . . . . MQListener error processing . . . . . . . . . . . . . . . . . Creating a sample stored procedure to use with MQListener . . . . . . MQListener examples . . . . . . . . . . . . . . . . . . .
Chapter 34. Using DB2 as a web services consumer and provider . . . . . . . . . . . . . . . 977 DB2 as a web services consumer . . . . . . . . . . . . . . . . . . . . . . . . . . . . 977 The SOAPHTTPV and SOAPHTTPC user-defined functions . . . . . . . . . . . . . . . . . . 977 The SOAPHTTPNV and SOAPHTTPNC user-defined functions . . . . . . . . . . . . . . . . 978 # SQLSTATEs for DB2 as a web services consumer . . . . . . . . . . . . . . . . . . . . . 979 DB2 as a web services provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980 Chapter 35. Programming techniques: Questions and answers . . . . . . . . . . . . . . . . 983 Providing a unique key for a table . . . . . . . . . . . . . . . . . . . . . . . . . . . 983 Scrolling through previously retrieved data . . . . . . . . . . . . . . . . . . . . . . . . 983 Using a scrollable cursor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 983 Using a ROWID or identity column . . . . . . . . . . . . . . . . . . . . . . . . . . 984 Scrolling through a table in any direction . . . . . . . . . . . . . . . . . . . . . . . . . 985 Updating data as it is retrieved from the database . . . . . . . . . . . . . . . . . . . . . . 986 Updating previously retrieved data . . . . . . . . . . . . . . . . . . . . . . . . . . . 986 Updating thousands of rows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986 Retrieving thousands of rows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987 Using SELECT * . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987 Optimizing retrieval for a small set of rows . . . . . . . . . . . . . . . . . . . . . . . . 987 Adding data to the end of a table . . . . . . . . . . . . . . . . . . . . . . . . . . . 988 Translating requests from end users into SQL statements . . . . . . . . . . . . . . . . . . . . 988 Changing the table definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 988 Storing data that does not have a tabular format . . . . . . . . . . . . . . . . . . . . . . 989
   (Other declarations)
   READ CARDIN RECORD INTO IOAREA
       AT END MOVE 'N' TO INPUT-SWITCH.
   .
   .
   (Other COBOL statements)
   EXEC SQL
     UPDATE DSN8810.EMP
       SET SALARY = :NEW-SALARY
       WHERE EMPNO = :EMPID
   END-EXEC.
The statement (UPDATE) does not change, nor does its basic structure, but the input can change the results of the UPDATE statement.
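When the text of the statement must be constructed at run time instead (for example, from end-user input), the program can prepare and execute it dynamically. The following is a minimal sketch only; it reuses the NEW-SALARY and EMPID host variables from the example above, while the statement name DSTMT and the host variable STMT-BUF are names introduced here for illustration:

      * In WORKING-STORAGE: the statement text uses parameter
      * markers (?) where the static version used host variables.
       01  STMT-BUF.
           49  STMT-BUF-LEN   PIC S9(4) COMP VALUE +80.
           49  STMT-BUF-TEXT  PIC X(80) VALUE
               'UPDATE DSN8810.EMP SET SALARY = ? WHERE EMPNO = ?'.
      * In the PROCEDURE DIVISION:
           EXEC SQL PREPARE DSTMT FROM :STMT-BUF END-EXEC.
           EXEC SQL EXECUTE DSTMT USING :NEW-SALARY, :EMPID END-EXEC.

For a statement that contains no parameter markers, a single EXECUTE IMMEDIATE statement has the same effect as the PREPARE and EXECUTE pair.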
3. Obtains, for SELECT statements, enough main storage to contain retrieved data.
4. Executes the statement or fetches the rows of data.
5. Processes the information returned.
6. Handles SQL return codes.
use the dynamic statement cache to decrease the number of times that those dynamic statements must be prepared. See Performance of static and dynamic SQL on page 595 for more information.
Dynamic SQL statements with input host variables: When you bind applications that contain dynamic SQL statements with input host variables, use either the REOPT(ALWAYS) or the REOPT(ONCE) option.
Use REOPT(ALWAYS) when you are not using the dynamic statement cache. DB2 determines the access path for statements at each EXECUTE or OPEN of the statement. This ensures the best access path for a statement, but using REOPT(ALWAYS) can increase the cost of frequently used dynamic SQL statements.
Use REOPT(ONCE) when you are using the dynamic statement cache. DB2 determines the access path for statements only at the first EXECUTE or OPEN of the statement. It saves that access path in the dynamic statement cache and uses it until the statement is invalidated or removed from the cache. This reuse of the access path reduces the cost of frequently used dynamic SQL statements that contain input host variables.
You should code your PREPARE statements to minimize overhead. With both REOPT(ALWAYS) and REOPT(ONCE), DB2 prepares an SQL statement at the same time as it processes OPEN or EXECUTE for the statement. That is, DB2 processes the statement as if you specify DEFER(PREPARE). However, in the following cases, DB2 prepares the statement twice:
v If you execute the DESCRIBE statement before the PREPARE statement in your program
v If you use the PREPARE statement with the INTO parameter
For the first prepare, DB2 determines the access path without using input variable values. For the second prepare, DB2 uses the input variable values to determine the access path. This extra prepare can decrease performance. If you specify REOPT(ALWAYS), DB2 prepares the statement twice each time it is run. If you specify REOPT(ONCE), DB2 prepares the statement twice only when the statement has never been saved in the cache. If the statement has been prepared and saved in the cache, DB2 uses the saved version of the statement to complete the DESCRIBE statement.
For a statement that uses a cursor, you can avoid the double prepare by placing the DESCRIBE statement after the OPEN statement in your program (see the sketch below).
If you use predictive governing, and a dynamic SQL statement that is bound with either REOPT(ALWAYS) or REOPT(ONCE) exceeds a predictive governing warning threshold, your application does not receive a warning SQLCODE. However, it will receive an error SQLCODE from the OPEN or EXECUTE statement.
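To illustrate the point about cursors, the following minimal sketch prepares a SELECT statement and issues the DESCRIBE only after the OPEN, so that DB2 prepares the statement once. The names C1, STMT, DSTRING, and SQLDA are used for illustration only; the declarations and the SQLDA storage management, which follow the host-language rules described elsewhere in this book, are omitted:

           EXEC SQL DECLARE C1 CURSOR FOR STMT END-EXEC.
           EXEC SQL PREPARE STMT FROM :DSTRING END-EXEC.
           EXEC SQL OPEN C1 END-EXEC.
      * DESCRIBE is issued after OPEN, so the prepare that was done
      * at OPEN is reused and STMT is not prepared a second time.
           EXEC SQL DESCRIBE STMT INTO :SQLDA END-EXEC.
           EXEC SQL FETCH C1 USING DESCRIPTOR :SQLDA END-EXEC.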
again. For a SELECT statement, the ability to declare a cursor WITH HOLD provides some relief but requires that the cursor be open at the commit point. WITH HOLD also causes some locks to be held for any objects that the prepared statement is dependent on. Also, WITH HOLD offers no relief for SQL statements that are not SELECT statements.

DB2 can save prepared dynamic statements in a cache. The cache is a dynamic statement cache pool that all application processes can use to save and retrieve prepared dynamic statements. After an SQL statement has been prepared and is automatically saved in the cache, subsequent prepare requests for that same SQL statement can avoid the costly preparation process by using the statement that is in the cache. Statements that are saved in the cache can be shared among different threads, plans, or packages.

Example: Assume that your application program contains a dynamic SQL statement, STMT1, which is prepared and executed multiple times. If you are using the dynamic statement cache when STMT1 is prepared for the first time, it is placed in the cache. When your application program encounters the identical PREPARE statement for STMT1, DB2 uses the already prepared STMT1 that is saved in the dynamic statement cache. The following example shows the identical STMT1 that might appear in your application program:
PREPARE STMT1 FROM ...      Statement is prepared and the prepared
EXECUTE STMT1               statement is put in the cache.
COMMIT
  .
  .
  .
PREPARE STMT1 FROM ...      Identical statement. DB2 uses the prepared
EXECUTE STMT1               statement from the cache.
COMMIT
Eligible statements: The following SQL statements can be saved in the cache:
v SELECT
v UPDATE
v INSERT
v DELETE

Distributed and local SQL statements are eligible to be saved. Prepared, dynamic statements that use DB2 private protocol access are also eligible to be saved.

Restrictions: Even though static statements that use DB2 private protocol access are dynamic at the remote site, those statements cannot be saved in the cache.

Statements in plans or packages that are bound with REOPT(ALWAYS) cannot be saved in the cache. Statements in plans and packages that are bound with REOPT(ONCE) can be saved in the cache. See How bind options REOPT(ALWAYS) and REOPT(ONCE) affect dynamic SQL on page 625 for more information about REOPT(ALWAYS) and REOPT(ONCE).

Prepared statements cannot be shared among data sharing members. Because each member has its own EDM pool, a cached statement on one member is not available to an application that runs on another member.
In this case, DB2 can use P1 instead of preparing S2. However, assume that S1 is specified as follows:
UPDATE EMP SET SALARY=SALARY+50
In this case, DB2 cannot use P1 for S2. DB2 prepares S2 and saves the prepared version of S2 in the cache.
v The authorization ID that was used to prepare S1 must be used to prepare S2:
  - When a plan or package has run behavior, the authorization ID is the current SQLID value. For secondary authorization IDs:
    - The application process that searches the cache must have the same secondary authorization ID list as the process that inserted the entry into the cache or must have a superset of that list.
    - If the process that originally prepared the statement and inserted it into the cache used one of the privileges held by the primary authorization ID to accomplish the prepare, that ID must either be part of the secondary authorization ID list of the process searching the cache, or it must be the primary authorization ID of that process.
  - When a plan or package has bind behavior, the authorization ID is the plan owner's ID. For a DDF server thread, the authorization ID is the package owner's ID.
  - When a package has define behavior, the authorization ID is the user-defined function or stored procedure owner.
  - When a package has invoke behavior, the authorization ID is the authorization ID under which the statement that invoked the user-defined function or stored procedure executed.
  For an explanation of bind, run, define, and invoke behavior, see Using DYNAMICRULES to specify behavior of dynamic SQL statements on page 502.
v When the plan or package that contains S2 is bound, the values of these bind options must be the same as when the plan or package that contains S1 was bound:
  CURRENTDATA
  DYNAMICRULES
  ISOLATION
  SQLRULES
  QUALIFIER
v When S2 is prepared, the values of the following special registers must be the same as when S1 was prepared:
  CURRENT DEGREE
  CURRENT RULES
  CURRENT PRECISION
  CURRENT REFRESH AGE
  CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION

Exception: If you set the CACHEDYN_FREELOCAL subsystem parameter to 1 and a storage shortage occurs, DB2 frees the cached dynamic statements. In this case, DB2 cannot use P1 instead of preparing statement S2, because P1 no longer exists in the statement cache.
Figure 179. Writing dynamic SQL to use the bind option KEEPDYNAMIC(YES)
To understand how the KEEPDYNAMIC bind option works, you need to differentiate between the executable form of a dynamic SQL statement, which is the prepared statement, and the character string form of the statement, which is the statement string.

Relationship between KEEPDYNAMIC(YES) and statement caching: When the dynamic statement cache is not active, and you run an application bound with KEEPDYNAMIC(YES), DB2 saves only the statement string for a prepared statement after a commit operation. On a subsequent OPEN, EXECUTE, or DESCRIBE, DB2 must prepare the statement again before performing the requested operation. Figure 180 illustrates this concept.
PREPARE STMT1 FROM ...      Statement is prepared and put in memory.
EXECUTE STMT1
COMMIT
  .
  .
  .
EXECUTE STMT1               Application does not issue PREPARE.
COMMIT                      DB2 prepares the statement again.
  .
  .
  .
EXECUTE STMT1               Again, no PREPARE needed.
COMMIT

Figure 180. Using KEEPDYNAMIC(YES) when the dynamic statement cache is not active
When the dynamic statement cache is active, and you run an application bound with KEEPDYNAMIC(YES), DB2 retains a copy of both the prepared statement and the statement string. The prepared statement is cached locally for the application process. In general, the statement is globally cached in the EDM pool, to benefit other application processes. If the application issues an OPEN, EXECUTE, or DESCRIBE after a commit operation, the application process uses its local copy of the prepared statement to avoid a prepare and a search of the cache. Figure 181 illustrates this process.
PREPARE STMT1 FROM ...      Statement is prepared and put in memory.
EXECUTE STMT1
COMMIT
  .
  .
  .
EXECUTE STMT1               Application does not issue PREPARE.
COMMIT                      DB2 uses the prepared statement in memory.
  .
  .
  .
EXECUTE STMT1               Again, no PREPARE needed.
COMMIT                      DB2 uses the prepared statement in memory.
  .
  .
  .
PREPARE STMT1 FROM ...      Statement is prepared and put in memory.

Figure 181. Using KEEPDYNAMIC(YES) when the dynamic statement cache is active
The local instance of the prepared SQL statement is kept in ssnmDBM1 storage until one of the following occurs:
v The application process ends.
v A rollback operation occurs.
v The application issues an explicit PREPARE statement with the same statement name.
  If the application does issue a PREPARE for the same SQL statement name that has a kept dynamic statement associated with it, the kept statement is discarded and DB2 prepares the new statement.
v The statement is removed from memory because the statement has not been used recently, and the number of kept dynamic SQL statements reaches the subsystem default as set during installation.

Handling implicit prepare errors: If a statement is needed during the lifetime of an application process, and the statement has been removed from the local cache, DB2 might be able to retrieve it from the global cache. If the statement is not in the global cache, DB2 must implicitly prepare the statement again. The application does not need to issue a PREPARE statement. However, if the application issues an OPEN, EXECUTE, or DESCRIBE for the statement, the application must be able to handle the possibility that DB2 is doing the prepare implicitly. Any error that occurs during this prepare is returned on the OPEN, EXECUTE, or DESCRIBE.

How KEEPDYNAMIC affects applications that use distributed data: If a requester does not issue a PREPARE after a COMMIT, the package at the DB2 UDB for z/OS server must be bound with KEEPDYNAMIC(YES). If both requester and server are DB2 UDB for z/OS subsystems, the DB2 requester assumes that the KEEPDYNAMIC value for the package at the server is the same as the value for the plan at the requester.

The KEEPDYNAMIC option has performance implications for DRDA clients that specify WITH HOLD on their cursors:
v If KEEPDYNAMIC(NO) is specified, a separate network message is required when the DRDA client issues the SQL CLOSE for the cursor.
v If KEEPDYNAMIC(YES) is specified, the DB2 UDB for z/OS server automatically closes the cursor when SQLCODE +100 is detected, which means that the client does not have to send a separate message to close the held cursor. This reduces network traffic for DRDA applications that use held cursors. It also reduces the duration of locks that are associated with the held cursor.

Using RELEASE(DEALLOCATE) with KEEPDYNAMIC(YES): See page 409 for information about interactions between bind options RELEASE(DEALLOCATE) and KEEPDYNAMIC(YES).

Considerations for data sharing: If one member of a data sharing group has enabled the cache but another has not, and an application is bound with KEEPDYNAMIC(YES), DB2 must implicitly prepare the statement again if the statement is assigned to a member without the cache. This can mean a slight reduction in performance.
statement receives an error SQL code. If the statement exceeds a predictive governing limit, it receives a warning or error SQL code. Writing an application to handle predictive governing explains more about predictive governing SQL codes. Your system administrator can establish the limits for individual plans or packages, for individual users, or for all users who do not have personal limits. Follow the procedures defined by your location for adding, dropping, or modifying entries in the resource limit specification table. For more information about the resource limit specification tables, see Part 5 (Volume 2) of DB2 Administration Guide.
Your program must take the following steps:
1. Include an SQLCA. The requirements for an SQL communications area (SQLCA) are the same as for static SQL statements. For REXX, DB2 includes the SQLCA automatically.
2. Load the input SQL statement into a data area. The procedure for building or reading the input SQL statement is not discussed here; the statement depends on your environment and sources of information. You can read in complete SQL statements, or you can get information to build the statement from data sets, a user at a terminal, previously set program variables, or tables in the database. If you attempt to execute an SQL statement dynamically that DB2 does not allow, you get an SQL error.
3. Execute the statement. You can use either of these methods:
   v Dynamic execution using EXECUTE IMMEDIATE
   v Dynamic execution using PREPARE and EXECUTE on page 605.
4. Handle any errors that might result. The requirements are the same as those for static SQL statements. The return code from the most recently executed SQL statement appears in the host variables SQLCODE and SQLSTATE or corresponding fields of the SQLCA. See Checking the execution of SQL statements on page 91 for information about the SQLCA and the fields it contains.
After reading a statement, the program is to run it immediately. Recall that you must prepare (precompile and bind) static SQL statements before you can use them. You cannot prepare dynamic SQL statements in advance. The SQL statement EXECUTE IMMEDIATE causes an SQL statement to prepare and execute, dynamically, at run time.
Example: Using a varying-length character host variable: This excerpt is from a C program that reads a DELETE statement into the host variable dstring and executes the statement:
EXEC SQL BEGIN DECLARE SECTION;
  ...
  struct VARCHAR {
    short len;
    char s[40];
  } dstring;
EXEC SQL END DECLARE SECTION;
...
/* Read a DELETE statement into the host variable dstring. */
gets(dstring);
EXEC SQL EXECUTE IMMEDIATE :dstring;
...
EXECUTE IMMEDIATE causes the DELETE statement to be prepared and executed immediately.

Declaring a CLOB or DBCLOB host variable: You declare CLOB and DBCLOB host variables according to the rules described in Declaring LOB host variables and LOB locators on page 300. The precompiler generates a structure that contains two elements, a 4-byte length field and a data field of the specified length. The names of these fields vary depending on the host language:
v In PL/I, assembler, and Fortran, the names are variable_LENGTH and variable_DATA.
v In COBOL, the names are variableLENGTH and variableDATA.
v In C, the names are variable.LENGTH and variable.DATA.

Example: Using a CLOB host variable: This excerpt is from a C program that copies an UPDATE statement into the host variable string1 and executes the statement:
EXEC SQL BEGIN DECLARE SECTION;
  ...
  SQL TYPE IS CLOB(4k) string1;
EXEC SQL END DECLARE SECTION;
...
/* Copy a statement into the host variable string1. */
strcpy(string1.data, "UPDATE DSN8610.EMP SET SALARY = SALARY * 1.1");
string1.length = 44;
EXEC SQL EXECUTE IMMEDIATE :string1;
...
EXECUTE IMMEDIATE causes the UPDATE statement to be prepared and executed immediately.
The loop repeats until it reads an EMP value of 0. If you know in advance that you will use only the DELETE statement and only the table DSN8810.EMP, you can use the more efficient static SQL. Suppose further that several different tables have rows that are identified by employee numbers, and that users enter a table name as well as a list of employee numbers to delete. Although variables can represent the employee numbers, they cannot represent the table name, so you must construct and execute the entire statement dynamically. Your program must now do these things differently:
v Use parameter markers instead of host variables
v Use the PREPARE statement
v Use EXECUTE instead of EXECUTE IMMEDIATE
You associate host variable :EMP with the parameter marker when you execute the prepared statement. Suppose that S1 is the prepared statement. Then the EXECUTE statement looks like this:
EXECUTE S1 USING :EMP;
Example using the PREPARE statement: Assume that the character host variable :DSTRING has the value 'DELETE FROM DSN8810.EMP WHERE EMPNO = ?'. To prepare an SQL statement from that string and assign it the name S1, write:
EXEC SQL PREPARE S1 FROM :DSTRING;
The prepared statement still contains a parameter marker, for which you must supply a value when the statement executes. After the statement is prepared, the table name is fixed, but the parameter marker allows you to execute the same statement many times with different values of the employee number.
You can now write an equivalent example for a dynamic SQL statement:
< Read a statement containing parameter markers into DSTRING.>
EXEC SQL PREPARE S1 FROM :DSTRING;
< Read a value for EMP from the list. >
DO UNTIL (EMPNO = 0);
  EXEC SQL EXECUTE S1 USING :EMP;
  < Read a value for EMP from the list. >
END;
The PREPARE statement prepares the SQL statement and calls it S1. The EXECUTE statement executes S1 repeatedly, using different values for EMP.
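The same pattern can be written in embedded SQL for C. The following lines are a sketch only: they assume that DSTRING and EMP are declared as character host variables (EMP as a 6-byte character value because EMPNO in the sample table is CHAR(6)), and that the surrounding loop logic shown in the pseudocode above controls how often the EXECUTE runs.

EXEC SQL BEGIN DECLARE SECTION;
  char DSTRING[61];                /* Statement string containing a ? marker  */
  char EMP[7];                     /* Employee number read at run time        */
EXEC SQL END DECLARE SECTION;
...
/* DSTRING might contain: DELETE FROM DSN8810.EMP WHERE EMPNO = ?            */
EXEC SQL PREPARE S1 FROM :DSTRING;
...
/* Execute the prepared statement once for each employee number in the list  */
EXEC SQL EXECUTE S1 USING :EMP;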
You might be entering the rows of data into different tables or entering different numbers of rows, so you want to construct the INSERT statement dynamically. This section describes the following methods to execute a multiple-row INSERT statement dynamically:
v By using host variable arrays that contain the data to be inserted
v By using a descriptor to describe the host variable arrays that contain the data
You must specify the FOR n ROWS clause on the EXECUTE statement. Preparing and executing the statement: The code to prepare and execute the INSERT statement looks like this:
/* Copy the INSERT string into the host variable sqlstmt */
strcpy(sqlstmt, "INSERT INTO DSN8810.ACT VALUES (CAST(? AS SMALLINT),");
strcat(sqlstmt, " CAST(? AS CHAR(6)), CAST(? AS VARCHAR(20)))");
/* Copy the INSERT attributes into the host variable attrvar */
strcpy(attrvar, "FOR MULTIPLE ROWS");
/* Prepare and execute my_insert using the host variable arrays */
EXEC SQL PREPARE my_insert ATTRIBUTES :attrvar FROM :sqlstmt;
EXEC SQL EXECUTE my_insert USING :hva1, :hva2, :hva3 FOR :num_rows ROWS;
Each host variable in the USING clause of the EXECUTE statement represents an array of values for the corresponding column of the target of the INSERT statement. You can vary the number of rows, specified by num_rows in the example, without needing to prepare the INSERT statement again.
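The host variable arrays themselves are declared in the usual way for C. The following declarations are a sketch only: the array dimension of 10 is an assumption chosen for illustration, and the types are chosen to match the SMALLINT, CHAR(6), and VARCHAR(20) casts in the INSERT statement above.

EXEC SQL BEGIN DECLARE SECTION;
  short hva1[10];                      /* Values for the SMALLINT column      */
  char  hva2[10][6];                   /* Values for the CHAR(6) column       */
  struct {
    short len;
    char  data[20];
  } hva3[10];                          /* Values for the VARCHAR(20) column   */
  long  num_rows;                      /* Number of rows to insert            */
  char  sqlstmt[100];                  /* INSERT statement string             */
  char  attrvar[30];                   /* PREPARE attributes string           */
EXEC SQL END DECLARE SECTION;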
You must specify the FOR n ROWS clause on the EXECUTE statement. Setting the fields in the SQLDA: Assume that your program includes the standard SQLDA structure declaration and declarations for the program variables that point to the SQLDA structure. Before the INSERT statement is prepared and executed, you must set the fields in the SQLDA structure for your INSERT statement. For C application programs, the code to set the fields looks like this:
strcpy(sqldaptr->sqldaid,"SQLDA"); sqldaptr->sqldabc = 192; /* number of bytes of storage allocated for the SQLDA */ sqldaptr->sqln = 4; /* number of SQLVAR occurrences */ sqldaptr->sqld = 4; varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0])); /* Point to first SQLVAR */ varptr->sqltype = 500; /* data type SMALLINT */ varptr->sqllen = 2; varptr->sqldata = (char *) hva1; varptr->sqlname.length = 8; memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14",varptr->sqlname.length); varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]) + 1); /* Point to next SQLVAR */ varptr->sqltype = 452; /* data type CHAR(6) */ varptr->sqllen = 6; varptr->sqldata = (char *) hva2; varptr->sqlname.length = 8; memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14",varptr->sqlname.length); varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]) + 2); /* Point to next SQLVAR */ varptr->sqltype = 448; /* data type VARCHAR(20) */ varptr->sqllen = 20; varptr->sqldata = (char *) hva3; varptr->sqlname.length = 8; memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14",varptr->sqlname.length);
The SQLDA structure has these fields:
v SQLDABC indicates the number of bytes of storage that are allocated for the SQLDA. The storage includes a 16-byte header and 44 bytes for each SQLVAR field. The value is SQLN x 44 + 16, or 192 for this example.
v SQLN is the number of SQLVAR occurrences, plus one for use by DB2 for the host variable that contains the number n in the FOR n ROWS clause.
v SQLD is the number of variables in the SQLDA that are used by DB2 when processing the INSERT statement.
v An SQLVAR occurrence specifies the attributes of an element of a host variable array that corresponds to a value provided for a target column of the INSERT. Within each SQLVAR:
  - SQLTYPE indicates the data type of the elements of the host variable array.
  - SQLLEN indicates the length of a single element of the host variable array.
  - SQLDATA points to the corresponding host variable array. Assume that your program allocates the dynamic variable arrays hva1, hva2, and hva3.
  - SQLNAME has two parts: the LENGTH and the DATA. The LENGTH is 8. The first two bytes of the DATA field are X'0000'. Bytes 5 and 6 of the DATA field are a flag indicating whether the variable is an array or a FOR n ROWS value. Bytes 7 and 8 are a two-byte binary integer representation of the dimension of the array.
For more information about the SQLDA, see Dynamic SQL for varying-list SELECT statements on page 613. For a complete layout of the SQLDA and the descriptions given by the INCLUDE statement, see Appendix E of DB2 SQL Reference. Preparing and executing the statement: The code to prepare and execute the INSERT statement looks like this:
/* Copy the INSERT string into the host variable sqlstmt */
strcpy(sqlstmt, "INSERT INTO DSN8810.ACT VALUES (?, ?, ?)");
/* Copy the INSERT attributes into the host variable attrvar */
strcpy(attrvar, "FOR MULTIPLE ROWS");
/* Prepare and execute my_insert using the descriptor */
EXEC SQL PREPARE my_insert ATTRIBUTES :attrvar FROM :sqlstmt;
EXEC SQL EXECUTE my_insert USING DESCRIPTOR :*sqldaptr FOR :num_rows ROWS;
The host variable in the USING clause of the EXECUTE statement names the SQLDA that describes the parameter markers in the INSERT statement.
The code to set up an SQLDA, obtain parameter information using DESCRIBE INPUT, and execute the statement looks like this:
SQLDAPTR=ADDR(INSQLDA);        /* Get pointer to SQLDA          */
SQLDAID='SQLDA';               /* Fill in SQLDA eye-catcher     */
SQLDABC=LENGTH(INSQLDA);       /* Fill in SQLDA length          */
SQLN=1;                        /* Fill in number of SQLVARs     */
SQLD=0;                        /* Initialize # of SQLVARs used  */
DO IX=1 TO SQLN;               /* Initialize the SQLVAR         */
  SQLTYPE(IX)=0;
  SQLLEN(IX)=0;
  SQLNAME(IX)='';
END;
SQLSTMT='DELETE FROM DSN8810.EMP WHERE EMPNO = ?';
EXEC SQL PREPARE SQLOBJ FROM SQLSTMT;
EXEC SQL DESCRIBE INPUT SQLOBJ INTO :INSQLDA;
SQLDATA(1)=ADDR(HVEMP);        /* Get input data address        */
SQLIND(1)=ADDR(HVEMPIND);      /* Get indicator address         */
EXEC SQL EXECUTE SQLOBJ USING DESCRIPTOR :INSQLDA;
number of values as the last, and the values have the same data types each time. Therefore, you can specify host variables as you do for static SQL. An advantage of the fixed-list SELECT is that you can write it in any of the programming languages that DB2 supports. Varying-list dynamic SELECT statements require assembler, C, PL/I, and COBOL. For a sample program that is written in C and that illustrates dynamic SQL with fixed-list SELECT statements, see Figure 279 on page 1044.

To execute a fixed-list SELECT statement dynamically, your program must:
1. Include an SQLCA.
2. Load the input SQL statement into a data area.
   The preceding two steps are exactly the same as described under Dynamic SQL for non-SELECT statements on page 603.
3. Declare a cursor for the statement name as described in Declaring a cursor for the statement name.
4. Prepare the statement, as described in Preparing the statement.
5. Open the cursor, as described in Opening the cursor on page 612.
6. Fetch rows from the result table, as described in Fetching rows from the result table on page 612.
7. Close the cursor, as described in Closing the cursor on page 612.
8. Handle any resulting errors. This step is the same as for static SQL, except for the number and types of errors that can result.

Example: Suppose that your program retrieves last names and phone numbers by dynamically executing SELECT statements of this form:
SELECT LASTNAME, PHONENO FROM DSN8810.EMP WHERE ... ;
The program reads the statements from a terminal, and the user determines the WHERE clause. As with non-SELECT statements, your program puts the statements into a varying-length character variable; call it DSTRING. Eventually you prepare a statement from DSTRING, but first you must declare a cursor for the statement and give it a name.
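For example, the cursor declaration might look like the following statement. This is a sketch only; the cursor name C1 and statement name STMT are the names used in the rest of this discussion.

EXEC SQL DECLARE C1 CURSOR FOR STMT;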
ATTRVAR contains attributes that you want to add to the SELECT statement, such as FETCH FIRST 10 ROWS ONLY or OPTIMIZE FOR 1 ROW. In general, if the SELECT statement has attributes that conflict with the attributes in the PREPARE statement, the attributes on the SELECT statement take precedence over the attributes on the PREPARE statement. However, in this example, the SELECT statement in DSTRING has no attributes specified, so DB2 uses the attributes in ATTRVAR for the SELECT statement.

As with non-SELECT statements, the fixed-list SELECT could contain parameter markers. However, this example does not need them. To execute STMT, your program must open the cursor, fetch rows from the result table, and close the cursor. The following sections describe how to do those steps.
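A prepare of STMT with these attributes might look like this. The statement is a sketch; it assumes that ATTRVAR and DSTRING have already been filled in as described above.

EXEC SQL PREPARE STMT ATTRIBUTES :ATTRVAR FROM :DSTRING;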
If STMT contains parameter markers, you must use the USING clause of OPEN to provide values for all of the parameter markers in STMT. Example: If four parameter markers are in STMT, you need the following statement:
EXEC SQL OPEN C1 USING :PARM1, :PARM2, :PARM3, :PARM4;
The key feature of this statement is the use of a list of host variables to receive the values returned by FETCH. The list has a known number of items (in this case, two items, :NAME and :PHONE) of known data types (both are character strings, of lengths 15 and 4, respectively). You can use this list in the FETCH statement only because you planned the program to use only fixed-list SELECTs. Every row that cursor C1 points to must contain exactly two character values of appropriate length. If the program is to handle anything else, it must use the techniques described under Dynamic SQL for varying-list SELECT statements on page 613.
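Putting the open, fetch, and close steps together for this fixed-list example might look like the following C sketch. It is illustrative only: it assumes that NAME and PHONE are declared as character host variables of lengths 15 and 4 and that an SQLCA is included so that SQLCODE can be tested.

EXEC SQL OPEN C1;
for (;;) {
  EXEC SQL FETCH C1 INTO :NAME, :PHONE;
  if (SQLCODE != 0) break;          /* SQLCODE 100 means no more rows */
  /* ... process NAME and PHONE ... */
}
EXEC SQL CLOSE C1;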
Additional complications exist for statements with parameter markers.
4. Handle any errors that might result.
Now, the program must find out whether the statement is a SELECT. If it is, the program must also find out how many values are in each row, and what their data types are. The information comes from an SQL descriptor area (SQLDA).
You cannot include an SQLDA in a Fortran or REXX program. For a complete layout of the SQLDA and the descriptions given by INCLUDE statements, see Appendix E of DB2 SQL Reference.
An SQLDA with n occurrences of SQLVAR is referred to as a single SQLDA, an SQLDA with 2*n occurrences of SQLVAR as a double SQLDA, and an SQLDA with 3*n occurrences of SQLVAR as a triple SQLDA. A program that admits SQL statements of every kind for dynamic execution has two choices:
v Provide the largest SQLDA that it could ever need. The maximum number of columns in a result table is 750, so an SQLDA for 750 columns occupies 33 016
bytes for a single SQLDA, 66 016 bytes for a double SQLDA, or 99 016 bytes for a triple SQLDA. Most SELECT statements do not retrieve 750 columns, so the program does not usually use most of that space.
v Provide a smaller SQLDA, with fewer occurrences of SQLVAR. From this the program can find out whether the statement was a SELECT and, if it was, how many columns are in its result table. If more columns are in the result than the SQLDA can hold, DB2 returns no descriptions. When this happens, the program must acquire storage for a second SQLDA that is long enough to hold the column descriptions, and ask DB2 for the descriptions again. Although this technique is more complicated to program than the first, it is more general.

How many columns should you allow? You must choose a number that is large enough for most of your SELECT statements, but not too wasteful of space; 40 is a good compromise. To illustrate what you must do for statements that return more columns than allowed, the example in this discussion uses an SQLDA that is allocated for at least 100 columns.
Equivalently, you can use the INTO clause in the PREPARE statement:
EXEC SQL PREPARE STMT INTO :MINSQLDA FROM :DSTRING;
Do not use the USING clause in either of these examples. At the moment, only the minimum SQLDA is in use. Figure 182 shows the contents of the minimum SQLDA in use.
Figure 182. The minimum SQLDA in use (header fields: SQLDAID, SQLDABC, 100, SQLD)
  Indicator variable address
v Otherwise, if SQLN is less than the minimum number of SQLVARs specified in Table 75 on page 614, then DB2 returns no information in the SQLVARs.

Regardless of whether your SQLDA is big enough, whenever you execute DESCRIBE, DB2 returns the following values, which you can use to build an SQLDA of the correct size:
v SQLD is 0 if the SQL statement is not a SELECT. Otherwise, SQLD is the number of columns in the result table. The number of SQLVAR occurrences you need for the SELECT depends on the value in the seventh byte of SQLDAID.
v The seventh byte of SQLDAID is 2 if each column in the result table requires two SQLVAR entries. The seventh byte of SQLDAID is 3 if each column in the result table requires three SQLVAR entries.
(If the statement does contain parameter markers, you must use an SQL descriptor area; for instructions, see Executing arbitrary statements with parameter markers on page 624.)
FULSQLDA has a fixed-length 16-byte header, followed by a varying-length section that consists of structures with the SQLVAR format. If the result table contains LOB columns or distinct type columns, a varying-length section that consists of structures with the SQLVAR2 format follows the structures with SQLVAR format. All SQLVAR structures and SQLVAR2 structures are 44 bytes long. See Appendix E of DB2 SQL Reference for details about the two SQLVAR formats. The number of SQLVAR and SQLVAR2 elements you need is in the SQLD field of MINSQLDA, and the total length you need for FULSQLDA (16 + SQLD * 44) is in the SQLDABC field of MINSQLDA. Allocate that amount of storage.
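In C, the allocation and the second DESCRIBE might look like the following sketch. The pointer names minsqlda and fulsqlda are assumptions for illustration; the sketch assumes that both are declared as host-variable pointers to SQLDA structures (as in the earlier SQLDA examples), that <stdlib.h> is included for malloc, and that the SQLDA declaration was obtained with EXEC SQL INCLUDE SQLDA.

int ncols   = minsqlda->sqld;                 /* Number of columns DESCRIBE reported    */
int fulsize = 16 + ncols * 44;                /* Same value DESCRIBE placed in          */
                                              /* minsqlda->sqldabc                      */
struct sqlda *fulsqlda = (struct sqlda *) malloc(fulsize);
fulsqlda->sqln = ncols;                       /* SQLVAR occurrences now allocated        */
EXEC SQL DESCRIBE STMT INTO :*fulsqlda;       /* Ask DB2 for the descriptions again      */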
After the DESCRIBE statement executes, each occurrence of SQLVAR in the full-size SQLDA (FULSQLDA in our example) contains a description of one column of the result table in five fields. If an SQLVAR occurrence describes a LOB column or distinct type column, the corresponding SQLVAR2 occurrence contains additional information specific to the LOB or distinct type. Figure 184 shows an SQLDA that describes two columns that are not LOB columns or distinct type columns. See Describing tables with LOB and distinct type columns on page 621 for an example of describing a result table with LOB columns or distinct type columns.
Figure 184. SQL descriptor area after executing DESCRIBE (SQLDA header followed by SQLVAR elements 1 and 2, 44 bytes each, with SQLTYPE values 452 and 453)
Figure 185. SQL descriptor area after executing DESCRIBE (SQLDA header followed by SQLVAR elements 1 and 2, 44 bytes each, with SQLTYPE values 452 and 453)
The first SQLVAR pertains to the first column of the result table (the WORKDEPT column). SQLVAR element 1 indicates that the column contains fixed-length character strings and does not allow null values (SQLTYPE=452); the length attribute is 3. For information about SQLTYPE values, see Appendix E of DB2 SQL Reference. Figure 186 shows the SQLDA after your program acquires storage for the column values and their indicators, and puts the addresses in the SQLDATA fields of the SQLDA.
(SQLDA header followed by SQLVAR elements 1 and 2; the SQLDATA fields now point to the acquired storage areas FLDA CHAR(3) and FLDB CHAR(4))
Figure 186. SQL descriptor area after analyzing descriptions and acquiring storage
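The storage acquisition itself can be a simple loop over the SQLVAR entries. The following C sketch is illustrative only: it assumes that fulsqlda is a host-variable pointer to the full SQLDA, uses malloc for each column value and indicator, and ignores data-type-specific sizing (for example, a VARCHAR column also needs 2 bytes for its length field).

for (i = 0; i < fulsqlda->sqld; i++) {
  struct sqlvar *v = &fulsqlda->sqlvar[i];
  v->sqldata = (char *)  malloc(v->sqllen);       /* Storage for the column value */
  v->sqlind  = (short *) malloc(sizeof(short));   /* Storage for the indicator    */
}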
Figure 187 shows the SQLDA after your program executes a FETCH statement.
Figure 187. SQL descriptor area after executing FETCH (the storage areas FLDA and FLDB, addressed by the SQLDATA fields of SQLVAR elements 1 and 2, now contain the WORKDEPT and PHONENO values)
Table 76. Values inserted in the SQLDA (continued)

Value   Field     Description
200     SQLD      The number of occurrences of SQLVAR actually used by the DESCRIBE statement
452     SQLTYPE   The value of SQLTYPE in the first occurrence of SQLVAR. It indicates that the first column contains fixed-length character strings, and does not allow nulls.
        SQLLEN    The length attribute of the column
        SQLDATA   Bytes 3 and 4 contain the CCSID of a string column. Undefined for other types of columns.
        SQLNAME   The number of characters in the column name
        SQLNAME   The column name of the first column
The initial value of this special register is the application encoding scheme that is determined by the BIND option.
v For static and dynamic SQL statements that use host variables and host variable arrays, use the DECLARE VARIABLE statement to associate CCSIDs with the host variables into which you retrieve the data. See Changing the coded character set ID of host variables on page 85 for information about this technique.
v For static and dynamic SQL statements that use a descriptor, set the CCSID for the retrieved data in the SQLDA. The following text describes that technique.

To change the encoding scheme for SQL statements that use a descriptor, set up the SQLDA, and then make these additional changes to the SQLDA:
1. Put the character '+' in the sixth byte of field SQLDAID.
2. For each SQLVAR entry:
   v Set the length field of SQLNAME to 8.
   v Set the first two bytes of the data field of SQLNAME to X'0000'.
   v Set the third and fourth bytes of the data field of SQLNAME to the CCSID, in hexadecimal, in which you want the results to display, or to X'0000'. X'0000' indicates that DB2 should use the default CCSID.

If you specify a nonzero CCSID, it must meet one of the following conditions:
- A row in catalog table SYSSTRINGS has a matching value for OUTCCSID.
- The Unicode conversion services support conversion to that CCSID. See z/OS C/C++ Programming Guide for information about the conversions supported.

If you are modifying the CCSID to retrieve the contents of an ASCII, EBCDIC, or Unicode table on a DB2 UDB for z/OS system, and you previously executed a DESCRIBE statement on the SELECT statement that you are using to retrieve the data, the SQLDATA fields in the SQLDA that you used for the DESCRIBE contain the ASCII or Unicode CCSID for that table. To set the data portion of the SQLNAME fields for the SELECT, move the contents of each SQLDATA field in the SQLDA from the DESCRIBE to each SQLNAME field in the SQLDA for the SELECT. If you are using the same SQLDA for the DESCRIBE and the SELECT, be sure to move the contents of the SQLDATA field to SQLNAME before you modify the SQLDATA field for the SELECT.

For REXX, you set the CCSID in the stem.n.SQLUSECCSID field instead of setting the SQLDAID and SQLNAME fields.

For example, suppose that the table that contains WORKDEPT and PHONENO is defined with CCSID ASCII. To retrieve data for columns WORKDEPT and PHONENO in ASCII CCSID 437 (X'01B5'), change the SQLDA as shown in Figure 188.
(SQLDA header followed by SQLVAR elements 1 and 2; the SQLNAME fields now specify CCSID 437 for the values returned into FLDA CHAR(3) and FLDB CHAR(4))
Figure 188. SQL descriptor area for retrieving data in ASCII CCSID 437
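In C, the changes to the SQLDA might look like the following sketch. It is illustrative only: it assumes that fulsqlda is a host-variable pointer to the SQLDA that was set up for the FETCH and that <string.h> is included for memset.

fulsqlda->sqldaid[5] = '+';                     /* Sixth byte of SQLDAID             */
for (i = 0; i < fulsqlda->sqld; i++) {
  struct sqlvar *v = &fulsqlda->sqlvar[i];
  v->sqlname.length = 8;
  memset(v->sqlname.data, 0, 8);                /* First two bytes are X'0000'       */
  v->sqlname.data[2] = 0x01;                    /* Bytes 3 and 4 hold the CCSID:     */
  v->sqlname.data[3] = (char) 0xB5;             /* X'01B5' is ASCII CCSID 437        */
}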
In this case, SQLNAME contains nothing for a column with no label. If you prefer to use labels wherever they exist, but column names where there are no labels, write USING ANY. (Some columns, such as those derived from functions or expressions, have neither name nor label; SQLNAME contains nothing for those columns. For example, if you use a UNION to combine two columns that do not have the same name and do not use a label, SQLNAME contains a string of length zero.)

You can also write USING BOTH to obtain the name and the label when both exist. However, to obtain both, you need a second set of occurrences of SQLVAR in FULSQLDA. The first set contains descriptions of all the columns using names; the second set contains descriptions using labels. This means that you must allocate a longer SQLDA for the second DESCRIBE statement ((16 + SQLD * 88) bytes instead of (16 + SQLD * 44)). You must also put double the number of columns (SQLD * 2) in the SQLN field of the second SQLDA. Otherwise, if not enough space is available, DESCRIBE does not enter descriptions of any of the columns.
The USER column cannot contain nulls and is of distinct type ID, defined like this:
CREATE DISTINCT TYPE SCHEMA1.ID AS CHAR(20);
The A_DOC column can contain nulls and is of type CLOB(1M). The result table for this statement has two columns, but you need four SQLVAR occurrences in your SQLDA because the result table contains a LOB type and a distinct type. Suppose that you prepare and describe this statement into FULSQLDA, which is large enough to hold four SQLVAR occurrences. FULSQLDA looks like Figure 189.
(SQLDA header followed by SQLVAR elements 1 and 2 and SQLVAR2 elements 1 and 2, 44 bytes each; SQLTYPE values 452 and 409; the CLOB length is 1 048 576)
Figure 189. SQL descriptor area after describing a CLOB and distinct type
The next steps are the same as for result tables without LOBs or distinct types:
1. Analyze each SQLVAR description to determine the maximum amount of space you need for the column value. For a LOB type, retrieve the length from the SQLLONGL field instead of the SQLLEN field.
2. Derive the address of some storage area of the required size. For a LOB data type, you also need a 4-byte storage area for the length of the LOB data. You can allocate this 4-byte area at the beginning of the LOB data or in a different location.
3. Put this address in the SQLDATA field. For a LOB data type, if you allocated a separate area to hold the length of the LOB data, put the address of the length field in SQLDATAL. If the length field is at the beginning of the LOB data area, put 0 in SQLDATAL.
4. If the SQLTYPE field indicates that the value can be null, the program must also put the address of an indicator variable in the SQLIND field.

Figure 190 shows the contents of FULSQLDA after you fill in pointers to the storage locations.
Figure 190. SQL descriptor area after analyzing CLOB and distinct type descriptions and acquiring storage
Figure 191 on page 623 shows the contents of FULSQLDA after you execute a FETCH statement.
Figure 191. SQL descriptor area after executing FETCH on a table with CLOB and distinct type columns
For cases when there are parameter markers, see Executing arbitrary statements with parameter markers on page 624.
The key feature of this statement is the clause USING DESCRIPTOR :FULSQLDA. That clause names an SQL descriptor area in which the occurrences of SQLVAR point to other areas. Those other areas receive the values that FETCH returns. It is possible to use that clause only because you previously set up FULSQLDA to look like Figure 185 on page 618. Figure 187 on page 618 shows the result of the FETCH. The data areas identified in the SQLVAR fields receive the values from a single row of the result table. Successive executions of the same FETCH statement put values from successive rows of the result table into these same areas.
When COMMIT ends the unit of work containing OPEN, the statement in STMT reverts to the unprepared state. Unless you defined the cursor using the WITH HOLD option, you must prepare the statement again before you can reopen the cursor.
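A cursor declaration that keeps the cursor open across a commit point might look like the following statement. This is a sketch only; C1 and STMT are the names used in this chapter's examples.

EXEC SQL DECLARE C1 CURSOR WITH HOLD FOR STMT;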
v If the SQL statement is SELECT, name a list of host variables in the OPEN statement:
WRONG: EXEC SQL OPEN C1;
RIGHT: EXEC SQL OPEN C1 USING :VAR1, :VAR2, :VAR3;
In both cases, the number and types of host variables named must agree with the number of parameter markers in STMT and the types of parameter they represent. The first variable (VAR1 in the examples) must have the type expected for the first parameter marker in the statement, the second variable must have the type expected for the second marker, and so on. There must be at least as many variables as parameter markers.
You must fill in certain fields in DPARM before using EXECUTE or OPEN; you can ignore the other fields. Use each field as follows when describing host variables for parameter markers:

SQLDAID
  The seventh byte indicates whether more than one SQLVAR entry is used for each parameter marker. If this byte is not blank, at least one parameter marker represents a distinct type or LOB value, so the SQLDA has more than one set of SQLVAR entries. You do not set this field for a REXX SQLDA.
SQLDABC
  The length of the SQLDA, which is equal to SQLN * 44 + 16. You do not set this field for a REXX SQLDA.
SQLN
  The number of occurrences of SQLVAR allocated for DPARM. You do not set this field for a REXX SQLDA.
SQLD
  The number of occurrences of SQLVAR actually used. This number must not be less than the number of parameter markers.

In each occurrence of SQLVAR, put information in the following fields: SQLTYPE, SQLLEN, SQLDATA, SQLIND.

SQLTYPE
  The code for the type of variable, and whether it allows nulls.
SQLLEN
  The length of the host variable.
SQLDATA
  The address of the host variable. For REXX, this field contains the value of the host variable.
SQLIND
  The address of an indicator variable, if needed. For REXX, this field contains a negative number if the value in SQLDATA is null.
SQLNAME
  Ignore.
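Filling in DPARM for a single character parameter marker might look like the following C sketch. The names dparm and HVAL are assumptions for illustration: dparm is assumed to be a host-variable pointer to an SQLDA whose SQLDAID, SQLDABC, and SQLN fields were set when it was allocated, and HVAL is assumed to be a 6-byte character array.

dparm->sqld = 1;                           /* One parameter marker in the statement  */
dparm->sqlvar[0].sqltype = 452;            /* Fixed-length character, no nulls       */
dparm->sqlvar[0].sqllen  = 6;              /* Length of the host variable            */
dparm->sqlvar[0].sqldata = HVAL;           /* Address of the character host variable */
EXEC SQL EXECUTE S1 USING DESCRIPTOR :*dparm;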
PREPAREs can cause performance to degrade if your program contains many dynamic non-SELECT statements. To improve performance, consider putting the code that contains those statements in a separate package and then binding that package with the option REOPT(NONE).
v If you execute a DESCRIBE statement before you open a cursor for that statement, DB2 prepares the statement twice. If, however, you execute a DESCRIBE statement after you open the cursor, DB2 prepares the statement only once. To improve the performance of a program bound with the option REOPT(ALWAYS), execute the DESCRIBE statement after you open the cursor. To prevent an automatic DESCRIBE before a cursor is opened, do not use a PREPARE statement with the INTO clause.
v If you use predictive governing for applications bound with REOPT(ALWAYS), DB2 does not return a warning SQLCODE when dynamic SQL statements exceed the predictive governing warning threshold. DB2 does return an error SQLCODE when dynamic SQL statements exceed the predictive governing error threshold. DB2 returns the error SQLCODE for an EXECUTE or OPEN statement.

When you specify the bind option REOPT(ONCE), DB2 optimizes the access path only once, at the first EXECUTE or OPEN, for SQL statements that contain host variables, parameter markers, or special registers. The option REOPT(ONCE) has the following effects on dynamic SQL statements:
v When you specify the option REOPT(ONCE), DB2 automatically uses DEFER(PREPARE), which means that DB2 waits to prepare a statement until it encounters an OPEN or EXECUTE statement.
v When DB2 prepares a statement using REOPT(ONCE), it saves the access path in the dynamic statement cache. This access path is used each time the statement is run, until the statement that is in the cache is invalidated (or removed from the cache) and needs to be rebound.
v The DESCRIBE statement has the following effects on dynamic statements that are bound with REOPT(ONCE):
  - When you execute a DESCRIBE statement before an EXECUTE statement on a non-SELECT statement, DB2 prepares the statement twice if it is not already saved in the cache: once for the DESCRIBE statement and once for the EXECUTE statement. DB2 uses the values of the input variables only during the second prepare. It then saves the statement in the cache.
  - If you execute a DESCRIBE statement before an EXECUTE statement on a non-SELECT statement that has already been saved in the cache, DB2 prepares the non-SELECT statement only for the DESCRIBE statement.
  - If you execute DESCRIBE on a statement before you open a cursor for that statement, DB2 always prepares the statement on DESCRIBE. However, DB2 will not prepare the statement again on OPEN if the statement has already been saved in the cache.
  - If you execute DESCRIBE on a statement after you open a cursor for that statement, DB2 prepares the statement only once if it is not already saved in the cache. If the statement is already saved in the cache and you execute DESCRIBE after you open a cursor for that statement, DB2 does not prepare the statement; it uses the statement that is saved in the cache.
  To improve the performance of a program that is bound with REOPT(ONCE), execute the DESCRIBE statement after you open a cursor. To prevent an automatic DESCRIBE before a cursor is opened, do not use a PREPARE statement with the INTO clause. (A sketch of this ordering follows the list below.)
v If you use predictive governing for applications that are bound with REOPT(ONCE), DB2 does not return a warning SQLCODE when dynamic SQL statements exceed the predictive governing warning threshold. DB2 does return an error SQLCODE when dynamic SQL statements exceed the predictive governing error threshold. DB2 returns the error SQLCODE for an EXECUTE or OPEN statement.
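The recommended ordering for a program bound with REOPT(ONCE) might look like the following sketch. It is illustrative only; STMT, C1, DSTRING, and FULSQLDA are the names used earlier in this chapter.

EXEC SQL DECLARE C1 CURSOR FOR STMT;
EXEC SQL PREPARE STMT FROM :DSTRING;       /* No INTO clause, so no automatic DESCRIBE */
EXEC SQL OPEN C1;                          /* DB2 prepares and optimizes here          */
EXEC SQL DESCRIBE STMT INTO :FULSQLDA;     /* Uses the statement already in the cache  */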
for each statement. This results in increased network traffic and processor costs.
Figure 192 (processing without stored procedures: the client sends each SQL statement, such as EXEC SQL UPDATE and EXEC SQL INSERT, to DB2 for z/OS separately, and DB2 performs SQL processing for each one)
Figure 193 shows processing with stored procedures. The same series of SQL statements that are illustrated in Figure 192 uses a single send and receive operation, reducing network traffic and the cost of processing these statements.
Figure 193 (processing with stored procedures: the client issues a single call; stored procedure PROCX, running in the DB2 stored procedures region on the z/OS system, executes EXEC SQL DECLARE C1, EXEC SQL OPEN C1, EXEC SQL UPDATE, and so on, and DB2 performs the SQL processing)
1. Receives a set of parameters containing the data for one row of the employee to project activity table (DSN8810.EMPPROJACT). These parameters are input parameters in the SQL statement CALL:
   v EMP: employee number
   v PRJ: project number
   v ACT: activity ID
   v EMT: percent of employee's time required
   v EMS: date the activity starts
   v EME: date the activity is due to end
2. Declares a cursor, C1, with the option WITH RETURN, that is used to return a result set containing all rows in EMPPROJACT to the workstation application that called the stored procedure.
3. Queries table EMPPROJACT to determine whether a row exists where columns PROJNO, ACTNO, EMSTDATE, and EMPNO match the values of parameters PRJ, ACT, EMS, and EMP. (The table has a unique index on those columns. There is at most one row with those values.)
4. If the row exists, executes an SQL statement UPDATE to assign the values of parameters EMT and EME to columns EMPTIME and EMENDATE.
5. If the row does not exist (SQLCODE +100), executes an SQL statement INSERT to insert a new row with all the values in the parameter list.
6. Opens cursor C1. This causes the result set to be returned to the caller when the stored procedure ends.
7. Returns two parameters, containing these values:
   v A code to identify the type of SQL statement last executed: UPDATE or INSERT.
   v The SQLCODE from that statement.

Figure 194 on page 632 illustrates the steps that are involved in executing this stored procedure.
Figure 194 (stored procedure execution overview: the user workstation issues EXEC SQL CONNECT TO LOCA and EXEC SQL CALL A(:EMP, :PRJ, :ACT, :EMT, :EMS, :EME, :TYPE, :CODE); the DB2 system creates a thread, gets information from SYSIBM.SYSROUTINES, prepares the parameter list, and passes control to the stored procedure; the stored procedure declares cursor C1 WITH RETURN FOR SELECT * FROM EMPPROJACT, uses SQL UPDATE to update EMPPROJACT with the input parameter values or, if SQLCODE=+100, uses SQL INSERT to add a row with the values in the parameter list, and opens C1; DB2 returns the output parameters :TYPE and :CODE and a result set that contains all rows in EMPPROJACT; control returns to the application, which issues EXEC SQL COMMIT or ROLLBACK)
Notes to Figure 194:
1. The workstation application uses the SQL CONNECT statement to create a conversation with DB2.
2. DB2 creates a DB2 thread to process SQL requests.
3. The SQL statement CALL tells the DB2 server that the application is going to run a stored procedure. The calling application provides the necessary parameters.
4. The plan for the client application contains information from catalog table SYSROUTINES about stored procedure A. DB2 caches all rows in the table associated with A, so future references to A do not require I/O to the table.
5. DB2 passes information about the request to the stored procedures address space, and the stored procedure begins execution.
6. The stored procedure executes SQL statements. DB2 verifies that the owner of the package or plan containing the SQL statement CALL has EXECUTE authority for the package associated with the DB2 stored procedure.
   One of the SQL statements opens a cursor that has been declared WITH RETURN. This causes a result set to be returned to the workstation application when the procedure ends.
   Any SQLCODE that is issued within an external stored procedure is not returned to the workstation application in the SQLCA (as the result of the CALL statement).
7. If an error is not encountered, the stored procedure assigns values to the output parameters and exits. Control returns to the DB2 stored procedures address space, and from there to the DB2 system. If the stored procedure definition contains COMMIT ON RETURN NO, DB2 does not commit or roll back any changes from the SQL in the stored procedure until the calling program executes an explicit COMMIT or ROLLBACK statement. If the stored procedure definition contains COMMIT ON RETURN YES, and the stored procedure executed successfully, DB2 commits all changes.
8. Control returns to the calling application, which receives the output parameters and the result set. DB2 then:
   v Closes all cursors that the stored procedure opened, except those that the stored procedure opened to return result sets.
   v Discards all SQL statements that the stored procedure prepared.
   v Reclaims the working storage that the stored procedure used.
   The application can call more stored procedures, or it can execute more SQL statements. DB2 receives and processes the COMMIT or ROLLBACK request. The COMMIT or ROLLBACK operation covers all SQL operations, whether executed by the application or by stored procedures, for that unit of work.
   If the application involves IMS or CICS, similar processing occurs based on the IMS or CICS sync point rather than on an SQL COMMIT or ROLLBACK statement.
9. DB2 returns a reply message to the application describing the outcome of the COMMIT or ROLLBACK operation.
10. The workstation application executes the following steps to retrieve the contents of table EMPPROJACT, which the stored procedure has returned in a result set (a sketch of these statements follows this list):
    a. Declares a result set locator for the result set being returned.
    b. Executes the ASSOCIATE LOCATORS statement to associate the result set locator with the result set.
    c. Executes the ALLOCATE CURSOR statement to associate a cursor with the result set.
    d. Executes the FETCH statement with the allocated cursor multiple times to retrieve the rows in the result set.
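The retrieval steps in note 10 might look like the following embedded SQL sketch. It is illustrative only: it assumes that loc1 has been declared as a result set locator host variable, that C2 is a cursor name chosen for the allocated cursor, and that the host variables in the FETCH correspond to the columns of EMPPROJACT.

EXEC SQL ASSOCIATE LOCATORS (:loc1) WITH PROCEDURE A;
EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :loc1;
for (;;) {
  EXEC SQL FETCH C2 INTO :EMPNO, :PROJNO, :ACTNO, :EMPTIME, :EMSTDATE, :EMENDATE;
  if (SQLCODE != 0) break;            /* SQLCODE 100 means no more rows */
  /* ... process the row ... */
}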
Table 77 lists characteristics of a stored procedure and the corresponding CREATE or ALTER PROCEDURE parameters. The characteristics in the first part of the table include the parameter style, the address space for stored procedures, the package collection, the WLM environment, how long a stored procedure can run, whether the load module stays in memory, the program type, and security.
Table 77. Characteristics of a stored procedure (continued)

Characteristic                                      CREATE/ALTER PROCEDURE parameter
Commit work on return from stored procedure         COMMIT ON RETURN YES
                                                    COMMIT ON RETURN NO
Call with null arguments                            CALLED ON NULL INPUT
Pass DB2 environment information                    NO DBINFO
                                                    DBINFO4
Encoding scheme for all string parameters           PARAMETER CCSID EBCDIC
                                                    PARAMETER CCSID ASCII
                                                    PARAMETER CCSID UNICODE
For procedures that are defined as LANGUAGE C       PARAMETER VARCHAR NULTERM
procedures, the representation of VARCHAR           PARAMETER VARCHAR STRUCTURE5
parameters
Number of abnormal terminations before the          STOP AFTER SYSTEM DEFAULT FAILURES
stored procedure is stopped                         STOP AFTER n FAILURES
                                                    CONTINUE AFTER FAILURE
Notes:
1. This value is invalid for a REXX stored procedure.
2. This value is ignored for a REXX stored procedure. Specifying PROGRAM TYPE SUB with REXX will not result in an error; however, a value of MAIN will be stored in the DB2 catalog and used at runtime.
3. This value is ignored for a REXX stored procedure.
4. DBINFO is valid only with PARAMETER STYLE SQL.
5. The PARAMETER VARCHAR clause can be specified in CREATE PROCEDURE statements only.
For a complete explanation of the parameters in a CREATE PROCEDURE or ALTER PROCEDURE statement, see Chapter 2 of DB2 SQL Reference.
Subsystem code page
  A 48-byte structure that consists of 10 integer fields and an eight-byte reserved area. These fields provide information about the CCSIDs of the subsystem from which the stored procedure is invoked.
Table qualifier length
  An unsigned 2-byte integer field. This field contains 0.
Table qualifier
  A 128-byte character field. This field is not used for stored procedures.
Table name length
  An unsigned 2-byte integer field. This field contains 0.
Table name
  A 128-byte character field. This field is not used for stored procedures.
Column name length
  An unsigned 2-byte integer field. This field contains 0.
Column name
  A 128-byte character field. This field is not used for stored procedures.
Product information
  An 8-byte character field that identifies the product on which the stored procedure executes. This field has the form pppvvrrm, where:
  v ppp is a 3-byte product code:
    ARI  DB2 Server for VSE & VM
    DSN  DB2 UDB for z/OS
    QSQ  DB2 UDB for iSeries
    SQL  DB2 UDB for Linux, UNIX, and Windows
  v vv is a two-digit version identifier.
  v rr is a two-digit release identifier.
  v m is a one-digit maintenance level identifier.
Reserved area
  2 bytes.
Operating system
  A 4-byte integer field. It identifies the operating system on which the program that invokes the user-defined function runs. The value is one of these:
  0    Unknown
  1    OS/2
  3    Windows
  4    AIX
  5    Windows NT
  6    HP-UX
  7    Solaris
  8    z/OS
  13   Siemens Nixdorf
  15   Windows 95
  16   SCO UNIX
  18   Linux
  19   DYNIX/ptx
  24   Linux for S/390
  25   Linux for zSeries
  26   Linux/IA64
  27   Linux/PPC
  28   Linux/PPC64
  29   Linux/AMD64
  400  iSeries
Number of entries in table function column list
  An unsigned 2-byte integer field. This field contains 0.
Reserved area
  26 bytes.
Table function column list pointer
  This field is not used for stored procedures.
Unique application identifier
  This field is a pointer to a string that uniquely identifies the application's connection to DB2. The string is regenerated for each connection to DB2.
  The string is the LUWID, which consists of a fully-qualified LU network name followed by a period and an LUW instance number. The LU network name consists of a one- to eight-character network ID, a period, and a one- to eight-character network LU name. The LUW instance number consists of 12 hexadecimal characters that uniquely identify the unit of work.
Reserved area
  20 bytes.

See Linkage conventions on page 684 for an example of coding the DBINFO parameter list in a stored procedure.
v It is part of the WLM application environment named PAYROLL.
v It runs as a main program.
v It does not access non-DB2 resources, so it does not need a special RACF environment.
v It can return at most 10 result sets.
v When control returns to the client program, DB2 should not commit updates automatically.

This CREATE PROCEDURE statement defines the stored procedure to DB2:
CREATE PROCEDURE B(IN V1 INTEGER, OUT V2 CHAR(9))
  LANGUAGE C
  DETERMINISTIC
  NO SQL
  EXTERNAL NAME SUMMOD
  COLLID SUMCOLL
  ASUTIME LIMIT 900
  PARAMETER STYLE GENERAL WITH NULLS
  STAY RESIDENT NO
  RUN OPTIONS 'MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)'
  WLM ENVIRONMENT PAYROLL
  PROGRAM TYPE MAIN
  SECURITY DB2
  DYNAMIC RESULT SETS 10
  COMMIT ON RETURN NO;
Later, you need to make the following changes to the stored procedure definition:
v It selects data from DB2 tables but does not modify DB2 data.
v The parameters can have null values, and the stored procedure can return a diagnostic string.
v The length of time the stored procedure runs should not be limited.
v If the stored procedure is called by another stored procedure or a user-defined function, the stored procedure uses the WLM environment of the caller.

Execute this ALTER PROCEDURE statement to make the necessary changes:
ALTER PROCEDURE B
  READS SQL DATA
  ASUTIME NO LIMIT
  PARAMETER STYLE SQL
  WLM ENVIRONMENT (PAYROLL,*);
For DB2-established address spaces: Use the DB2 commands START PROCEDURE and STOP PROCEDURE to perform all of these tasks.

For WLM-established address spaces:
v If WLM is operating in goal mode:
  Use this z/OS command to refresh a WLM environment when you need to load a new version of a stored procedure. Refreshing the WLM environment starts a new instance of each address space that is active for this WLM environment. Existing address spaces stop when the current requests that are executing in those address spaces complete.
VARY WLM,APPLENV=name,REFRESH
name is the name of a WLM application environment associated with a group of stored procedures. When you execute this command, you affect all stored procedures that are associated with the application environment. You can call the DB2-supplied stored procedure WLM_REFRESH to refresh a WLM environment from a remote workstation. For information about WLM_REFRESH, see WLM environment refresh stored procedure (WLM_REFRESH) on page 1129.

Use this z/OS command to stop all stored procedures address spaces that are associated with WLM application environment name. The address spaces stop when the current requests that are executing in those address spaces complete.
VARY WLM,APPLENV=name,QUIESCE
Use this z/OS command to start all stored procedures address spaces that are associated with WLM application environment name. New address spaces start when all JCL changes are established. Until that time, work requests that use the new address spaces are queued.
VARY WLM,APPLENV=name,RESUME
See z/OS MVS Planning: Workload Management for more information about the command VARY WLM.
v If WLM is operating in compatibility mode: Use this z/OS command to stop a WLM-established stored procedures address space.
CANCEL address-space-name
Use this z/OS command to start a WLM-established stored procedures address space.
START address-space-name
In compatibility mode, you must stop and start stored procedures address spaces when you refresh Language Environment.
2. Define WLM application environments for groups of stored procedures and associate a JCL startup procedure with each application environment. See Part 5 (Volume 2) of DB2 Administration Guide for information about how to do this.
3. Enter the DB2 command STOP PROCEDURE(*) to stop all activity in the DB2-established stored procedures address space.
4. For each stored procedure, execute ALTER PROCEDURE with the WLM ENVIRONMENT parameter to specify the name of the application environment.
5. Relink all of your existing stored procedures with DSNRLI, the language interface module for the Resource Recovery Services attachment facility (RRSAF). Use JCL and linkage editor control statements similar to those shown in Figure 195.
//LINKRRS EXEC PGM=IEWL,
//         PARM='LIST,XREF,MAP'
//SYSPRINT DD SYSOUT=*
//SYSLIB   DD DISP=SHR,DSN=USER.RUNLIB.LOAD
//         DD DISP=SHR,DSN=DSN810.SDSNLOAD
//SYSLMOD  DD DISP=SHR,DSN=USER.RUNLIB.LOAD
//SYSUT1   DD SPACE=(1024,(50,50)),UNIT=SYSDA
//SYSLIN   DD *
  ENTRY STORPROC
  REPLACE DSNALI
  INCLUDE SYSLIB(DSNRLI)
  INCLUDE SYSLMOD(STORPROC)
  NAME STORPROC(R)
6. If WLM is operating in compatibility mode, start the new WLM-established stored procedures address spaces by using this z/OS command:
START address-space-name
Environment. Your COBOL and C++ stored procedures can contain object-oriented extensions. See Coding considerations for C and C++ on page 186 and Coding considerations for object-oriented extensions in COBOL on page 219 for information about including object-oriented extensions in SQL applications. For a list of the minimum compiler and Language Environment requirements, see DB2 Release Planning Guide. For information about writing Java stored procedures, see DB2 Application Programming Guide and Reference for Java. For information about writing REXX stored procedures, see Writing a REXX stored procedure on page 654.

The program that calls the stored procedure can be in any language that supports the SQL CALL statement. ODBC applications can use an escape clause to pass a stored procedure call to DB2.
v A single copy of the stored procedure can be shared by multiple tasks in the stored procedures address space. This decreases the amount of virtual storage used for code in the stored procedures address space.

To prepare a stored procedure as reentrant, compile it as reentrant and link-edit it as reentrant and reusable. For instructions on compiling programs to be reentrant, see the appropriate language manual. For information about using the binder to produce reentrant and reusable load modules, see z/OS MVS: Program Management User's Guide and Reference.

To make a reentrant stored procedure remain resident in storage, specify STAY RESIDENT YES in the CREATE PROCEDURE or ALTER PROCEDURE statement for the stored procedure.

If your stored procedure cannot be reentrant, link-edit it as non-reentrant and non-reusable. The non-reusable attribute prevents multiple tasks from using a single copy of the stored procedure at the same time. A non-reentrant stored procedure must not remain in storage. You therefore need to specify STAY RESIDENT NO in the CREATE PROCEDURE or ALTER PROCEDURE statement for the stored procedure.
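For example, after a procedure has been compiled and link-edited as reentrant and reusable, you might make it resident with a statement such as the following sketch (the procedure name is only an illustration):

ALTER PROCEDURE MYSCHEMA.SUMPROC
  STAY RESIDENT YES;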
Table 78. Characteristics of main programs and subprograms

Assembler
   Main program: MAIN=YES is specified in the invocation of the CEEENTRY macro.
   Subprogram: MAIN=NO is specified in the invocation of the CEEENTRY macro.
C
   Main program: Contains a main() function. Pass parameters to it through argc and argv.
   Subprogram: A fetchable function. Pass parameters to it explicitly.
COBOL
   Main program: A COBOL program that ends with GOBACK.
   Subprogram: A dynamically loaded subprogram that ends with GOBACK.
  /*************************************************************/
  /* Receive input parameter values into local variables.      */
  /*************************************************************/
  strcpy(parm1,p1);
  parm2 = *p2;
  parm3 = *p3;
  /*************************************************************/
  /* Perform operations on local variables.                    */
  /*************************************************************/
  . . .
  /*************************************************************/
  /* Set values to be passed back to the caller.               */
  /*************************************************************/
  strcpy(parm1,"SETBYSP");
  parm2 = 100;
  parm3 = 200;
  /*************************************************************/
  /* Copy values to output parameters.                         */
  /*************************************************************/
  strcpy(p1,parm1);
  *p2 = parm2;
  *p3 = parm3;
}

Figure 196. A C stored procedure coded as a subprogram (Part 2 of 2)
  /*************************************************************/
  /* Receive input parameter values into local variables.      */
  /*************************************************************/
  strcpy(parm1,p1);
  parm2 = *p2;
  parm3 = *p3;
  /*************************************************************/
  /* Perform operations on local variables.                    */
  /*************************************************************/
  . . .
  /*************************************************************/
  /* Set values to be passed back to the caller.               */
  /*************************************************************/
  strcpy(parm1,"SETBYSP");
  parm2 = 100;
  parm3 = 200;
  /*************************************************************/
  /* Copy values to output parameters.                         */
  /*************************************************************/
  strcpy(p1,parm1);
  *p2 = parm2;
  *p3 = parm3;
}

Figure 197. A C++ stored procedure coded as a subprogram (Part 2 of 2)
A stored procedure that runs in a DB2-established address space must contain a main program.
A ROLLBACK statement has the same effect on cursors in a stored procedure as it has on cursors in stand-alone programs. A ROLLBACK statement closes all open cursors. A COMMIT statement in a stored procedure closes cursors that are not declared WITH HOLD, and leaves cursors open that are declared WITH HOLD. The effect of COMMIT or ROLLBACK on cursors applies to cursors that are declared in the calling application, as well as cursors that are declared in the stored procedure.

Under the following conditions, you cannot include COMMIT or ROLLBACK statements in a stored procedure:
v The stored procedure is nested within a trigger or a user-defined function.
v The stored procedure is called by a client that uses two-phase commit processing.
v The client program uses a type 2 connection to connect to the remote server that contains the stored procedure.
v DB2 is not the commit coordinator.

If a COMMIT or ROLLBACK statement in a stored procedure violates any of the previous conditions, DB2 puts the transaction in a must-rollback state, and the CALL statement returns SQLCODE -751.
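For example, in a stored procedure that is permitted to issue COMMIT, the following sketch (the cursor names and query are illustrative only) declares one cursor WITH HOLD and one without:

EXEC SQL DECLARE HOLDCSR CURSOR WITH HOLD FOR
  SELECT EMPNO, SALARY FROM DSN8810.EMP;
EXEC SQL DECLARE TEMPCSR CURSOR FOR
  SELECT EMPNO, SALARY FROM DSN8810.EMP;
EXEC SQL OPEN HOLDCSR;
EXEC SQL OPEN TEMPCSR;
EXEC SQL COMMIT;

After the COMMIT, HOLDCSR remains open, but TEMPCSR is closed.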
Table 79. Characteristics of special registers in a stored procedure

For each special register, the table shows the initial value when the INHERIT SPECIAL REGISTERS option is specified, the initial value when the DEFAULT SPECIAL REGISTERS option is specified, and whether the procedure can use SET to modify the register. The registers covered include CURRENT CLIENT_ACCTNG, CURRENT CLIENT_APPLNAME, CURRENT CLIENT_USERID, CURRENT CLIENT_WRKSTNNAME, CURRENT APPLICATION ENCODING SCHEME, CURRENT DATE, CURRENT DEGREE, CURRENT LOCALE LC_CTYPE, CURRENT MEMBER, CURRENT PACKAGESET, CURRENT PRECISION, CURRENT RULES, CURRENT SCHEMA, CURRENT SQLID, CURRENT TIME, CURRENT TIMESTAMP, CURRENT TIMEZONE, ENCRYPTION PASSWORD, and USER. The notes that follow apply to individual entries in the table.
Notes to Table 79:
1. If the ENCODING bind option is not specified, the initial value is the value that was specified in field APPLICATION ENCODING of installation panel DSNTIPF.
2. If the stored procedure is invoked within the scope of a trigger, DB2 uses the timestamp for the triggering SQL statement as the timestamp for all SQL statements in the function package.
3. DB2 allows parallelism at only one level of a nested SQL statement. If you set the value of the CURRENT DEGREE special register to ANY, and parallelism is disabled, DB2 ignores the CURRENT DEGREE value.
4. If the stored procedure definer specifies a value for COLLID in the CREATE PROCEDURE statement, DB2 sets CURRENT PACKAGESET to the value of COLLID.
5. Not applicable because no SET statement exists for the special register.
6. If a program within the scope of the invoking application issues a SET statement for the special register before the stored procedure is invoked, the special register inherits the value from the SET statement. Otherwise, the special register contains the value that is set by the bind option for the stored procedure package.
7. If a program within the scope of the invoking application issues a SET CURRENT SQLID statement before the stored procedure is invoked, the special register inherits the value from the SET statement. Otherwise, CURRENT SQLID contains the authorization ID of the application process.
8. If the stored procedure package uses a value other than RUN for the DYNAMICRULES bind option, the SET CURRENT SQLID statement can be executed but does not affect the authorization ID that is used for the dynamic SQL statements in the stored procedure package. The DYNAMICRULES value determines the authorization ID that is used for dynamic SQL statements. See Using DYNAMICRULES to specify behavior of dynamic SQL statements on page 502 for more information about DYNAMICRULES values and authorization IDs.
9. If the stored procedure definer specifies a value for COLLID in the CREATE PROCEDURE statement, DB2 sets CURRENT PACKAGE PATH to an empty string.
DB2 does not return result sets for cursors that are closed before the stored procedure terminates. The stored procedure must execute a CLOSE statement for each cursor associated with a result set that should not be returned to the DRDA client.

Example: Declaring a cursor to return a result set: Suppose you want to return a result set that contains entries for all employees in department D11. First, declare a cursor that describes this subset of employees:
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
  SELECT * FROM DSN8810.EMP
  WHERE WORKDEPT='D11';
DB2 returns the result set and the name of the SQL cursor for the stored procedure to the client.

Use meaningful cursor names for returning result sets: The name of the cursor that is used to return result sets is made available to the client application through extensions to the DESCRIBE statement. See Writing a DB2 UDB for z/OS client program or SQL procedure to receive result sets on page 708 for more information. Use cursor names that are meaningful to the DRDA client application, especially when the stored procedure returns multiple result sets.

Objects from which you can return result sets: You can use any of these objects in the SELECT statement that is associated with the cursor for a result set:
v Tables, synonyms, views, created temporary tables, declared temporary tables, and aliases defined at the local DB2 subsystem
v Tables, synonyms, views, created temporary tables, and aliases defined at remote DB2 UDB for z/OS systems that are accessible through DB2 private protocol access

Returning a subset of rows to the client: If you execute FETCH statements with a result set cursor, DB2 does not return the fetched rows to the client program. For example, if you declare a cursor WITH RETURN and then execute the statements OPEN, FETCH, and FETCH, the client receives data beginning with the third row in the result set. If the result set cursor is scrollable and you fetch rows with it, you need to position the cursor before the first row of the result table after you fetch the rows and before the stored procedure ends.

Using a temporary table to return result sets: You can use a created temporary table or declared temporary table to return result sets from a stored procedure. This capability can be used to return nonrelational data to a DRDA client. For example, you can access IMS data from a stored procedure in the following way:
v Use APPC/MVS to issue an IMS transaction.
v Receive the IMS reply message, which contains data that should be returned to the client.
v Insert the data from the reply message into a temporary table.
v Open a cursor against the temporary table. When the stored procedure ends, the rows from the temporary table are returned to the client.
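A minimal sketch of this pattern in embedded SQL follows; the declared temporary table name, its columns, and the host variables :SEQNO and :MSGTEXT are illustrative only:

EXEC SQL DECLARE GLOBAL TEMPORARY TABLE SESSION.REPLYMSGS
  (SEQNO INTEGER, MSGTEXT VARCHAR(250));
EXEC SQL INSERT INTO SESSION.REPLYMSGS
  VALUES (:SEQNO, :MSGTEXT);
EXEC SQL DECLARE MSGCSR CURSOR WITH RETURN FOR
  SELECT SEQNO, MSGTEXT FROM SESSION.REPLYMSGS ORDER BY SEQNO;
EXEC SQL OPEN MSGCSR;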
That allows an application running under authorization ID JONES to call stored procedure SPSCHEMA.STORPRCA.

Preparing a stored procedure to run as an authorized program: If your stored procedure runs in a WLM-established address space, you can run it as a z/OS authorized program. To prepare a stored procedure to run as an authorized program, do these additional things:
v When you link-edit the stored procedure:
  - Indicate that the load module can use restricted system services by specifying the parameter value AC=1.
  - Put the load module for the stored procedure in an APF-authorized library.
v Be sure that the stored procedure runs in an address space with a startup procedure in which all libraries in the STEPLIB concatenation are APF-authorized. Specify an application environment in the WLM ENVIRONMENT parameter of the CREATE PROCEDURE or ALTER PROCEDURE statement for the stored procedure that ensures that the stored procedure runs in an address space with this characteristic.
When you bind a stored procedure:
v Use the command BIND PACKAGE to bind the stored procedure. If you use the option ENABLE to control access to a stored procedure package, you must enable the system connection type of the application that executes the CALL statement.
v The package for the stored procedure does not need to be bound with the plan for the program that calls it.
v The owner of the package that contains the SQL statement CALL must have the EXECUTE privilege on all packages that the stored procedure accesses, including packages named in SET CURRENT PACKAGESET.

The following must exist at the server, as shown in Figure 198:
v A plan or package containing the SQL statement CALL. This package is associated with the client program.
v A package associated with the stored procedure.

The server program might use more than one package. These packages come from two sources:
v A DBRM that you bind several times into several versions of the same package, all with the same package name, which can then reside in different collections.
  Your stored procedure can switch from one version to another by using the statement SET CURRENT PACKAGESET (see the sketch that follows this list).
v A package associated with another program that contains SQL statements that the stored procedure calls.

Important: A package for a subprogram that contains SQL statements must exist at the location where the stored procedure is defined and at the location where the SQL statements are executed.
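For example, a stored procedure whose packages are bound into more than one collection might switch versions with a statement like this sketch (the collection name SUMCOLL2 is only an illustration):

EXEC SQL SET CURRENT PACKAGESET = 'SUMCOLL2';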
Figure 200 on page 655 shows an example of a REXX stored procedure that executes DB2 commands. The stored procedure performs the following actions:
v Receives one input parameter, which contains a DB2 command.
v Calls the IFI COMMAND function to execute the command.
v Extracts the command result messages from the IFI return area and places the messages in a created temporary table. Each row of the temporary table contains a sequence number and the text of one message.
v Opens a cursor to return a result set that contains the command result messages.
v Returns the unformatted contents of the IFI return area in an output parameter.

Figure 199 on page 655 shows the definition of the stored procedure.
CREATE PROCEDURE COMMAND(IN CMDTEXT VARCHAR(254), OUT CMDRESULT VARCHAR(32704))
  LANGUAGE REXX
  EXTERNAL NAME COMMAND
  NO COLLID
  ASUTIME NO LIMIT
  PARAMETER STYLE GENERAL
  STAY RESIDENT NO
  RUN OPTIONS 'TRAP(ON)'
  WLM ENVIRONMENT WLMENV1
  SECURITY DB2
  DYNAMIC RESULT SETS 1
  COMMIT ON RETURN NO;

Figure 199. Definition for REXX stored procedure COMMAND
Figure 200 shows the COMMAND stored procedure that executes DB2 commands.
/* REXX */
PARSE UPPER ARG CMD                 /* Get the DB2 command text */
/* Remove enclosing quotes */
IF LEFT(CMD,2) = """ & RIGHT(CMD,2) = """ THEN
  CMD = SUBSTR(CMD,2,LENGTH(CMD)-2)
ELSE
  IF LEFT(CMD,2) = """" & RIGHT(CMD,2) = """" THEN
    CMD = SUBSTR(CMD,3,LENGTH(CMD)-4)
COMMAND = SUBSTR("COMMAND",1,18," ")
/****************************************************************/
/* Set up the IFCA, return area, and output area for the        */
/* IFI COMMAND call.                                            */
/****************************************************************/
IFCA = SUBSTR('00'X,1,180,'00'X)
IFCA = OVERLAY(D2C(LENGTH(IFCA),2),IFCA,1+0)
IFCA = OVERLAY("IFCA",IFCA,4+1)
RTRNAREASIZE = 262144 /*1048572*/
RTRNAREA = D2C(RTRNAREASIZE+4,4)LEFT(' ',RTRNAREASIZE,' ')
OUTPUT = D2C(LENGTH(CMD)+4,2)||'0000'X||CMD
BUFFER = SUBSTR(" ",1,16," ")
/****************************************************************/
/* Make the IFI COMMAND call.                                   */
/****************************************************************/
ADDRESS LINKPGM "DSNWLIR COMMAND IFCA RTRNAREA OUTPUT"
WRC = RC
RTRN= SUBSTR(IFCA,12+1,4)
REAS= SUBSTR(IFCA,16+1,4)
TOTLEN = C2D(SUBSTR(IFCA,20+1,4))
/****************************************************************/
/* Set up the host command environment for SQL calls.           */
/****************************************************************/
"SUBCOM DSNREXX"                    /* Host cmd env available?  */
IF RC THEN                          /* No--add host cmd env     */
  S_RC = RXSUBCOM('ADD','DSNREXX','DSNREXX')

Figure 200. Example of a REXX stored procedure: COMMAND (Part 1 of 3)
/****************************************************************/
/* Set up SQL statements to insert command output messages      */
/* into a temporary table.                                      */
/****************************************************************/
SQLSTMT='INSERT INTO SYSIBM.SYSPRINT(SEQNO,TEXT) VALUES(?,?)'
ADDRESS DSNREXX "EXECSQL DECLARE C1 CURSOR FOR S1"
IF SQLCODE \= 0 THEN CALL SQLCA
ADDRESS DSNREXX "EXECSQL PREPARE S1 FROM :SQLSTMT"
IF SQLCODE \= 0 THEN CALL SQLCA
/****************************************************************/
/* Extract messages from the return area and insert them into   */
/* the temporary table.                                         */
/****************************************************************/
SEQNO = 0
OFFSET = 4+1
DO WHILE ( OFFSET < TOTLEN )
  LEN = C2D(SUBSTR(RTRNAREA,OFFSET,2))
  SEQNO = SEQNO + 1
  TEXT = SUBSTR(RTRNAREA,OFFSET+4,LEN-4-1)
  ADDRESS DSNREXX "EXECSQL EXECUTE S1 USING :SEQNO,:TEXT"
  IF SQLCODE \= 0 THEN CALL SQLCA
  OFFSET = OFFSET + LEN
END
/****************************************************************/
/* Set up a cursor for a result set that contains the command   */
/* output messages from the temporary table.                    */
/****************************************************************/
SQLSTMT='SELECT SEQNO,TEXT FROM SYSIBM.SYSPRINT ORDER BY SEQNO'
ADDRESS DSNREXX "EXECSQL DECLARE C2 CURSOR FOR S2"
IF SQLCODE \= 0 THEN CALL SQLCA
ADDRESS DSNREXX "EXECSQL PREPARE S2 FROM :SQLSTMT"
IF SQLCODE \= 0 THEN CALL SQLCA
/****************************************************************/
/* Open the cursor to return the message output result set to   */
/* the caller.                                                  */
/****************************************************************/
ADDRESS DSNREXX "EXECSQL OPEN C2"
IF SQLCODE \= 0 THEN CALL SQLCA
S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')    /* REMOVE CMD ENV */
EXIT SUBSTR(RTRNAREA,1,TOTLEN+4)

Figure 200. Example of a REXX stored procedure: COMMAND (Part 2 of 3)
/****************************************************************/
/* Routine to display the SQLCA                                 */
/****************************************************************/
SQLCA:
SAY 'SQLCODE ='SQLCODE
SAY 'SQLERRMC ='SQLERRMC
SAY 'SQLERRP ='SQLERRP
SAY 'SQLERRD ='SQLERRD.1',',
          || SQLERRD.2',',
          || SQLERRD.3',',
          || SQLERRD.4',',
          || SQLERRD.5',',
          || SQLERRD.6
SAY 'SQLWARN ='SQLWARN.0',',
          || SQLWARN.1',',
          || SQLWARN.2',',
          || SQLWARN.3',',
          || SQLWARN.4',',
          || SQLWARN.5',',
          || SQLWARN.6',',
          || SQLWARN.7',',
          || SQLWARN.8',',
          || SQLWARN.9',',
          || SQLWARN.10
SAY 'SQLSTATE='SQLSTATE
SAY 'SQLCODE ='SQLCODE
EXIT 'SQLERRMC ='SQLERRMC';' ,
  || ' SQLERRP ='SQLERRP';' ,
  || ' SQLERRD ='SQLERRD.1',',
  || SQLERRD.2',',
  || SQLERRD.3',',
  || SQLERRD.4',',
  || SQLERRD.5',',
  || SQLERRD.6';' ,
  || ' SQLWARN ='SQLWARN.0',',
  || SQLWARN.1',',
  || SQLWARN.2',',
  || SQLWARN.3',',
  || SQLWARN.4',',
  || SQLWARN.5',',
  || SQLWARN.6',',
  || SQLWARN.7',',
  || SQLWARN.8',',
  || SQLWARN.9',',
  || SQLWARN.10';' ,
  || ' SQLSTATE='SQLSTATE';'

Figure 200. Example of a REXX stored procedure: COMMAND (Part 3 of 3)
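A client program might invoke this stored procedure with a CALL such as the following sketch; the host variables :CMDTEXT (containing, for example, the command -DISPLAY DATABASE(DSNDB04)) and :CMDRESULT are illustrative only, and the client must also process the single result set that COMMAND returns:

EXEC SQL CALL COMMAND(:CMDTEXT, :CMDRESULT);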
v Write a CREATE PROCEDURE statement for the SQL procedure. Then use one of the methods in Preparing an SQL procedure on page 669 to define the SQL procedure to DB2 and create an executable procedure.

This section discusses how to write and prepare an SQL procedure. The following topics are included:
v Comparison of an SQL procedure and an external procedure
v Statements that you can include in a procedure body on page 659
v Terminating statements in an SQL procedure on page 662
v Handling SQL conditions in an SQL procedure on page 663
v Examples of SQL procedures on page 667
v Preparing an SQL procedure on page 669

For information about the syntax of the CREATE PROCEDURE statement and the procedure body, see DB2 SQL Reference.
stored procedure by executing the ALTER PROCEDURE statement. For an SQL procedure, you define the stored procedure to DB2 by preprocessing a CREATE PROCEDURE statement, then executing the CREATE PROCEDURE statement dynamically. As with an external stored procedure, you change the definition by executing the ALTER PROCEDURE statement. You cannot change the procedure body with the ALTER PROCEDURE statement. See Preparing an SQL procedure on page 669 for more information about defining an SQL procedure to DB2. Figure 201 shows a definition for an external stored procedure that is written in COBOL. The stored procedure program, which updates employee salaries, is called UPDSAL.
CREATE PROCEDURE UPDATESALARY1             1
  (IN EMPNUMBR CHAR(10),                   2
   IN RATE DECIMAL(6,2))
  LANGUAGE COBOL                           3
  EXTERNAL NAME UPDSAL;                    4
Notes to Figure 201:
1. The stored procedure name is UPDATESALARY1.
2. The two parameters have data types of CHAR(10) and DECIMAL(6,2). Both are input parameters.
3. LANGUAGE COBOL indicates that this is an external procedure, so the code for the stored procedure is in a separate COBOL program.
4. The name of the load module that contains the executable stored procedure program is UPDSAL.
Notes to Figure 202:
1. The stored procedure name is UPDATESALARY1.
2. The two parameters have data types of CHAR(10) and DECIMAL(6,2). Both are input parameters.
3. LANGUAGE SQL indicates that this is an SQL procedure, so a procedure body follows the other parameters.
4. The procedure body consists of a single SQL UPDATE statement, which updates rows in the employee table.
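A definition consistent with these notes might look like the following sketch; the body shown, an UPDATE that applies RATE to the employee's salary, is an assumption drawn from note 4 rather than a reproduction of the original figure:

CREATE PROCEDURE UPDATESALARY1
  (IN EMPNUMBR CHAR(10),
   IN RATE DECIMAL(6,2))
  LANGUAGE SQL
  UPDATE EMP
    SET SALARY = SALARY * RATE
    WHERE EMPNO = EMPNUMBR;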
Assignment statement
   Assigns a value to an output parameter or to an SQL variable, which is a variable that is defined and used only within a procedure body. The right side of an assignment statement can include SQL built-in functions.
CALL statement
   Calls another stored procedure. This statement is similar to the CALL statement described in Chapter 5 of DB2 SQL Reference, except that the parameters must be SQL variables, parameters for the SQL procedure, or constants.
CASE statement
   Selects an execution path based on the evaluation of one or more conditions. This statement is similar to the CASE expression, which is described in Chapter 2 of DB2 SQL Reference.
GET DIAGNOSTICS statement
   Obtains information about the previous SQL statement that was executed. An example of its usage is shown in Using GET DIAGNOSTICS in a handler on page 664.
GOTO statement
   Transfers program control to a labelled statement.
IF statement
   Selects an execution path based on the evaluation of a condition.
ITERATE statement
   Transfers program control to the beginning of a labelled loop.
LEAVE statement
   Transfers program control out of a loop or a block of code.
LOOP statement
   Executes a statement or group of statements multiple times (see the sketch that follows this list).
REPEAT statement
   Executes a statement or group of statements until a search condition is true.
WHILE statement
   Repeats the execution of a statement or group of statements while a specified condition is true.
Compound statement
   Can contain one or more of any of the other types of statements in this list. In addition, a compound statement can contain SQL variable declarations, condition handlers, or cursor declarations. The order of statements in a compound statement must be:
   1. SQL variable and condition declarations
   2. Cursor declarations
   3. Handler declarations
   4. Procedure body statements (CALL, CASE, IF, LOOP, REPEAT, WHILE, SQL)
SQL statement
   A subset of the SQL statements that are described in Chapter 5 of DB2 SQL Reference. Certain SQL statements are valid in a compound statement, but not valid if the SQL statement is the only statement in the procedure body. Appendix C of DB2 SQL Reference lists the SQL statements that are valid in an SQL procedure.
SIGNAL statement
   Enables an SQL procedure to raise a condition with a specific SQLSTATE and message text. This statement is described in Using SIGNAL or RESIGNAL to raise a condition on page 665.
RESIGNAL statement
   Enables a condition handler within an SQL procedure to raise a condition with a specific SQLSTATE and message text, or to return the same condition that activated the handler. This statement is described in Using SIGNAL or RESIGNAL to raise a condition on page 665.
RETURN statement
   Returns an integer status value for the SQL procedure. This statement is described in Using the RETURN statement for the procedure status on page 665.

See the discussion of the procedure body in DB2 SQL Reference for detailed descriptions and syntax of each of these statements.
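The following fragment is a small sketch of the looping statements in this list working together; the label, variable, and loop logic are illustrative only:

P1: BEGIN
  DECLARE COUNTER INT DEFAULT 0;
  FETCH_LOOP:
  LOOP
    SET COUNTER = COUNTER + 1;
    IF COUNTER = 3 THEN
      ITERATE FETCH_LOOP;
    END IF;
    IF COUNTER >= 5 THEN
      LEAVE FETCH_LOOP;
    END IF;
  END LOOP FETCH_LOOP;
END P1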
The general form of a declaration for an SQL variable that you use as a result set locator is:
DECLARE SQL-variable-name RESULT_SET_LOCATOR VARYING;
SQL variables have these restrictions:
v SQL variable names can be up to 128 bytes in length. They can include alphanumeric characters and the underscore character. Condition names and label names also have these restrictions.
v Because DB2 folds all SQL variables to uppercase, you cannot declare two SQL variables that are the same except for case. For example, you cannot declare two SQL variables named varx and VARX.
v Although it is not recommended, you can specify an SQL reserved word as the name of an SQL parameter, SQL variable, or SQL condition in some contexts. If you specify a reserved word as the name of an SQL parameter, SQL variable, or SQL condition in a context where its use could be ambiguous, specify the name as a delimited identifier.
v When you use an SQL variable in an SQL statement, do not precede the variable with a colon.
v When you call a user-defined function from an SQL procedure, and the user-defined function definition includes parameters of type CHAR, you need to cast the corresponding parameter values in the user-defined function invocation to CHAR to ensure that DB2 invokes the correct function. For example, suppose that an SQL procedure calls user-defined function CVRTNUM, which takes one
input parameter of type CHAR(6). Also suppose that you declare SQL variable EMPNUMBR in the SQL procedure. When you invoke CVRTNUM, cast EMPNUMBR to CHAR:
UPDATE EMP SET EMPNO=CVRTNUM(CHAR(EMPNUMBR)) WHERE EMPNO = EMPNUMBR;
v Within a procedure body, the following rules apply to IN, OUT, and INOUT parameters:
  - You can use a parameter that you define as IN on the left or right side of an assignment statement. However, if you assign a value to an IN parameter, you cannot pass the new value back to the caller. The IN parameter has the same value before and after the SQL procedure is called.
  - You can use a parameter that you define as OUT on the left or right side of an assignment statement. The last value that you assign to the parameter is the value that is returned to the caller.
  - You can use a parameter that you define as INOUT on the left or right side of an assignment statement. The caller determines the first value of the INOUT parameter, and the last value that you assign to the parameter is the value that is returned to the caller.

You can perform any operations on SQL variables that you can perform on host variables in SQL statements.

Qualifying SQL variable names and other object names is a good way to avoid ambiguity. Use the following guidelines to determine when to qualify variable names:
v When you use an SQL procedure parameter in the procedure body, qualify the parameter name with the procedure name.
v Specify a label for each compound statement, and qualify SQL variable names in the compound statement with that label.
v Qualify column names with the associated table or view names.

Recommendation: Because the way that DB2 determines the qualifier for unqualified names might change in the future, qualify all SQL variable names to avoid changing your code later.
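The following fragment is a sketch that applies these qualification guidelines; the procedure, parameter, label, variable, and table names are illustrative only:

CREATE PROCEDURE ADJUSTSAL
  (IN P_EMPNO CHAR(6),
   IN P_RATE DECIMAL(6,2))
  LANGUAGE SQL
  MODIFIES SQL DATA
  P1: BEGIN
    DECLARE NEW_SALARY DECIMAL(9,2);
    SET P1.NEW_SALARY = ADJUSTSAL.P_RATE * 1000;
    UPDATE EMP
      SET SALARY = P1.NEW_SALARY
      WHERE EMP.EMPNO = ADJUSTSAL.P_EMPNO;
  END P1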
In general, the way that a handler works is that when an error occurs that matches condition, the SQL-procedure-statement executes. When the SQL-procedure-statement completes, DB2 performs the action that is indicated by handler-type.

Types of handlers: The handler type determines what happens after the completion of the SQL-procedure-statement. You can declare the handler type to be either CONTINUE or EXIT:

CONTINUE
   Specifies that after SQL-procedure-statement completes, execution continues with the statement after the statement that caused the error.
EXIT
   Specifies that after SQL-procedure-statement completes, execution continues at the end of the compound statement that contains the handler.

Example: CONTINUE handler: This handler sets flag at_end when no more rows satisfy a query. The handler then causes execution to continue after the statement that returned no rows.
DECLARE CONTINUE HANDLER FOR NOT FOUND SET at_end=1;
Example: EXIT handler: This handler places the string 'Table does not exist' into output parameter OUT_BUFFER when condition NO_TABLE occurs. NO_TABLE is previously declared as SQLSTATE 42704 (name is an undefined name). The handler then causes the SQL procedure to exit the compound statement in which the handler is declared.
DECLARE NO_TABLE CONDITION FOR '42704';
 . . .
DECLARE EXIT HANDLER FOR NO_TABLE
  SET OUT_BUFFER='Table does not exist';
Referencing SQLCODE and SQLSTATE in a handler: When an SQL error or warning occurs in an SQL procedure, you might want a handler to reference the SQLCODE or SQLSTATE value and assign the value to an output parameter to be passed back to the caller. Before you can reference SQLCODE or SQLSTATE values in a handler, you must declare the SQLCODE and SQLSTATE as SQL variables. The definitions are:
DECLARE SQLCODE INTEGER;
DECLARE SQLSTATE CHAR(5);
If you want to pass the SQLCODE or SQLSTATE values to the caller, your SQL procedure definition needs to include output parameters for those values. After an error occurs, and before control returns to the caller, you can assign the value of SQLCODE or SQLSTATE to the corresponding output parameter. Example: Assigning SQLCODE to output parameter: Include assignment statements in an SQLEXCEPTION handler to assign the SQLCODE value to an output parameter:
CREATE PROCEDURE UPDATESALARY1
  (IN EMPNUMBR CHAR(6),
   OUT SQLCPARM INTEGER)
  LANGUAGE SQL
  . . .
  BEGIN
    DECLARE SQLCODE INTEGER;
    DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
      SET SQLCPARM = SQLCODE;
    . . .
Every statement in an SQL procedure sets the SQLCODE and SQLSTATE. Therefore, if you need to preserve SQLCODE or SQLSTATE values after a statement executes, use a simple assignment statement to assign the SQLCODE and SQLSTATE values to other variables. A statement like the following one does not preserve SQLCODE:
IF (1=1) THEN SET SQLCDE = SQLCODE;
Because the IF statement is true, the SQLCODE value is reset to 0, and you lose the previous SQLCODE value. Using GET DIAGNOSTICS in a handler: You can include a GET DIAGNOSTICS statement in a handler to retrieve error or warning information. If you include GET DIAGNOSTICS, it must be the first statement that is specified in the handler. Example: Using GET DIAGNOSTICS to retrieve message text: Suppose that you create an SQL procedure, named divide1, that computes the result of the division of two integers. You include GET DIAGNOSTICS to return the text of the division error message as an output parameter:
CREATE PROCEDURE divide1
  (IN numerator INTEGER, IN denominator INTEGER,
   OUT divide_result INTEGER, OUT divide_error VARCHAR(70))
  LANGUAGE SQL
  BEGIN
    DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
      GET DIAGNOSTICS CONDITION 1 divide_error = MESSAGE_TEXT;
    SET divide_result = numerator / denominator;
  END
Using the SIGNAL statement in an SQL procedure: You can use the SIGNAL statement anywhere within an SQL procedure. The following example uses an ORDERS table and a CUSTOMERS table that are defined in the following way:
CREATE TABLE ORDERS
  (ORDERNO    INTEGER NOT NULL,
   PARTNO     INTEGER NOT NULL,
   ORDER_DATE DATE DEFAULT,
   CUSTNO     INTEGER NOT NULL,
   QUANTITY   SMALLINT NOT NULL,
   CONSTRAINT REF_CUSTNO FOREIGN KEY (CUSTNO)
     REFERENCES CUSTOMERS (CUSTNO) ON DELETE RESTRICT,
   PRIMARY KEY (ORDERNO,PARTNO));
CREATE TABLE CUSTOMERS
  (CUSTNO   INTEGER NOT NULL,
   CUSTNAME VARCHAR(30),
   CUSTADDR VARCHAR(80),
   PRIMARY KEY (CUSTNO));
Example: Using SIGNAL to set message text: Suppose that you have an SQL procedure for an order system that signals an application error when a customer number is not known to the application. The ORDERS table has a foreign key to the CUSTOMERS table, which requires that the CUSTNO exist in the CUSTOMERS table before an order can be inserted:
CREATE PROCEDURE submit_order
  (IN ONUM INTEGER, IN PNUM INTEGER,
   IN CNUM INTEGER, IN QNUM INTEGER)
  LANGUAGE SQL
  MODIFIES SQL DATA
  BEGIN
    DECLARE EXIT HANDLER FOR SQLSTATE VALUE '23503'
      SIGNAL SQLSTATE '75002'
        SET MESSAGE_TEXT = 'Customer number is not known';
    INSERT INTO ORDERS (ORDERNO, PARTNO, CUSTNO, QUANTITY)
      VALUES (ONUM, PNUM, CNUM, QNUM);
  END
In this example, the SIGNAL statement is in the handler. However, you can use the SIGNAL statement to invoke a handler when a condition occurs that will result in an error; see the example in Using the RESIGNAL statement in a handler. Using the RESIGNAL statement in a handler: You can use a RESIGNAL statement to assign an SQLSTATE value (to the condition that activated the handler) that is different from the SQLSTATE value that DB2 defined for that condition. Example: Using RESIGNAL to set an SQLSTATE value: Suppose that you create an SQL procedure, named divide2, that computes the result of the division of two integers. You include SIGNAL to invoke the handler with an overflow condition that is caused by a zero divisor, and you include RESIGNAL to set a different SQLSTATE value for that overflow condition:
CREATE PROCEDURE divide2
  (IN numerator INTEGER, IN denominator INTEGER,
   OUT divide_result INTEGER)
  LANGUAGE SQL
  BEGIN
    DECLARE overflow CONDITION FOR SQLSTATE '22003';
    DECLARE CONTINUE HANDLER FOR overflow
      RESIGNAL SQLSTATE '22375';
    IF denominator = 0 THEN
      SIGNAL overflow;
Example: Compound statement with nested IF and WHILE statements: The following example shows a compound statement that includes an IF statement, a WHILE statement, and assignment statements. The example also shows how to declare SQL variables, cursors, and handlers for classes of error codes. The procedure receives a department number as an input parameter. A WHILE statement in the procedure body fetches the salary and bonus for each employee in the department, and uses an SQL variable to calculate a running total of employee salaries for the department. An IF statement within the WHILE statement tests for positive bonuses and increments an SQL variable that counts the number of bonuses in the department. When all employee records in the department have been processed, the FETCH statement that retrieves employee records receives SQLCODE 100. A NOT FOUND condition handler makes the search condition for
the WHILE statement false, so execution of the WHILE statement ends. Assignment statements then assign the total employee salaries and the number of bonuses for the department to the output parameters for the stored procedure. If any SQL statement in the procedure body receives a negative SQLCODE, the SQLEXCEPTION handler receives control. This handler sets output parameter DEPTSALARY to NULL and ends execution of the SQL procedure. When this handler is invoked, the SQLCODE and SQLSTATE are set to 0.
CREATE PROCEDURE RETURNDEPTSALARY
  (IN DEPTNUMBER CHAR(3),
   OUT DEPTSALARY DECIMAL(15,2),
   OUT DEPTBONUSCNT INT)
  LANGUAGE SQL
  READS SQL DATA
  P1: BEGIN
    DECLARE EMPLOYEE_SALARY DECIMAL(9,2);
    DECLARE EMPLOYEE_BONUS DECIMAL(9,2);
    DECLARE TOTAL_SALARY DECIMAL(15,2) DEFAULT 0;
    DECLARE BONUS_CNT INT DEFAULT 0;
    DECLARE END_TABLE INT DEFAULT 0;
    DECLARE C1 CURSOR FOR
      SELECT SALARY, BONUS FROM CORPDATA.EMPLOYEE
        WHERE WORKDEPT = DEPTNUMBER;
    DECLARE CONTINUE HANDLER FOR NOT FOUND
      SET END_TABLE = 1;
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
      SET DEPTSALARY = NULL;
    OPEN C1;
    FETCH C1 INTO EMPLOYEE_SALARY, EMPLOYEE_BONUS;
    WHILE END_TABLE = 0 DO
      SET TOTAL_SALARY = TOTAL_SALARY + EMPLOYEE_SALARY + EMPLOYEE_BONUS;
      IF EMPLOYEE_BONUS > 0 THEN
        SET BONUS_CNT = BONUS_CNT + 1;
      END IF;
      FETCH C1 INTO EMPLOYEE_SALARY, EMPLOYEE_BONUS;
    END WHILE;
    CLOSE C1;
    SET DEPTSALARY = TOTAL_SALARY;
    SET DEPTBONUSCNT = BONUS_CNT;
  END P1
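A calling program might invoke this procedure as in the following sketch, where :DEPTNUM contains a department number such as 'D11' and the other host variable names are illustrative only:

EXEC SQL CALL RETURNDEPTSALARY(:DEPTNUM, :DEPTSAL, :BONUSCNT);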
Example: Compound statement with dynamic SQL statements: The following example shows a compound statement that includes dynamic SQL statements. The procedure receives a department number (P_DEPT) as an input parameter. In the compound statement, three statement strings are built, prepared, and executed:
v The first statement string executes a DROP statement to ensure that the table to be created does not already exist. This table is named DEPT_deptno_T, where deptno is the value of input parameter P_DEPT.
v The next statement string executes a CREATE statement to create DEPT_deptno_T.
v The third statement string inserts rows for employees in department deptno into DEPT_deptno_T.

Just as statement strings that are prepared in host language programs cannot contain host variables, statement strings in SQL procedures cannot contain SQL variables or stored procedure parameters. Therefore, the third statement string contains a parameter marker that represents P_DEPT. When the prepared statement is executed, parameter P_DEPT is substituted for the parameter marker.
CREATE PROCEDURE CREATEDEPTTABLE (IN P_DEPT CHAR(3))
  LANGUAGE SQL
  BEGIN
    DECLARE STMT CHAR(1000);
    DECLARE MESSAGE CHAR(20);
    DECLARE TABLE_NAME CHAR(30);
    DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
      SET MESSAGE = 'ok';
    SET TABLE_NAME = 'DEPT_'||P_DEPT||'_T';
    SET STMT = 'DROP TABLE '||TABLE_NAME;
    PREPARE S1 FROM STMT;
    EXECUTE S1;
    SET STMT = 'CREATE TABLE '||TABLE_NAME||
               '( EMPNO CHAR(6) NOT NULL, '||
               'FIRSTNME VARCHAR(6) NOT NULL, '||
               'MIDINIT CHAR(1) NOT NULL, '||
               'LASTNAME CHAR(15) NOT NULL, '||
               'SALARY DECIMAL(9,2))';
    PREPARE S2 FROM STMT;
    EXECUTE S2;
    SET STMT = 'INSERT INTO '||TABLE_NAME||
               ' SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY'||
               ' FROM EMPLOYEE'||
               ' WHERE WORKDEPT = ?';
    PREPARE S3 FROM STMT;
    EXECUTE S3 USING P_DEPT;
  END
To debug an SQL procedure, you must prepare and call it from a client development platform that includes the SQL Debugger feature. For additional information about IBM DB2 Development Center, see http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp.
Using the DB2 UDB for z/OS SQL procedure processor to prepare an SQL procedure
The SQL procedure processor, DSNTPSMP, is a REXX stored procedure that you can use to prepare an SQL procedure for execution. You can also use DSNTPSMP to perform selected steps in the preparation process or delete an existing SQL procedure. DSNTPSMP is the only preparation method that supports the SQL Debugger.

DSNTPSMP requires that the default EBCDIC CCSID that is used by DB2 also be compatible with the C compiler. Using an incompatible CCSID results in compile-time errors. Examples of incompatible CCSIDs include 290, 930, 1026, and 1155.

The following sections contain information about invoking DSNTPSMP.

Environment for calling and running DSNTPSMP: You can invoke DSNTPSMP only through an SQL CALL statement in an application program or through IBM DB2 Development Center. Before you can run DSNTPSMP, you need to perform the following steps to set up the DSNTPSMP environment:
1. Install DB2 UDB for z/OS REXX Language Support feature. Contact your IBM service representative for more information.
2. If you plan to call DSNTPSMP directly, write and prepare an application program that executes an SQL CALL statement for DSNTPSMP. See Invoking DSNTPSMP in an application program on page 673 for more information. If you plan to invoke DSNTPSMP through the IBM DB2 Development Center, see the following URL for information about installing and using the IBM DB2 Development Center.
http://www.redbooks.ibm.com/abstracts/sg247083.html
3. Set up a WLM environment in which to run DSNTPSMP. See Part 5 (Volume 2) of DB2 Administration Guide for general information about setting up WLM application environments for stored procedures and Setting up a WLM application environment for DSNTPSMP for specific information for DSNTPSMP.

Setting up a WLM application environment for DSNTPSMP: You must run DSNTPSMP in a WLM-established stored procedures address space. You should run only DSNTPSMP in that address space, and you must limit the address space to run only one task concurrently (see the first note for Figure 203 on page 671 for information regarding NUMTCB). Figure 203 on page 671 shows sample JCL for a startup procedure for the address space in which DSNTPSMP runs.
//DSN8WLMP PROC DB2SSN=DSN,NUMTCB=1,APPLENV=WLMTPSMP          1
//*
//WLMTPSMP EXEC PGM=DSNX9WLM,TIME=1440,                       2
//         PARM='&DB2SSN,&NUMTCB,&APPLENV',
//         REGION=0M,DYNAMNBR=10
//STEPLIB  DD DISP=SHR,DSN=DSN810.SDSNEXIT                    3
//         DD DISP=SHR,DSN=DSN810.SDSNLOAD
//         DD DISP=SHR,DSN=CBC.SCCNCMP
//         DD DISP=SHR,DSN=CEE.SCEERUN
//SYSEXEC  DD DISP=SHR,DSN=DSN810.SDSNCLST                    4
//SYSTSPRT DD SYSOUT=A
//CEEDUMP  DD SYSOUT=A
//SYSABEND DD DUMMY
//*
//SQLDBRM  DD DISP=SHR,DSN=DSN810.DBRMLIB.DATA                5
//SQLCSRC  DD DISP=SHR,DSN=USER.PSMLIB.DATA                   6
//SQLLMOD  DD DISP=SHR,DSN=DSN810.RUNLIB.LOAD                 7
//SQLLIBC  DD DISP=SHR,DSN=CEE.SCEEH.H                        8
//         DD DISP=SHR,DSN=CEE.SCEEH.SYS.H
//SQLLIBL  DD DISP=SHR,DSN=CEE.SCEELKED                       9
//         DD DISP=SHR,DSN=DSN810.SDSNLOAD
//SYSMSGS  DD DISP=SHR,DSN=CEE.SCEEMSGP(EDCPMSGE)            10
//* DSNTPSMP Configuration File - CFGTPSMP (optional)
//* A site-provided sequential dataset or member, used to
//* define customized operation of DSNTPSMP in this APPLENV
//*
//*CFGTPSMP DD DISP=SHR,DSN=                                 11
//*
//SQLSRC   DD UNIT=SYSALLDA,SPACE=(800,(20,20)),             12
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SQLPRINT DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
//         DCB=(RECFM=VB,LRECL=137,BLKSIZE=882)
//SQLTERM  DD UNIT=SYSALLDA,SPACE=(4000,(20,20)),
//         DCB=(RECFM=VB,LRECL=137,BLKSIZE=882)
//SQLOUT   DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SQLCPRT  DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
//         DCB=(RECFM=VB,LRECL=137,BLKSIZE=882)
//SQLUT1   DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SQLUT2   DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SQLCIN   DD UNIT=SYSALLDA,SPACE=(8000,(20,20))
//SQLLIN   DD UNIT=SYSALLDA,SPACE=(3200,(30,30)),
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SYSMOD   DD UNIT=SYSALLDA,SPACE=(8000,(20,20)),
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SQLDUMMY DD DUMMY

Figure 203. Startup procedure for a WLM address space in which DSNTPSMP runs
Notes to Figure 203:
1. APPLENV specifies the application environment in which DSNTPSMP runs. To ensure that DSNTPSMP always uses the correct data sets and parameters for preparing each SQL procedure, you can set up different application environments for preparing different types of SQL procedures. For example, if all payroll applications use the same set of data sets during program preparation, you could set up an application environment called PAYROLL for preparing only payroll applications. The startup procedure for PAYROLL would point to the data sets that are used for payroll applications. DB2SSN specifies the DB2 subsystem name. NUMTCB specifies the number of programs that can run concurrently in the address space. You should always set NUMTCB to 1 to ensure that executions of DSNTPSMP occur serially.
2. WLMTPSMP specifies the address space in which DSNTPSMP runs. DYNAMNBR allows for dynamic allocation.
3. STEPLIB specifies the Language Environment run-time library that DSNTPSMP uses when it runs.
4. SYSEXEC specifies the library that contains DSNTPSMP.
5. SQLDBRM specifies the library into which DSNTPSMP puts the DBRM that it generates when it precompiles your SQL procedure.
6. SQLCSRC specifies the library into which DSNTPSMP puts the C source code that it generates from the SQL procedure source code. This data set should have a logical record length of 80.
7. SQLLMOD specifies the library into which DSNTPSMP puts the load module that it generates when it compiles and link-edits your SQL procedure.
8. SQLLIBC specifies the library that contains standard C header files. This library is used during compilation of the generated C program.
9. SQLLIBL specifies the following libraries, which DSNTPSMP uses when it link-edits the SQL procedure:
   v Language Environment link-edit library
   v DB2 load library
10. SYSMSGS specifies the library that contains messages that are used by the C prelink-edit utility.
11. CFGTPSMP specifies an optional data set that you can use to customize DSNTPSMP, including specifying the compiler level. For details on all of the options that you can set in this file and how to set them, see the DSNTPSMP CLIST comments.
12. The DD statements that follow describe work file data sets that are used by DSNTPSMP.
Authorizations to execute DSNTPSMP: You must have the following authorizations to invoke DSNTPSMP:
v Procedure privilege to run application programs that invoke the stored procedure:
  EXECUTE ON PROCEDURE SYSPROC.DSNTPSMP
v Collection privilege to use BIND to create packages in the specified collection:
  CREATE ON COLLECTION collection-id
  You can use an asterisk (*) as the identifier for a collection.
v Package privilege to use BIND or REBIND to bind packages in the specified collection:
  BIND ON PACKAGE collection-id.*
v System privilege to use BIND with the ADD option to create packages and plans:
  BINDADD
v Schema privilege to create, alter, or drop stored procedures in the specified schema:
  CREATEIN, ALTERIN, DROPIN ON SCHEMA schema-name
  The BUILDOWNER authorization ID must have the CREATEIN privilege on the schema. You can use an asterisk (*) as the identifier for a schema.
v Table privileges to select or delete from, insert into, or update these tables:
  SELECT ON TABLE SYSIBM.SYSROUTINES
  SELECT ON TABLE SYSIBM.SYSPARMS
  SELECT, INSERT, UPDATE, DELETE ON TABLE SYSIBM.SYSROUTINES_SRC
  SELECT, INSERT, UPDATE, DELETE ON TABLE SYSIBM.SYSROUTINES_OPTS
  ALL ON TABLE SYSIBM.SYSPSMOUT

In addition, the authorizations must include any privileges required for the SQL statements that are contained within the SQL procedure-body. These privileges must be associated with the OWNER authorization-id that is specified in your bind options. The default owner is the user that is invoking DSNTPSMP.

Invoking DSNTPSMP in an application program: In an application program, you can invoke DSNTPSMP through an SQL CALL statement. To prepare the program that calls DSNTPSMP, you need to precompile, compile, and link-edit the application program as usual, and then bind a package for that program.

Figure 204 and Figure 205 show the syntax of invoking DSNTPSMP through the SQL CALL statement:
CALL SYSPROC.DSNTPSMP (function, SQL-procedure-name, SQL-procedure-source,
                       bind-options, compiler-options, precompiler-options,
                       prelink-options, link-options, alter-statement,
                       source-data-set-name, build-owner, build-utility,
                       return-code)
Note: You must specify:
v The DSNTPSMP parameters in the order listed
v The empty string if an optional parameter is not required for the function
v The options in the order: bind, compiler, precompiler, prelink, and link

The DSNTPSMP parameters are:
function
   A VARCHAR(20) input parameter that identifies the task that you want DSNTPSMP to perform. The tasks are:
   BUILD
      Creates the following objects for an SQL procedure:
      v A DBRM, in the data set that DD name SQLDBRM points to
      v A load module, in the data set that DD name SQLLMOD points to
      v The C language source code for the SQL procedure, in the data set that DD name SQLCSRC points to
      v The stored procedure package
      v The stored procedure definition
      The following input parameters are required for the BUILD function:
         SQL-procedure-name
         SQL-procedure-source or source-data-set-name
      If you choose the BUILD function, and an SQL procedure with name SQL-procedure-name already exists, DSNTPSMP issues an error message and terminates.
   BUILD_DEBUG
      Creates the following objects for an SQL procedure and includes the preparation necessary to debug the SQL procedure with the SQL Debugger:
      v A DBRM, in the data set that DD name SQLDBRM points to
      v A load module, in the data set that DD name SQLLMOD points to
      v The C language source code for the SQL procedure, in the data set that DD name SQLCSRC points to
      v The stored procedure package
      v The stored procedure definition
      The following input parameters are required for the BUILD_DEBUG function:
         SQL-procedure-name
         SQL-procedure-source or source-data-set-name
      If you choose the BUILD_DEBUG function, and an SQL procedure with name SQL-procedure-name already exists, DSNTPSMP issues an error message and terminates.
   REBUILD
      Replaces all objects that were created by the BUILD function for an SQL procedure, if it exists, otherwise creates those objects.
      The following input parameters are required for the REBUILD function:
         SQL-procedure-name
         SQL-procedure-source or source-data-set-name
   REBUILD_DEBUG
      Replaces all objects that were created by the BUILD_DEBUG function for an SQL procedure, if it exists, otherwise creates those objects, and includes the preparation necessary to debug the SQL procedure with the SQL Debugger.
      The following input parameters are required for the REBUILD_DEBUG function:
         SQL-procedure-name
         SQL-procedure-source or source-data-set-name
   REBIND
      Binds the SQL procedure package for an existing SQL procedure.
      The following input parameter is required for the REBIND function:
         SQL-procedure-name
   DESTROY
      Deletes the following objects for an existing SQL procedure:
      v The DBRM, from the data set that DD name SQLDBRM points to
      v The load module, from the data set that DD name SQLLMOD points to
      v The C language source code for the SQL procedure, from the data set that DD name SQLCSRC points to
      v The stored procedure package
      v The stored procedure definition
      The following input parameter is required for the DESTROY function:
         SQL-procedure-name
   ALTER
      Updates the registration for an existing SQL procedure.
      The following input parameters are required for the ALTER function:
         SQL-procedure-name
         alter-statement
   ALTER_REBUILD
      Updates an existing SQL procedure.
      The following input parameters are required for the ALTER_REBUILD function:
         SQL-procedure-name
         SQL-procedure-source or source-data-set-name
   ALTER_REBUILD_DEBUG
      Updates an existing SQL procedure, and includes the preparation necessary to debug the SQL procedure with the SQL Debugger.
      The following input parameters are required for the ALTER_REBUILD_DEBUG function:
         SQL-procedure-name
         SQL-procedure-source or source-data-set-name
   ALTER_REBIND
      Updates the registration and binds the SQL package for an existing SQL procedure.
      The following input parameters are required for the ALTER_REBIND function:
         SQL-procedure-name
         alter-statement
   QUERYLEVEL
      Obtains the interface level of the build utility invoked. No other input is required.

SQL-procedure-name
   A VARCHAR(261) input parameter that specifies the SQL procedure name. The name can be qualified or unqualified. The name must match the procedure name that is specified within the CREATE PROCEDURE statement that is
   provided in SQL-procedure-source or that is obtained from source-data-set-name. In addition, the name must match the procedure name that is specified within the ALTER PROCEDURE statement that is provided in alter-statement. Do not mix qualified and unqualified references.
SQL-procedure-source
   A CLOB(2M) input parameter that contains the CREATE PROCEDURE statement for the SQL procedure. If you specify an empty string for this parameter, you need to specify the name source-data-set-name of a data set that contains the SQL procedure source code.
bind-options
   A VARCHAR(1024) input parameter that contains the options that you want to specify for binding the SQL procedure package. Do not specify the MEMBER or LIBRARY option for the DB2 BIND PACKAGE command. For a list of valid bind options for the DB2 BIND PACKAGE command, see Part 3 of DB2 Command Reference.
compiler-options
   A VARCHAR(255) input parameter that contains the options that you want to specify for compiling the C language program that DB2 generates for the SQL procedure. For a list of valid compiler options, see z/OS C/C++ User's Guide.
precompiler-options
   A VARCHAR(255) input parameter that contains the options that you want to specify for precompiling the C language program that DB2 generates for the SQL procedure. Do not specify the HOST option. For a list of valid precompiler options, see Table 64 on page 484.
prelink-options
   A VARCHAR(255) input parameter that contains the options that you want to specify for prelinking the C language program that DB2 generates for the SQL procedure. For a list of valid prelink options, see z/OS C/C++ User's Guide.
link-options
   A VARCHAR(255) input parameter that contains the options that you want to specify for linking the C language program that DB2 generates for the SQL procedure. For a list of valid link options, see z/OS MVS: Program Management User's Guide and Reference.
alter-statement
   A VARCHAR(32672) input parameter that contains the SQL ALTER PROCEDURE statement to process with the ALTER or ALTER_REBIND function.
source-data-set-name
   A VARCHAR(80) input parameter that contains the name of a z/OS sequential data set or partitioned data set member that contains the source code for the SQL procedure. If you specify an empty string for this parameter, you need to provide the SQL procedure source code in SQL-procedure-source.
build-owner
   A VARCHAR(130) input parameter that contains the SQL identifier to serve as the build owner for newly created SQL stored procedures. When this parameter is not specified, the value defaults to the value in the CURRENT SQLID special register when the build utility is invoked.
build-utility
   A VARCHAR(255) input parameter that contains the name of the build utility that is invoked. The qualified form of the name is suggested, for example, SYSPROC.DSNTPSMP.
return-code
   A VARCHAR(255) output parameter in which DB2 puts the return code from the DSNTPSMP invocation. The values are:
   0     Successful invocation. The calling application can optionally retrieve the result set and then issue the required SQL COMMIT statement.
   4     Successful invocation, but warnings occurred. The calling application should retrieve the warning messages in the result set and then issue the required SQL COMMIT statement.
   8     Failed invocation. The calling application should retrieve the error messages in the result set and then issue the required SQL ROLLBACK statement.
   12    Failed invocation with severe errors. The calling application should retrieve the error messages in the result set and then issue the required SQL ROLLBACK statement. To view error messages that are not in the result set, see the job log of the address space for the DSNTPSMP execution.
   1.20  Level of DSNTPSMP when the request is QUERYLEVEL. The calling application can retrieve the result set for additional information about the release and service level and then issue the required SQL COMMIT statement.
Result set that DSNTPSMP returns: DSNTPSMP returns one result set that contains messages and listings. You can write your client program to retrieve information from this result set. This technique is shown in Writing a DB2 UDB for z/OS client program or SQL procedure to receive result sets on page 708. Each row of the result set contains the following information:
Processing step
   The step in the function process to which the message applies.
ddname
   The ddname of the data set that contains the message.
Sequence number
   The sequence number of a line of message text within a message.
Message
   A line of message text.
Rows in the message result set are ordered by processing step, ddname, and sequence number.
Completing the requested DSNTPSMP action: The calling application must issue either an SQL COMMIT statement or an SQL ROLLBACK statement after the DSNTPSMP request. A return value of 0 or 4 requires the COMMIT statement. Any other return value requires the ROLLBACK statement. You must process the result set before issuing the COMMIT or ROLLBACK statement. A QUERYLEVEL request must be followed by the COMMIT statement.
Examples of DSNTPSMP invocation: The following examples illustrate invoking the BUILD, DESTROY, REBUILD, and REBIND functions of DSNTPSMP. DSNTPSMP BUILD function: Call DSNTPSMP to build an SQL procedure. The information that DSNTPSMP needs is listed in Table 80:
Table 80. Information that DSNTPSMP needs to BUILD an SQL stored procedure

Function                 BUILD
SQL procedure name       MYSCHEMA.SQLPROC
Source location          String in CLOB host variable procsrc
Bind options             VALIDATE(BIND)
Compiler options         SOURCE, LIST, LONGNAME, RENT
Precompiler options      SOURCE, XREF, STDSQL(NO)
Prelink options          None specified
Link options             AMODE=31, RMODE=ANY, MAP, RENT
Build utility            SYSPROC.DSNTPSMP
Return value             String returned in varying-length host variable returnval
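A sketch of the corresponding CALL statement, following the parameter order described earlier (function, procedure name, source, bind, compiler, precompiler, prelink, and link options, alter statement, source data set, build owner, build utility, return code); the host variables procsrc and returnval come from Table 80, and the empty strings stand for parameters that this request does not use:

EXEC SQL CALL SYSPROC.DSNTPSMP('BUILD','MYSCHEMA.SQLPROC',:procsrc,
     'VALIDATE(BIND)',
     'SOURCE,LIST,LONGNAME,RENT',
     'SOURCE,XREF,STDSQL(NO)',
     '',
     'AMODE=31,RMODE=ANY,MAP,RENT',
     '','','','SYSPROC.DSNTPSMP',
     :returnval);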
DSNTPSMP DESTROY function: Call DSNTPSMP to delete an SQL procedure definition and the associated load module. The information that DSNTPSMP needs is listed in Table 81:
Table 81. Information that DSNTPSMP needs to DESTROY an SQL stored procedure

Function                 DESTROY
SQL procedure name       MYSCHEMA.OLDPROC
Return value             String returned in varying-length host variable returnval
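A sketch of the corresponding CALL statement; only the function, the procedure name, and the return-code parameter carry values, and the remaining parameters are passed as empty strings (an assumption that follows the pattern of the other examples in this section):

EXEC SQL CALL SYSPROC.DSNTPSMP('DESTROY','MYSCHEMA.OLDPROC','',
     '','','','','','','','','',
     :returnval);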
DSNTPSMP REBUILD function: Call DSNTPSMP to recreate an existing SQL procedure. The information that DSNTPSMP needs is listed in Table 82:
Table 82. Information that DSNTPSMP needs to REBUILD an SQL stored procedure

Function                 REBUILD
SQL procedure name       MYSCHEMA.SQLPROC
Bind options             VALIDATE(BIND)
Compiler options         SOURCE, LIST, LONGNAME, RENT
Precompiler options      SOURCE, XREF, STDSQL(NO)
Prelink options          None specified
Link options             AMODE=31, RMODE=ANY, MAP, RENT
Source data set name     Member PROCSRC of partitioned data set DSN810.SDSNSAMP
Return value             String returned in varying-length host variable returnval
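A sketch of the corresponding CALL statement, assuming the options in Table 82; it differs from the REBUILD_DEBUG example that follows only in the function name:

EXEC SQL CALL SYSPROC.DSNTPSMP('REBUILD','MYSCHEMA.SQLPROC','',
     'VALIDATE(BIND)',
     'SOURCE,LIST,LONGNAME,RENT',
     'SOURCE,XREF,STDSQL(NO)',
     ' ',
     'AMODE=31,RMODE=ANY,MAP,RENT',
     ' ','DSN810.SDSNSAMP(PROCSRC)','','',
     :returnval);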
If you want to recreate an existing SQL procedure for debugging with the SQL Debugger, use the following CALL statement, which includes the REBUILD_DEBUG function:
EXEC SQL CALL SYSPROC.DSNTPSMP('REBUILD_DEBUG','MYSCHEMA.SQLPROC','',
     'VALIDATE(BIND)',
     'SOURCE,LIST,LONGNAME,RENT',
     'SOURCE,XREF,STDSQL(NO)',
     ' ',
     'AMODE=31,RMODE=ANY,MAP,RENT',
     ' ','DSN810.SDSNSAMP(PROCSRC)','','',
     :returnval);
DSNTPSMP REBIND function: Call DSNTPSMP to rebind the package for an existing SQL procedure. The information that DSNTPSMP needs is listed in Table 83:
Table 83. Information that DSNTPSMP needs to REBIND an SQL stored procedure

Function                 REBIND
SQL procedure name       MYSCHEMA.SQLPROC
Bind options             VALIDATE(RUN), ISOLATION(RR)
Return value             String returned in varying-length host variable returnval
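A sketch of the corresponding CALL statement, assuming the bind options in Table 83; the parameters that REBIND does not use are passed as empty strings:

EXEC SQL CALL SYSPROC.DSNTPSMP('REBIND','MYSCHEMA.SQLPROC','',
     'VALIDATE(RUN),ISOLATION(RR)',
     '','','','','','','','',
     :returnval);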
Table 84. SQL procedure samples shipped with DB2 (continued)

Member that contains source code    Purpose
DSNHSQL                             Precompiles, compiles, prelink-edits, and link-edits an SQL procedure
...                                 Invokes JCL procedure DSNHSQL to prepare SQL procedure DSN8ES1 for execution
DSN8ES1                             A stored procedure that accepts a department number as input and returns a result set that contains salary information for each employee in that department
...                                 Prepares client program DSN8ED3 for execution
DSN8ED3                             Calls SQL procedure DSN8ES1
DSN8ES2                             A stored procedure that accepts one input parameter and returns two output parameters. The input parameter specifies a bonus to be awarded to managers. The SQL procedure updates the BONUS column of DSN810.SDSNSAMP. If no SQL error occurs when the SQL procedure runs, the first output parameter contains the total of all bonuses awarded to managers and the second output parameter contains a null value. If an SQL error occurs, the second output parameter contains an SQLCODE.
DSN8ED4                             Calls the SQL procedure processor, DSNTPSMP, to prepare DSN8ES2 for execution
DSN8WLMP                            A sample startup procedure for the WLM-established stored procedures address space in which DSNTPSMP runs
DSN8ED5                             Calls SQL procedure DSN8ES2
DSNTEJ65                            Prepares and executes programs DSN8ED4 and DSN8ED5. DSNTEJ65 uses DSNTPSMP, the SQL procedure processor, which requires that the default EBCDIC CCSID that is used by DB2 also be compatible with the C compiler. Do not run DSNTEJ65 if the default EBCDIC CCSID for DB2 is not compatible with the C compiler. Examples of incompatible CCSIDs include 290, 930, 1026, and 1155.
DSNTIJSD (JCL job)                  Prepares a DB2 UDB for z/OS server for operation with the SQL Debugger
Use any of these methods to execute the CALL statement:
v Execute the CALL statement statically.
v Use an escape clause in an ODBC application to pass the CALL statement to DB2.
v Use any of the DB2 attachment facilities.
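For example, assume that the stored procedure is named A and that the CALL statement passes the employee and project values in individual host variables. A sketch of such a statement (the procedure name and the exact parameter list are assumptions based on the discussion that follows):

EXEC SQL CALL A (:EMP, :PRJ, :ACT, :EMT, :EMS, :EME, :TYPE, :CODE);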
where :EMP, :PRJ, :ACT, :EMT, :EMS, :EME, :TYPE, and :CODE are host variables that you have declared earlier in your application program. Your CALL statement might vary from the preceding statement in the following ways: v Instead of passing each of the employee and project parameters separately, you could pass them together as a host structure. For example, assume that you define a host structure like this:
struct { char EMP[7]; char PRJ[7]; short ACT; short EMT; char EMS[11]; char EME[11]; } empstruc;
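With that structure, the CALL statement might look like the following sketch, in which the single host structure replaces the first six host variables:

EXEC SQL CALL A (:empstruc, :TYPE, :CODE);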
v Suppose that A is in schema SCHEMAA at remote location LOCA. To access A, you could use either of these methods: Execute a CONNECT statement to LOCA, and then execute the CALL statement:
EXEC SQL CONNECT TO LOCA; EXEC SQL CALL SCHEMAA.A (:EMP, :PRJ, :ACT, :EMT, :EMS, :EME, :TYPE, :CODE);
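The second method, sketched here under the assumption that the parameter list is unchanged, is to specify the procedure with a three-part name directly on the CALL statement:

EXEC SQL CALL LOCA.SCHEMAA.A
     (:EMP, :PRJ, :ACT, :EMT, :EMS, :EME, :TYPE, :CODE);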
The advantage of using the second form is that you do not need to execute a CONNECT statement. The disadvantage is that this form of the CALL statement is not portable to other operating systems. If your program executes the ASSOCIATE LOCATORS or DESCRIBE PROCEDURE statements, you must use the same form of the procedure name on the CALL statement and on the ASSOCIATE LOCATORS or DESCRIBE PROCEDURE statement. v The preceding examples assume that none of the input parameters can have null values. To allow null values, code a statement like this:
EXEC SQL CALL A (:EMP :IEMP, :PRJ :IPRJ, :ACT :IACT, :EMT :IEMT, :EMS :IEMS, :EME :IEME, :TYPE :ITYPE, :CODE :ICODE);
where :IEMP, :IPRJ, :IACT, :IEMT, :IEMS, :IEME, :ITYPE, and :ICODE are indicator variables for the parameters.
v You might pass integer or character string constants or the null value to the stored procedure, as in this example:
EXEC SQL CALL A ('000130', 'IF1000', 90, 1.0, NULL, '1982-10-01', :TYPE, :CODE);
v You might use a host variable for the name of the stored procedure:
EXEC SQL CALL :procnm (:EMP, :PRJ, :ACT, :EMT, :EMS, :EME, :TYPE, :CODE);
Assume that the stored procedure name is A. The host variable procnm is a character variable of length 255 or less that contains the value A. You should use this technique if you do not know in advance the name of the stored procedure, but you do know the parameter list convention. v If you prefer to pass your parameters in a single structure, rather than as separate host variables, you might use this form:
EXEC SQL CALL A USING DESCRIPTOR :sqlda;
sqlda is the name of an SQLDA. One advantage of using this form is that you can change the encoding scheme of the stored procedure parameter values. For example, if the subsystem on which the stored procedure runs has an EBCDIC encoding scheme, and you want to retrieve data in ASCII CCSID 437, you can specify the desired CCSIDs for the output parameters in the SQLVAR fields of the SQLDA. This technique for overriding the CCSIDs of parameters is the same as the technique for overriding the CCSIDs of variables, which is described in Changing the CCSID for retrieved data on page 619. When you use this technique, the defined encoding scheme of the parameter must be different from the encoding scheme that you specify in the SQLDA. Otherwise, no conversion occurs. The defined encoding scheme for the parameter is the encoding scheme that you specify in the CREATE PROCEDURE statement, or the default encoding scheme for the subsystem, if you do not specify an encoding scheme in the CREATE PROCEDURE statement. v You might execute the CALL statement by using a host variable name for the stored procedure with an SQLDA:
EXEC SQL CALL :procnm USING DESCRIPTOR :sqlda;
This form gives you extra flexibility because you can use the same CALL statement to call different stored procedures with different parameter lists. Your client program must assign a stored procedure name to the host variable procnm and load the SQLDA with the parameter information before making the SQL CALL. Each of the preceding CALL statement examples uses an SQLDA. If you do not explicitly provide an SQLDA, the precompiler generates the SQLDA based on the variables in the parameter list.
If the stored procedure invokes user-defined functions or triggers, you need additional authorizations to execute the trigger, the user-defined function, and the user-defined function packages. For more information, see the description of the CALL statement in Chapter 5 of DB2 SQL Reference.
Linkage conventions
When an application executes the CALL statement, DB2 builds a parameter list for the stored procedure, using the parameters and values provided in the statement. DB2 obtains information about parameters from the stored procedure definition you create when you execute CREATE PROCEDURE. Parameters are defined as one of these types:
IN      Input-only parameters, which provide values to the stored procedure
OUT     Output-only parameters, which return values from the stored procedure to the calling program
INOUT   Input/output parameters, which provide values to or return values from the stored procedure
If a stored procedure fails to set one or more of the output-only parameters, DB2 does not detect the error in the stored procedure. Instead, DB2 returns the output parameters to the calling program, with the values established on entry to the stored procedure. Initializing output parameters: For a stored procedure that runs locally, you do not need to initialize the values of output parameters before you call the stored procedure. However, when you call a stored procedure at a remote location, the local DB2 cannot determine whether the parameters are input (IN) or output (OUT or INOUT) parameters. Therefore, you must initialize the values of all output parameters before you call a stored procedure at a remote location. It is recommended that you initialize the length of LOB output parameters to zero. Doing so can improve your performance. DB2 supports three parameter list conventions. DB2 chooses the parameter list convention based on the value of the PARAMETER STYLE parameter in the stored procedure definition: GENERAL, GENERAL WITH NULLS, or SQL. v Use GENERAL when you do not want the calling program to pass null values for input parameters (IN or INOUT) to the stored procedure. The stored procedure must contain a variable declaration for each parameter passed in the CALL statement. Figure 206 on page 685 shows the structure of the parameter list for PARAMETER STYLE GENERAL.
v Use GENERAL WITH NULLS to allow the calling program to supply a null value for any parameter passed to the stored procedure. For the GENERAL WITH NULLS linkage convention, the stored procedure must do the following tasks: Declare a variable for each parameter passed in the CALL statement. Declare a null indicator structure containing an indicator variable for each parameter. On entry, examine all indicator variables associated with input parameters to determine which parameters contain null values. On exit, assign values to all indicator variables associated with output variables. An indicator variable for an output variable that returns a null value to the caller must be assigned a negative number. Otherwise, the indicator variable must be assigned the value 0. In the CALL statement, follow each parameter with its indicator variable, using one of the following forms:
host-variable :indicator-variable or host-variable INDICATOR :indicator-variable.
Figure 207 shows the structure of the parameter list for PARAMETER STYLE GENERAL WITH NULLS.
Figure 207. Parameter convention GENERAL WITH NULLS for a stored procedure
v Like GENERAL WITH NULLS, option SQL lets you supply a null value for any parameter that is passed to the stored procedure. In addition, DB2 passes input and output parameters to the stored procedure that contain this information:
– The SQLSTATE that is to be returned to DB2. This is a CHAR(5) parameter that represents the SQLSTATE that is passed in to the program from the database manager. The initial value is set to '00000'. Although the SQLSTATE is usually not set by the program, it can be set as the result SQLSTATE that is used to return an error or a warning. Returned values that start with anything other than 00, 01, or 02 are error conditions. Refer to DB2 Codes for more information about the SQLSTATE values that an application can generate.
– The qualified name of the stored procedure. This is a VARCHAR(27) value.
– The specific name of the stored procedure. The specific name is a VARCHAR(18) value that is the same as the unqualified name.
– The SQL diagnostic string that is to be returned to DB2. This is a VARCHAR(70) value. Use this area to pass descriptive information about an error or warning to the caller.
SQL is not a valid linkage convention for a REXX language stored procedure.
Figure 208 shows the structure of the parameter list for PARAMETER STYLE SQL.
For these examples, assume that a COBOL application has the following parameter declarations and CALL statement:
 ************************************************************
 * PARAMETERS FOR THE SQL STATEMENT CALL                    *
 ************************************************************
 01  V1 PIC S9(9) USAGE COMP.
 01  V2 PIC X(9).
     .
     .
     .
     EXEC SQL CALL A (:V1, :V2) END-EXEC.
In the CREATE PROCEDURE statement, the parameters are defined like this:
IN V1 INT, OUT V2 CHAR(9)
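For context, a minimal sketch of a complete CREATE PROCEDURE statement that uses these parameter definitions with the GENERAL convention; the LANGUAGE, EXTERNAL NAME, and WLM ENVIRONMENT values shown here are placeholders rather than values from this example:

  CREATE PROCEDURE A
     (IN V1 INT, OUT V2 CHAR(9))       -- parameter definitions from this example
     LANGUAGE COBOL                    -- placeholder: any supported host language
     EXTERNAL NAME 'A'                 -- placeholder load module name
     PARAMETER STYLE GENERAL
     WLM ENVIRONMENT WLMENV1;          -- placeholder WLM environment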
The following figures show how an assembler, C, COBOL, and PL/I stored procedure uses the GENERAL linkage convention to receive parameters. Figure 209 shows how a stored procedure in assembler language receives these parameters.
*******************************************************************
*  CODE FOR AN ASSEMBLER LANGUAGE STORED PROCEDURE THAT USES      *
*  THE GENERAL LINKAGE CONVENTION.                                *
*******************************************************************
A        CEEENTRY AUTO=PROGSIZE,MAIN=YES,PLIST=OS
         USING PROGAREA,R13
*******************************************************************
*  BRING UP THE LANGUAGE ENVIRONMENT.                             *
*******************************************************************
         ...
*******************************************************************
*  GET THE PASSED PARAMETER VALUES.  THE GENERAL LINKAGE          *
*  CONVENTION FOLLOWS THE STANDARD ASSEMBLER LINKAGE CONVENTION:  *
*    ON ENTRY, REGISTER 1 POINTS TO A LIST OF POINTERS TO THE     *
*    PARAMETERS.                                                  *
*******************************************************************
         L     R7,0(R1)          GET POINTER TO V1
         MVC   LOCV1(4),0(R7)    MOVE VALUE INTO LOCAL COPY OF V1
         ...
         L     R7,4(R1)          GET POINTER TO V2
         MVC   0(9,R7),LOCV2     MOVE A VALUE INTO OUTPUT VAR V2
         ...
         CEETERM RC=0
*******************************************************************
*  VARIABLE DECLARATIONS AND EQUATES                              *
*******************************************************************
R1       EQU   1                 REGISTER 1
R7       EQU   7                 REGISTER 7
PPA      CEEPPA ,                CONSTANTS DESCRIBING THE CODE BLOCK
         LTORG ,                 PLACE LITERAL POOL HERE
PROGAREA DSECT
         ORG   *+CEEDSASZ        LEAVE SPACE FOR DSA FIXED PART
LOCV1    DS    F                 LOCAL COPY OF PARAMETER V1
LOCV2    DS    CL9               LOCAL COPY OF PARAMETER V2
         ...
PROGSIZE EQU   *-PROGAREA
         CEEDSA ,                MAPPING OF THE DYNAMIC SAVE AREA
         CEECAA ,                MAPPING OF THE COMMON ANCHOR AREA
         END A
Figure 210 shows how a stored procedure in the C language receives these parameters.
#pragma runopts(PLIST(OS))
#pragma options(RENT)
#include <stdlib.h>
#include <stdio.h>
/*****************************************************************/
/* Code for a C language stored procedure that uses the          */
/* GENERAL linkage convention.                                   */
/*****************************************************************/
main(argc,argv)
  int argc;                     /* Number of parameters passed   */
  char *argv[];                 /* Array of strings containing   */
                                /* the parameter values          */
{
  long int locv1;               /* Local copy of V1              */
  char locv2[10];               /* Local copy of V2              */
                                /* (null-terminated)             */
  ...
  /***************************************************************/
  /* Get the passed parameters. The GENERAL linkage convention   */
  /* follows the standard C language parameter passing           */
  /* conventions:                                                */
  /*  - argc contains the number of parameters passed            */
  /*  - argv[0] is a pointer to the stored procedure name        */
  /*  - argv[1] to argv[n] are pointers to the n parameters      */
  /*    in the SQL statement CALL.                               */
  /***************************************************************/
  if(argc==3)                   /* Should get 3 parameters:      */
  {                             /* procname, V1, V2              */
    locv1 = *(int *) argv[1];   /* Get local copy of V1          */
    ...
    strcpy(argv[2],locv2);      /* Assign a value to V2          */
    ...
  }
}
Figure 211 shows how a stored procedure in the COBOL language receives these parameters.
CBL RENT IDENTIFICATION DIVISION. ************************************************************ * CODE FOR A COBOL LANGUAGE STORED PROCEDURE THAT USES THE * * GENERAL LINKAGE CONVENTION. * ************************************************************ PROGRAM-ID. A. . . . DATA DIVISION. . . . LINKAGE SECTION. ************************************************************ * DECLARE THE PARAMETERS PASSED BY THE SQL STATEMENT * * CALL HERE. * ************************************************************ 01 V1 PIC S9(9) USAGE COMP. 01 V2 PIC X(9). . . . PROCEDURE DIVISION USING V1, V2. ************************************************************ * THE USING PHRASE INDICATES THAT VARIABLES V1 AND V2 * * WERE PASSED BY THE CALLING PROGRAM. * ************************************************************ . . . **************************************** * ASSIGN A VALUE TO OUTPUT VARIABLE V2 * **************************************** MOVE 123456789 TO V2.
Figure 212 shows how a stored procedure in the PL/I language receives these parameters.
*PROCESS SYSTEM(MVS); A: PROC(V1, V2) OPTIONS(MAIN NOEXECOPS REENTRANT); /***************************************************************/ /* Code for a PL/I language stored procedure that uses the */ /* GENERAL linkage convention. */ /***************************************************************/ /***************************************************************/ /* Indicate on the PROCEDURE statement that two parameters */ /* were passed by the SQL statement CALL. Then declare the */ /* parameters in the following section. */ /***************************************************************/ DCL V1 BIN FIXED(31), V2 CHAR(9); . . . V2 = 123456789;
/************************************************************/ /* Parameters for the SQL statement CALL */ /************************************************************/ long int v1; char v2[10]; /* Allow an extra byte for */ /* the null terminator */ /************************************************************/ /* Indicator structure */ /************************************************************/ struct indicators { short int ind1; short int ind2; } indstruc; . . . indstruc.ind1 = 0;
/* Remember to initialize the */ /* input parameters indicator*/ /* variable before executing */ /* the CALL statement */ EXEC SQL CALL B (:v1 :indstruc.ind1, :v2 :indstruc.ind2); . . .
In the CREATE PROCEDURE statement, the parameters are defined like this:
IN V1 INT, OUT V2 CHAR(9)
The following figures show how an assembler, C, COBOL, or PL/I stored procedure uses the GENERAL WITH NULLS linkage convention to receive parameters. Figure 213 shows how a stored procedure in assembler language receives these parameters.
*******************************************************************
*  CODE FOR AN ASSEMBLER LANGUAGE STORED PROCEDURE THAT USES      *
*  THE GENERAL WITH NULLS LINKAGE CONVENTION.                     *
*******************************************************************
B        CEEENTRY AUTO=PROGSIZE,MAIN=YES,PLIST=OS
         USING PROGAREA,R13
*******************************************************************
*  BRING UP THE LANGUAGE ENVIRONMENT.                             *
*******************************************************************
         ...
*******************************************************************
*  GET THE PASSED PARAMETER VALUES.  THE GENERAL WITH NULLS       *
*  LINKAGE CONVENTION IS AS FOLLOWS:                              *
*    ON ENTRY, REGISTER 1 POINTS TO A LIST OF POINTERS.  IF N     *
*    PARAMETERS ARE PASSED, THERE ARE N+1 POINTERS.  THE FIRST    *
*    N POINTERS ARE THE ADDRESSES OF THE N PARAMETERS, JUST AS    *
*    WITH THE GENERAL LINKAGE CONVENTION.  THE N+1ST POINTER IS   *
*    THE ADDRESS OF A LIST CONTAINING THE N INDICATOR VARIABLE    *
*    VALUES.                                                      *
*******************************************************************
         L     R7,0(R1)          GET POINTER TO V1
         MVC   LOCV1(4),0(R7)    MOVE VALUE INTO LOCAL COPY OF V1
         L     R7,8(R1)          GET POINTER TO INDICATOR ARRAY
         MVC   LOCIND(2*2),0(R7) MOVE VALUES INTO LOCAL STORAGE
         LH    R7,LOCIND         GET INDICATOR VARIABLE FOR V1
         LTR   R7,R7             CHECK IF IT IS NEGATIVE
         BM    NULLIN            IF SO, V1 IS NULL
         ...
         L     R7,4(R1)          GET POINTER TO V2
         MVC   0(9,R7),LOCV2     MOVE A VALUE INTO OUTPUT VAR V2
         L     R7,8(R1)          GET POINTER TO INDICATOR ARRAY
         MVC   2(2,R7),=H'0'     MOVE ZERO TO V2'S INDICATOR VAR
         ...
         CEETERM RC=0
*******************************************************************
*  VARIABLE DECLARATIONS AND EQUATES                              *
*******************************************************************
R1       EQU   1                 REGISTER 1
R7       EQU   7                 REGISTER 7
PPA      CEEPPA ,                CONSTANTS DESCRIBING THE CODE BLOCK
         LTORG ,                 PLACE LITERAL POOL HERE
PROGAREA DSECT
         ORG   *+CEEDSASZ        LEAVE SPACE FOR DSA FIXED PART
LOCV1    DS    F                 LOCAL COPY OF PARAMETER V1
LOCV2    DS    CL9               LOCAL COPY OF PARAMETER V2
LOCIND   DS    2H                LOCAL COPY OF INDICATOR ARRAY
         ...
PROGSIZE EQU   *-PROGAREA
         CEEDSA ,                MAPPING OF THE DYNAMIC SAVE AREA
         CEECAA ,                MAPPING OF THE COMMON ANCHOR AREA
         END B
Figure 214 shows how a stored procedure in the C language receives these parameters.
#pragma options(RENT) #pragma runopts(PLIST(OS)) #include <stdlib.h> #include <stdio.h> /*****************************************************************/ /* Code for a C language stored procedure that uses the */ /* GENERAL WITH NULLS linkage convention. */ /*****************************************************************/ main(argc,argv) int argc; /* Number of parameters passed */ char *argv[]; /* Array of strings containing */ /* the parameter values */ { long int locv1; /* Local copy of V1 */ char locv2[10]; /* Local copy of V2 */ /* (null-terminated) */ short int locind[2]; /* Local copy of indicator */ /* variable array */ short int *tempint; /* Used for receiving the */ /* indicator variable array */ . . . /***************************************************************/ /* Get the passed parameters. The GENERAL WITH NULLS linkage */ /* convention is as follows: */ /* - argc contains the number of parameters passed */ /* - argv[0] is a pointer to the stored procedure name */ /* - argv[1] to argv[n] are pointers to the n parameters */ /* in the SQL statement CALL. */ /* - argv[n+1] is a pointer to the indicator variable array */ /***************************************************************/ if(argc==4) /* Should get 4 parameters: */ { /* procname, V1, V2, */ /* indicator variable array */ locv1 = *(int *) argv[1]; /* Get local copy of V1 */ tempint = argv[3]; /* Get pointer to indicator */ /* variable array */ locind[0] = *tempint; /* Get 1st indicator variable */ locind[1] = *(++tempint); /* Get 2nd indicator variable */ if(locind[0]<0) /* If 1st indicator variable */ { /* is negative, V1 is null */ . . . } . . . strcpy(argv[2],locv2); *(++tempint) = 0; } } /* Assign a value to V2 */ /* Assign 0 to V2s indicator */ /* variable */
Figure 215 shows how a stored procedure in the COBOL language receives these parameters.
CBL RENT IDENTIFICATION DIVISION. ************************************************************ * CODE FOR A COBOL LANGUAGE STORED PROCEDURE THAT USES THE * * GENERAL WITH NULLS LINKAGE CONVENTION. * ************************************************************ PROGRAM-ID. B. . . . DATA DIVISION. . . . LINKAGE SECTION. ************************************************************ * DECLARE THE PARAMETERS AND THE INDICATOR ARRAY THAT * * WERE PASSED BY THE SQL STATEMENT CALL HERE. * ************************************************************ 01 V1 PIC S9(9) USAGE COMP. 01 V2 PIC X(9). * 01 INDARRAY. 10 INDVAR PIC S9(4) USAGE COMP OCCURS 2 TIMES. . . . PROCEDURE DIVISION USING V1, V2, INDARRAY. ************************************************************ * THE USING PHRASE INDICATES THAT VARIABLES V1, V2, AND * * INDARRAY WERE PASSED BY THE CALLING PROGRAM. * ************************************************************ . . . *************************** * TEST WHETHER V1 IS NULL * *************************** IF INDARRAY(1) < 0 PERFORM NULL-PROCESSING. . . . **************************************** * ASSIGN A VALUE TO OUTPUT VARIABLE V2 * * AND ITS INDICATOR VARIABLE * **************************************** MOVE 123456789 TO V2. MOVE ZERO TO INDARRAY(2).
Figure 216 shows how a stored procedure in the PL/I language receives these parameters.
*PROCESS SYSTEM(MVS); A: PROC(V1, V2, INDSTRUC) OPTIONS(MAIN NOEXECOPS REENTRANT); /***************************************************************/ /* Code for a PL/I language stored procedure that uses the */ /* GENERAL WITH NULLS linkage convention. */ /***************************************************************/ /***************************************************************/ /* Indicate on the PROCEDURE statement that two parameters */ /* and an indicator variable structure were passed by the SQL */ /* statement CALL. Then declare them in the following section.*/ /* For PL/I, you must declare an indicator variable structure, */ /* not an array. */ /***************************************************************/ DCL V1 BIN FIXED(31), V2 CHAR(9); DCL 01 INDSTRUC, 02 IND1 BIN FIXED(15), 02 IND2 BIN FIXED(15); . . . IF IND1 < 0 THEN CALL NULLVAL; . . . V2 = 123456789; IND2 = 0;
/* Remember to initialize the */ /* input parameters indicator*/ /* variable before executing */ /* the CALL statement */ EXEC SQL CALL B (:v1 :ind1, :v2 :ind2); . . .
In the CREATE PROCEDURE statement, the parameters are defined like this:
IN V1 INT, OUT V2 CHAR(9)
The following figures show how an assembler, C, COBOL, or PL/I stored procedure uses the SQL linkage convention to receive parameters.
Figure 217 shows how a stored procedure in assembler language receives these parameters.
*******************************************************************
*  CODE FOR AN ASSEMBLER LANGUAGE STORED PROCEDURE THAT USES      *
*  THE SQL LINKAGE CONVENTION.                                    *
*******************************************************************
B        CEEENTRY AUTO=PROGSIZE,MAIN=YES,PLIST=OS
         USING PROGAREA,R13
*******************************************************************
*  BRING UP THE LANGUAGE ENVIRONMENT.                             *
*******************************************************************
         ...
*******************************************************************
*  GET THE PASSED PARAMETER VALUES.  THE SQL LINKAGE              *
*  CONVENTION IS AS FOLLOWS:                                      *
*    ON ENTRY, REGISTER 1 POINTS TO A LIST OF POINTERS.  IF N     *
*    PARAMETERS ARE PASSED, THERE ARE 2N+4 POINTERS.  THE FIRST   *
*    N POINTERS ARE THE ADDRESSES OF THE N PARAMETERS, JUST AS    *
*    WITH THE GENERAL LINKAGE CONVENTION.  THE NEXT N POINTERS    *
*    ARE THE ADDRESSES OF THE INDICATOR VARIABLE VALUES.  THE     *
*    LAST 4 POINTERS (5, IF DBINFO IS PASSED) ARE THE ADDRESSES   *
*    OF INFORMATION ABOUT THE STORED PROCEDURE ENVIRONMENT AND    *
*    EXECUTION RESULTS.                                           *
*******************************************************************
         L     R7,0(R1)          GET POINTER TO V1
         MVC   LOCV1(4),0(R7)    MOVE VALUE INTO LOCAL COPY OF V1
         L     R7,8(R1)          GET POINTER TO 1ST INDICATOR VARIABLE
         MVC   LOCI1(2),0(R7)    MOVE VALUE INTO LOCAL STORAGE
         L     R7,20(R1)         GET POINTER TO STORED PROCEDURE NAME
         MVC   LOCSPNM(20),0(R7) MOVE VALUE INTO LOCAL STORAGE
         L     R7,24(R1)         GET POINTER TO DBINFO
         MVC   LOCDBINF(DBINFLN),0(R7)
*                                MOVE VALUE INTO LOCAL STORAGE
         LH    R7,LOCI1          GET INDICATOR VARIABLE FOR V1
         LTR   R7,R7             CHECK IF IT IS NEGATIVE
         BM    NULLIN            IF SO, V1 IS NULL
         ...
         L     R7,4(R1)          GET POINTER TO V2
         MVC   0(9,R7),LOCV2     MOVE A VALUE INTO OUTPUT VAR V2
         L     R7,12(R1)         GET POINTER TO INDICATOR VAR 2
         MVC   0(2,R7),=H'0'     MOVE ZERO TO V2'S INDICATOR VAR
         L     R7,16(R1)         GET POINTER TO SQLSTATE
         MVC   0(5,R7),=CL5'xxxxx'  MOVE 'xxxxx' TO SQLSTATE
         ...
         CEETERM RC=0
******************************************************************* * VARIABLE DECLARATIONS AND EQUATES * ******************************************************************* R1 EQU 1 REGISTER 1 R7 EQU 7 REGISTER 7 PPA CEEPPA , CONSTANTS DESCRIBING THE CODE BLOCK LTORG , PLACE LITERAL POOL HERE PROGAREA DSECT ORG *+CEEDSASZ LEAVE SPACE FOR DSA FIXED PART LOCV1 DS F LOCAL COPY OF PARAMETER V1 LOCV2 DS CL9 LOCAL COPY OF PARAMETER V2 LOCI1 DS H LOCAL COPY OF INDICATOR 1 LOCI2 DS H LOCAL COPY OF INDICATOR 2 LOCSQST DS CL5 LOCAL COPY OF SQLSTATE LOCSPNM DS H,CL27 LOCAL COPY OF STORED PROC NAME LOCSPSNM DS H,CL18 LOCAL COPY OF SPECIFIC NAME LOCDIAG DS H,CL70 LOCAL COPY OF DIAGNOSTIC DATA LOCDBINF DS 0H LOCAL COPY OF DBINFO DATA DBNAMELN DS H DATABASE NAME LENGTH DBNAME DS CL128 DATABASE NAME AUTHIDLN DS H APPL AUTH ID LENGTH AUTHID DS CL128 APPL AUTH ID ASC_SBCS DS F ASCII SBCS CCSID ASC_DBCS DS F ASCII DBCS CCSID ASC_MIXD DS F ASCII MIXED CCSID EBC_SBCS DS F EBCDIC SBCS CCSID EBC_DBCS DS F EBCDIC DBCS CCSID EBC_MIXD DS F EBCDIC MIXED CCSID UNI_SBCS DS F UNICODE SBCS CCSID UNI_DBCS DS F UNICODE DBCS CCSID UNI_MIXD DS F UNICODE MIXED CCSID ENCODE DS F PROCEDURE ENCODING SCHEME RESERV0 DS CL20 RESERVED TBQUALLN DS H TABLE QUALIFIER LENGTH TBQUAL DS CL128 TABLE QUALIFIER TBNAMELN DS H TABLE NAME LENGTH TBNAME DS CL128 TABLE NAME CLNAMELN DS H COLUMN NAME LENGTH COLNAME DS CL128 COLUMN NAME RELVER DS CL8 DBMS RELEASE AND VERSION RESERV1 DS CL2 RESERVED PLATFORM DS F DBMS OPERATING SYSTEM NUMTFCOL DS H NUMBER OF TABLE FUNCTION COLS USED RESERV2 DS CL26 RESERVED TFCOLNUM DS A POINTER TO TABLE FUNCTION COL LIST APPLID DS A POINTER TO APPLICATION ID RESERV3 DS CL20 RESERVED DBINFLN EQU *-LOCDBINF LENGTH OF DBINFO . . . PROGSIZE EQU *-PROGAREA CEEDSA , CEECAA , END B
Figure 218 shows how a stored procedure written as a main program in the C language receives these parameters.
#pragma runopts(plist(os))
#include <stdlib.h>
#include <stdio.h>
main(argc,argv)
  int argc;
  char *argv[];
{
  int parm1;
  short int ind1;
  char p_proc[28];
  char p_spec[19];
  /***************************************************/
  /* Assume that the SQL CALL statement included     */
  /* 3 input/output parameters in the parameter list.*/
  /* The argv vector will contain these entries:     */
  /*   argv[0]      1 contains load module           */
  /*   argv[1-3]    3 input/output parms             */
  /*   argv[4-6]    3 null indicators                */
  /*   argv[7]      1 SQLSTATE variable              */
  /*   argv[8]      1 qualified proc name            */
  /*   argv[9]      1 specific proc name             */
  /*   argv[10]     1 diagnostic string              */
  /*   argv[11]   + 1 dbinfo                         */
  /*              -----                              */
  /*               12 for the argc variable          */
  /***************************************************/
  if (argc != 12)
  {
    ...
    /* We end up here when invoked with wrong number of parms */
  }
Figure 218. An example of SQL linkage for a C stored procedure written as a main program (Part 1 of 2)
  /***************************************************/
  /* Assume the first parameter is an integer.       */
  /* The following code shows how to copy the integer*/
  /* parameter into the application storage.         */
  /***************************************************/
  parm1 = *(int *) argv[1];
  /***************************************************/
  /* We can access the null indicator for the first  */
  /* parameter on the SQL CALL as follows:           */
  /***************************************************/
  ind1 = *(short int *) argv[4];
  /***************************************************/
  /* We can use the following expression             */
  /* to assign xxxxx to the SQLSTATE returned to     */
  /* caller on the SQL CALL statement.               */
  /***************************************************/
  strcpy(argv[7],"xxxxx\0");
  /***************************************************/
  /* We obtain the value of the qualified procedure  */
  /* name with this expression.                      */
  /***************************************************/
  strcpy(p_proc,argv[8]);
  /***************************************************/
  /* We obtain the value of the specific procedure   */
  /* name with this expression.                      */
  /***************************************************/
  strcpy(p_spec,argv[9]);
  /***************************************************/
  /* We can use the following expression to assign   */
  /* yyyyyyyy to the diagnostic string returned      */
  /* in the SQLDA associated with the CALL statement.*/
  /***************************************************/
  strcpy(argv[10],"yyyyyyyy\0");
  ...
}
Figure 218. An example of SQL linkage for a C stored procedure written as a main program (Part 2 of 2)
Figure 219 shows how a stored procedure written as a subprogram in the C language receives these parameters.
#pragma linkage(myproc,fetchable)
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sqludf.h>
void myproc(int  *parm1,                  /* assume INT for PARM1       */
            char  parm2[11],              /* assume CHAR(10) parm2      */
            short int *p_ind1,            /* null indicator for parm1   */
            short int *p_ind2,            /* null indicator for parm2   */
            char  p_sqlstate[6],          /* SQLSTATE returned to DB2   */
            char  p_proc[28],             /* Qualified stored proc name */
            char  p_spec[19],             /* Specific stored proc name  */
            char  p_diag[71],             /* Diagnostic string          */
            struct sqludf_dbinfo *udf_dbinfo)  /* DBINFO                */
{
  int  l_p1;
  char l_p2[11];
  short int l_ind1;
  short int l_ind2;
  char l_sqlstate[6];
  char l_proc[28];
  char l_spec[19];
  char l_diag[71];
  struct sqludf_dbinfo ludf_dbinfo;
  ...
  /***************************************************/
  /* Copy each of the parameters in the parameter    */
  /* list into a local variable, just to demonstrate */
  /* how the parameters can be referenced.           */
  /***************************************************/
  l_p1 = *parm1;
  strcpy(l_p2,parm2);
  l_ind1 = *p_ind1;
  l_ind2 = *p_ind2;
  strcpy(l_sqlstate,p_sqlstate);
  strcpy(l_proc,p_proc);
  strcpy(l_spec,p_spec);
  strcpy(l_diag,p_diag);
  memcpy(&ludf_dbinfo,udf_dbinfo,sizeof(ludf_dbinfo));
  ...
}
Figure 219. An example of SQL linkage for a C stored procedure written as a subprogram
Figure 220 shows how a stored procedure in the COBOL language receives these parameters.
CBL RENT .IDENTIFICATION DIVISION. . . .DATA DIVISION. . . LINKAGE SECTION. * Declare each of the parameters 01 PARM1 ... .01 PARM2 ... . . * Declare a null indicator for each parameter 01 P-IND1 PIC S9(4) USAGE COMP. .01 P-IND2 PIC S9(4) USAGE COMP. . . * Declare the SQLSTATE that can be set by stored proc 01 P-SQLSTATE PIC X(5). * Declare the qualified procedure name 01 P-PROC. 49 P-PROC-LEN PIC 9(4) USAGE BINARY. 49 P-PROC-TEXT PIC X(27). * Declare the specific procedure name 01 P-SPEC. 49 P-SPEC-LEN PIC 9(4) USAGE BINARY. 49 P-SPEC-TEXT PIC X(18). * Declare SQL diagnostic message token 01 P-DIAG. 49 P-DIAG-LEN PIC 9(4) USAGE BINARY. 49 P-DIAG-TEXT PIC X(70). ********************************************************* * Structure used for DBINFO * ********************************************************* 01 SQLUDF-DBINFO. * Location name length 05 DBNAMELEN PIC 9(4) USAGE BINARY. * Location name 05 DBNAME PIC X(128). * authorization ID length 05 AUTHIDLEN PIC 9(4) USAGE BINARY. * authorization ID 05 AUTHID PIC X(128). * environment CCSID information 05 CODEPG PIC X(48). 05 CDPG-DB2 REDEFINES CODEPG. 10 DB2-CCSIDS OCCURS 3 TIMES. 15 DB2-SBCS PIC 9(9) USAGE BINARY. 15 DB2-DBCS PIC 9(9) USAGE BINARY. 15 DB2-MIXED PIC 9(9) USAGE BINARY. 10 ENCODING-SCHEME PIC 9(9) USAGE BINARY. 10 RESERVED PIC X(20).
* other platform-specific deprecated CCSID structures not included here * schema name length 05 TBSCHEMALEN PIC 9(4) USAGE BINARY. * schema name 05 TBSCHEMA PIC X(128). * table name length 05 TBNAMELEN PIC 9(4) USAGE BINARY. * table name 05 TBNAME PIC X(128). * column name length 05 COLNAMELEN PIC 9(4) USAGE BINARY. * column name 05 COLNAME PIC X(128). * product information 05 VER-REL PIC X(8). * reserved 05 RESD0 PIC X(2). * platform type 05 PLATFORM PIC 9(9) USAGE BINARY. * number of entries in the TF column list array (tfcolumn, below) 05 NUMTFCOL PIC 9(4) USAGE BINARY. * reserved 05 RESD1 PIC X(26). * tfcolumn will be allocated dynamically of it is defined * otherwise this will be a null pointer 05 TFCOLUMN USAGE IS POINTER. * application identifier 05 APPL-ID USAGE IS POINTER. * reserved 05 RESD2 PIC X(20). * . . . PROCEDURE DIVISION USING PARM1, PARM2, P-IND1, P-IND2, P-SQLSTATE, P-PROC, P-SPEC, P-DIAG, SQLUDF-DBINFO. . . .
Figure 221 shows how a stored procedure in the PL/I language receives these parameters.
*PROCESS SYSTEM(MVS);
MYMAIN: PROC(PARM1, PARM2, ..., P_IND1, P_IND2, ...,
             P_SQLSTATE, P_PROC, P_SPEC, P_DIAG, DBINFO)
             OPTIONS(MAIN NOEXECOPS REENTRANT);
  DCL PARM1 ...                /* first parameter              */
  DCL PARM2 ...                /* second parameter             */
      .
      .
      .
  DCL P_IND1 BIN FIXED(15);    /* indicator for 1st parm       */
  DCL P_IND2 BIN FIXED(15);    /* indicator for 2nd parm       */
      .
      .
      .
  DCL P_SQLSTATE CHAR(5);      /* SQLSTATE to return to DB2    */
  DCL 01 P_PROC CHAR(27)       /* Qualified procedure name     */
         VARYING;
  DCL 01 P_SPEC CHAR(18)       /* Specific stored proc         */
         VARYING;
  DCL 01 P_DIAG CHAR(70)       /* Diagnostic string            */
         VARYING;
  DCL DBINFO PTR;
DCL 01 SP_DBINFO BASED(DBINFO), /* Dbinfo */ 03 UDF_DBINFO_LLEN BIN FIXED(15), /* location length */ 03 UDF_DBINFO_LOC CHAR(128), /* location name */ 03 UDF_DBINFO_ALEN BIN FIXED(15), /* auth ID length */ 03 UDF_DBINFO_AUTH CHAR(128), /* authorization ID */ 03 UDF_DBINFO_CCSID, /* CCSIDs for DB2 UDB for z/OS */ 05 R1 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_ASBCS BIN FIXED(15), /* ASCII SBCS CCSID */ 05 R2 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_ADBCS BIN FIXED(15), /* ASCII DBCS CCSID */ 05 R3 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_AMIXED BIN FIXED(15), /* ASCII MIXED CCSID */ 05 R4 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_ESBCS BIN FIXED(15), /* EBCDIC SBCS CCSID */ 05 R5 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_EDBCS BIN FIXED(15), /* EBCDIC DBCS CCSID */ 05 R6 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_EMIXED BIN FIXED(15), /* EBCDIC MIXED CCSID*/ 05 R7 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_USBCS BIN FIXED(15), /* Unicode SBCS CCSID */ 05 R8 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_UDBCS BIN FIXED(15), /* Unicode DBCS CCSID */ 05 R9 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_UMIXED BIN FIXED(15), /* Unicode MIXED CCSID*/ 05 UDF_DBINFO_ENCODE BIN FIXED(31), /* SP encode scheme */ 05 UDF_DBINFO_RESERV0 CHAR(20), /* reserved */ 03 UDF_DBINFO_SLEN BIN FIXED(15), /* schema length */ 03 UDF_DBINFO_SCHEMA CHAR(128), /* schema name */ 03 UDF_DBINFO_TLEN BIN FIXED(15), /* table length */ 03 UDF_DBINFO_TABLE CHAR(128), /* table name */ 03 UDF_DBINFO_CLEN BIN FIXED(15), /* column length */ 03 UDF_DBINFO_COLUMN CHAR(128), /* column name */ 03 UDF_DBINFO_RELVER CHAR(8), /* DB2 release level */ 03 UDF_DBINFO_RESERV0 CHAR(2), /* reserved */ 03 UDF_DBINFO_PLATFORM BIN FIXED(31), /* database platform*/ 03 UDF_DBINFO_NUMTFCOL BIN FIXED(15), /* # of TF cols used*/ 03 UDF_DBINFO_RESERV1 CHAR(26), /* reserved */ 03 UDF_DBINFO_TFCOLUMN PTR, /* -> table fun col list */ 03 UDF_DBINFO_APPLID PTR, /* -> application id */ 03 UDF_DBINFO_RESERV2 CHAR(20); /* reserved */ . . .
This option is not applicable to other platforms, however. If you plan to use a C stored procedure on other platforms besides z/OS, use conditional compilation, as shown in Figure 222, to include this option only when you compile on z/OS.
#ifdef MVS
#pragma runopts(PLIST(OS))
#endif

    -- or --

#ifndef WKSTN
#pragma runopts(PLIST(OS))
#endif
/* Setting I1 to -1 causes only */ /* a two byte area representing */ /* I1 to be passed to the */ /* stored procedure, instead of */ /* the 6000 byte area for BIGVAR*/ EXEC SQL CALL PROCX(:INTVAR, :BIGVAR INDICATOR :I1);
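A sketch of the declarations that the preceding fragment assumes; the names INTVAR, BIGVAR, and I1 come from the fragment, but the data types and the 6000-byte size are assumptions for illustration only:

EXEC SQL BEGIN DECLARE SECTION;
  long INTVAR;                     /* input parameter                       */
  struct { short len;              /* VARCHAR output parameter; the 6000-   */
           char data[6000];        /* byte size is an assumption            */
         } BIGVAR;
  short I1;                        /* indicator variable for BIGVAR         */
EXEC SQL END DECLARE SECTION;
  ...
  I1 = -1;                         /* pass only the indicator on the CALL   */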
Table 85. Listing of tables of compatible data types

Language     Compatible data types table
Assembler    Table 12 on page 153
C            Table 14 on page 177
COBOL        Table 17 on page 211
PL/I         Table 21 on page 243
For LOBs, ROWIDs, VARCHARs, and locators, Table 86 shows compatible declarations for the assembler language.
Table 86. Compatible assembler language declarations for LOBs, ROWIDs, and locators

SQL data type in definition    Assembler declaration
TABLE LOCATOR                  DS FL4
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n)                        If n <= 65535:
                                 var        DS 0FL4
                                 var_length DS FL4
                                 var_data   DS CLn
                               If n > 65535:
                                 var        DS 0FL4
                                 var_length DS FL4
                                 var_data   DS CL65535
                                            ORG var_data+(n-65535)
CLOB(n)                        If n <= 65535:
                                 var        DS 0FL4
                                 var_length DS FL4
                                 var_data   DS CLn
                               If n > 65535:
                                 var        DS 0FL4
                                 var_length DS FL4
                                 var_data   DS CL65535
                                            ORG var_data+(n-65535)
DBCLOB(n)                      If m (=2*n) <= 65534:
                                 var        DS 0FL4
                                 var_length DS FL4
                                 var_data   DS CLm
                               If m > 65534:
                                 var        DS 0FL4
                                 var_length DS FL4
                                 var_data   DS CL65534
                                            ORG var_data+(m-65534)
ROWID                          DS HL2,CL40
VARCHAR(n) (see note 1)        If PARAMETER VARCHAR NULTERM is specified or implied:
                                 char data[n+1];
                               If PARAMETER VARCHAR STRUCTURE is specified:
                                 struct {short len; char data[n]; } var;
Notes:
1. This row does not apply to VARCHAR(n) FOR BIT DATA. BIT DATA is always passed in a structured representation.
For LOBs, ROWIDs, and locators, Table 87 shows compatible declarations for the C language.
Table 87. Compatible C language declarations for LOBs, ROWIDs, and locators

SQL data type in definition    C declaration
TABLE LOCATOR                  unsigned long
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n)                        struct {unsigned long length;
                                       char data[n];
                                      } var;
CLOB(n)                        struct {unsigned long length;
                                       char var_data[n];
                                      } var;
DBCLOB(n)                      struct {unsigned long length;
                                       sqldbchar data[n];
                                      } var;
ROWID                          struct {short int length;
                                       char data[40];
                                      } var;
For LOBs, ROWIDs, and locators, Table 88 shows compatible declarations for COBOL.
Table 88. Compatible COBOL declarations for LOBs, ROWIDs, and locators

SQL data type in definition    COBOL declaration
TABLE LOCATOR                  01 var PIC S9(9) USAGE IS BINARY.
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n)                        If n <= 32767:
                                 01 var.
                                    49 var-LENGTH PIC 9(9) USAGE COMP.
                                    49 var-DATA PIC X(n).
                               If n > 32767:
                                 01 var.
                                    02 var-LENGTH PIC S9(9) USAGE COMP.
                                    02 var-DATA.
                                       49 FILLER PIC X(32767).
                                       49 FILLER PIC X(32767).
                                          .
                                          .
                                          .
                                       49 FILLER PIC X(mod(n,32767)).
CLOB(n)                        If n <= 32767:
                                 01 var.
                                    49 var-LENGTH PIC 9(9) USAGE COMP.
                                    49 var-DATA PIC X(n).
                               If n > 32767:
                                 01 var.
                                    02 var-LENGTH PIC S9(9) USAGE COMP.
                                    02 var-DATA.
                                       49 FILLER PIC X(32767).
                                       49 FILLER PIC X(32767).
                                          .
                                          .
                                          .
                                       49 FILLER PIC X(mod(n,32767)).
DBCLOB(n)                      If n <= 32767:
                                 01 var.
                                    49 var-LENGTH PIC 9(9) USAGE COMP.
                                    49 var-DATA PIC G(n) USAGE DISPLAY-1.
                               If n > 32767:
                                 01 var.
                                    02 var-LENGTH PIC S9(9) USAGE COMP.
                                    02 var-DATA.
                                       49 FILLER PIC G(32767) USAGE DISPLAY-1.
                                       49 FILLER PIC G(32767) USAGE DISPLAY-1.
                                          .
                                          .
                                          .
                                       49 FILLER PIC G(mod(n,32767)) USAGE DISPLAY-1.
ROWID                          01 var.
                                  49 var-LEN PIC 9(4) USAGE COMP.
                                  49 var-DATA PIC X(40).
For LOBs, ROWIDs, and locators, Table 89 shows compatible declarations for PL/I.
Table 89. Compatible PL/I declarations for LOBs, ROWIDs, and locators

SQL data type in definition    PL/I
TABLE LOCATOR                  BIN FIXED(31)
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n)                        If n <= 32767:
                                 01 var,
                                    03 var_LENGTH BIN FIXED(31),
                                    03 var_DATA CHAR(n);
                               If n > 32767:
                                 01 var,
                                    02 var_LENGTH BIN FIXED(31),
                                    02 var_DATA,
                                       03 var_DATA1(n) CHAR(32767),
                                       03 var_DATA2 CHAR(mod(n,32767));
CLOB(n)                        If n <= 32767:
                                 01 var,
                                    03 var_LENGTH BIN FIXED(31),
                                    03 var_DATA CHAR(n);
                               If n > 32767:
                                 01 var,
                                    02 var_LENGTH BIN FIXED(31),
                                    02 var_DATA,
                                       03 var_DATA1(n) CHAR(32767),
                                       03 var_DATA2 CHAR(mod(n,32767));
DBCLOB(n)                      If n <= 16383:
                                 01 var,
                                    03 var_LENGTH BIN FIXED(31),
                                    03 var_DATA GRAPHIC(n);
                               If n > 16383:
                                 01 var,
                                    02 var_LENGTH BIN FIXED(31),
                                    02 var_DATA,
                                       03 var_DATA1(n) GRAPHIC(16383),
                                       03 var_DATA2 GRAPHIC(mod(n,16383));
ROWID                          CHAR(40) VAR
Tables of results: Each high-level language definition for stored procedure parameters supports only a single instance (a scalar value) of the parameter. There is no support for structure, array, or vector parameters. Because of this, the SQL statement CALL limits the ability of an application to return some kinds of tables. For example, an application might need to return a table that represents multiple occurrences of one or more of the parameters passed to the stored procedure. Because the SQL statement CALL cannot return more than one set of parameters, use one of the following techniques to return such a table:
v Put the data that the application returns in a DB2 table. The calling program can receive the data in one of these ways:
  – The calling program can fetch the rows from the table directly. Specify FOR FETCH ONLY or FOR READ ONLY on the SELECT statement that retrieves data from the table. A block fetch can retrieve the required data efficiently.
  – The stored procedure can return the contents of the table as a result set. See Writing a stored procedure to return result sets to a DRDA client on page 650 and Writing a DB2 UDB for z/OS client program or SQL procedure to receive result sets for more information.
v Convert tabular data to string format and return it as a character string parameter to the calling program. The calling program and the stored procedure can establish a convention for interpreting the content of the character string. For example, the SQL statement CALL can pass a 1920-byte character string parameter to a stored procedure, allowing the stored procedure to return a 24x80 screen image to the calling program.
Writing a DB2 UDB for z/OS client program or SQL procedure to receive result sets
You can write a program to receive result sets for either of the following alternatives:
v For a fixed number of result sets, for which you know the contents. This is the only alternative in which you can write an SQL procedure to return result sets.
v For a variable number of result sets, for which you do not know the contents.
The first alternative is simpler to write, but if you use the second alternative, you do not need to make major modifications to your client program if the stored procedure changes. The basic steps for receiving result sets are as follows: 1. Declare a locator variable for each result set that will be returned. If you do not know how many result sets will be returned, declare enough result set locators for the maximum number of result sets that might be returned. 2. Call the stored procedure and check the SQL return code. If the SQLCODE from the CALL statement is +466, the stored procedure has returned result sets. 3. Determine how many result sets the stored procedure is returning. If you already know how many result sets the stored procedure returns, you can skip this step. Use the SQL statement DESCRIBE PROCEDURE to determine the number of result sets. DESCRIBE PROCEDURE places information about the result sets in an SQLDA. Make this SQLDA large enough to hold the maximum number of result sets that the stored procedure might return. When the DESCRIBE PROCEDURE statement completes, the fields in the SQLDA contain the following values: v SQLD contains the number of result sets returned by the stored procedure. v Each SQLVAR entry gives information about a result set. In an SQLVAR entry: The SQLNAME field contains the name of the SQL cursor used by the stored procedure to return the result set. The SQLIND field contains the value -1. This indicates that no estimate of the number of rows in the result set is available. The SQLDATA field contains the value of the result set locator, which is the address of the result set. 4. Link result set locators to result sets. You can use the SQL statement ASSOCIATE LOCATORS to link result set locators to result sets. The ASSOCIATE LOCATORS statement assigns values to the result set locator variables. If you specify more locators than the number of result sets returned, DB2 ignores the extra locators. To use the ASSOCIATE LOCATORS statement, you must embed it in an application or SQL procedure. If you executed the DESCRIBE PROCEDURE statement previously, the result set locator values are in the SQLDATA fields of the SQLDA. You can copy the values from the SQLDATA fields to the result set locators manually, or you can execute the ASSOCIATE LOCATORS statement to do it for you. The stored procedure name that you specify in an ASSOCIATE LOCATORS or DESCRIBE PROCEDURE statement must match the stored procedure name in the CALL statement that returns the result sets. That is: v If the stored procedure name in ASSOCIATE LOCATORS or DESCRIBE PROCEDURE is unqualified, the stored procedure name in the CALL statement must be unqualified. v If the stored procedure name in ASSOCIATE LOCATORS or DESCRIBE PROCEDURE is qualified with a schema name, the stored procedure name in the CALL statement must be qualified with a schema name.
v If the stored procedure name in ASSOCIATE LOCATORS or DESCRIBE PROCEDURE is qualified with a location name and a schema name, the stored procedure name in the CALL statement must be qualified with a location name and a schema name. 5. Allocate cursors for fetching rows from the result sets. Use the SQL statement ALLOCATE CURSOR to link each result set with a cursor. Execute one ALLOCATE CURSOR statement for each result set. The cursor names can be different from the cursor names in the stored procedure. To use the ALLOCATE CURSOR statement, you must embed it in an application or SQL procedure. 6. Determine the contents of the result sets. If you already know the format of the result set, you can skip this step. Use the SQL statement DESCRIBE CURSOR to determine the format of a result set and put this information in an SQLDA. For each result set, you need an SQLDA big enough to hold descriptions of all columns in the result set. You can use DESCRIBE CURSOR only for cursors for which you executed ALLOCATE CURSOR previously. After you execute DESCRIBE CURSOR, if the cursor for the result set is declared WITH HOLD, the high-order bit of the eighth byte of field SQLDAID in the SQLDA is set to 1. 7. Fetch rows from the result sets into host variables by using the cursors that you allocated with the ALLOCATE CURSOR statements. If you executed the DESCRIBE CURSOR statement, perform these steps before you fetch the rows: a. Allocate storage for host variables and indicator variables. Use the contents of the SQLDA from the DESCRIBE CURSOR statement to determine how much storage you need for each host variable. b. Put the address of the storage for each host variable in the appropriate SQLDATA field of the SQLDA. c. Put the address of the storage for each indicator variable in the appropriate SQLIND field of the SQLDA. Fetching rows from a result set is the same as fetching rows from a table. You do not need to connect to the remote location when you execute these statements: v DESCRIBE PROCEDURE v ASSOCIATE LOCATORS v ALLOCATE CURSOR v DESCRIBE CURSOR v FETCH v CLOSE For the syntax of result set locators in each host language, see Chapter 9, Embedding SQL statements in host languages, on page 143. For the syntax of result set locators in SQL procedures, see Chapter 6 of DB2 SQL Reference. For the syntax of the ASSOCIATE LOCATORS, DESCRIBE PROCEDURE, ALLOCATE CURSOR, and DESCRIBE CURSOR statements, see Chapter 5 of DB2 SQL Reference. Figure 223 on page 711 and Figure 224 on page 712 show C language code that accomplishes each of these steps. Coding for other languages is similar. For a more
complete example of a C language program that receives result sets, see Examples of using stored procedures on page 1075. Figure 223 demonstrates how you receive result sets when you know how many result sets are returned and what is in each result set.
/*************************************************************/
/* Declare result set locators. For this example,            */
/* assume you know that two result sets will be returned.    */
/* Also, assume that you know the format of each result set. */
/*************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
static volatile SQL TYPE IS RESULT_SET_LOCATOR *loc1, *loc2;
EXEC SQL END DECLARE SECTION;
 ...
/*************************************************************/
/* Call stored procedure P1.                                 */
/* Check for SQLCODE +466, which indicates that result sets  */
/* were returned.                                            */
/*************************************************************/
EXEC SQL CALL P1(:parm1, :parm2, ...);
if(SQLCODE==+466)
{
  /*************************************************************/
  /* Establish a link between each result set and its          */
  /* locator using the ASSOCIATE LOCATORS.                     */
  /*************************************************************/
  EXEC SQL ASSOCIATE LOCATORS (:loc1, :loc2) WITH PROCEDURE P1;
   ...
  /*************************************************************/
  /* Associate a cursor with each result set.                  */
  /*************************************************************/
  EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1;
  EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :loc2;
  /*************************************************************/
  /* Fetch the result set rows into host variables.            */
  /*************************************************************/
  while(SQLCODE==0)
  {
    EXEC SQL FETCH C1 INTO :order_no, :cust_no;
     ...
  }
  while(SQLCODE==0)
  {
    EXEC SQL FETCH C2 INTO :order_no, :item_no, :quantity;
     ...
  }
}
Figure 223. Receiving known result sets
Figure 224 on page 712 demonstrates how you receive result sets when you do not know how many result sets are returned or what is in each result set.
/*************************************************************/
/* Declare result set locators.  For this example,           */
/* assume that no more than three result sets will be        */
/* returned, so declare three locators.  Also, assume        */
/* that you do not know the format of the result sets.       */
/*************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
static volatile SQL TYPE IS RESULT_SET_LOCATOR *loc1, *loc2, *loc3;
EXEC SQL END DECLARE SECTION;
  .  .  .

Figure 224. Receiving unknown result sets (Part 1 of 3)
/*************************************************************/
/* Call stored procedure P2.                                  */
/* Check for SQLCODE +466, which indicates that result sets  */
/* were returned.                                             */
/*************************************************************/
EXEC SQL CALL P2(:parm1, :parm2, ...);
if(SQLCODE==+466)
{
  /*************************************************************/
  /* Determine how many result sets P2 returned, using the     */
  /* statement DESCRIBE PROCEDURE. :proc_da is an SQLDA        */
  /* with enough storage to accommodate up to three SQLVAR     */
  /* entries.                                                  */
  /*************************************************************/
  EXEC SQL DESCRIBE PROCEDURE P2 INTO :proc_da;
  .  .  .
  /*************************************************************/
  /* Now that you know how many result sets were returned,     */
  /* establish a link between each result set and its          */
  /* locator using the ASSOCIATE LOCATORS. For this example,   */
  /* we assume that three result sets are returned.            */
  /*************************************************************/
  EXEC SQL ASSOCIATE LOCATORS (:loc1, :loc2, :loc3) WITH PROCEDURE P2;
  .  .  .
  /*************************************************************/
  /* Associate a cursor with each result set.                  */
  /*************************************************************/
  EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1;
  EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :loc2;
  EXEC SQL ALLOCATE C3 CURSOR FOR RESULT SET :loc3;

Figure 224. Receiving unknown result sets (Part 2 of 3)
  /*************************************************************/
  /* Use the statement DESCRIBE CURSOR to determine the        */
  /* format of each result set.                                */
  /*************************************************************/
  EXEC SQL DESCRIBE CURSOR C1 INTO :res_da1;
  EXEC SQL DESCRIBE CURSOR C2 INTO :res_da2;
  EXEC SQL DESCRIBE CURSOR C3 INTO :res_da3;
  .  .  .
  /*************************************************************/
  /* Assign values to the SQLDATA and SQLIND fields of the     */
  /* SQLDAs that you used in the DESCRIBE CURSOR statements.   */
  /* These values are the addresses of the host variables and  */
  /* indicator variables into which DB2 will put result set    */
  /* rows.                                                     */
  /*************************************************************/
  .  .  .
  /*************************************************************/
  /* Fetch the result set rows into the storage areas          */
  /* that the SQLDAs point to.                                 */
  /*************************************************************/
  while(SQLCODE==0)
  {
    EXEC SQL FETCH C1 USING DESCRIPTOR :res_da1;
    .  .  .
  }
  while(SQLCODE==0)
  {
    EXEC SQL FETCH C2 USING DESCRIPTOR :res_da2;
    .  .  .
  }
  while(SQLCODE==0)
  {
    EXEC SQL FETCH C3 USING DESCRIPTOR :res_da3;
    .  .  .
  }
}

Figure 224. Receiving unknown result sets (Part 3 of 3)
Figure 225 on page 714 demonstrates how you can use an SQL procedure to receive result sets.
DECLARE RESULT1 RESULT_SET_LOCATOR VARYING;
DECLARE RESULT2 RESULT_SET_LOCATOR VARYING;
  .  .  .
CALL TARGETPROCEDURE();
ASSOCIATE RESULT SET LOCATORS(RESULT1,RESULT2)
  WITH PROCEDURE TARGETPROCEDURE;
ALLOCATE RSCUR1 CURSOR FOR RESULT1;
ALLOCATE RSCUR2 CURSOR FOR RESULT2;
WHILE AT_END = 0 DO
  FETCH RSCUR1 INTO VAR1;
  SET TOTAL1 = TOTAL1 + VAR1;
END WHILE;
WHILE AT_END = 0 DO
  FETCH RSCUR2 INTO VAR2;
  SET TOTAL2 = TOTAL2 + VAR2;
END WHILE;
  .  .  .

Figure 225. Receiving result sets in an SQL procedure
Table 90. Parameter formats for a CALL statement in a REXX procedure (continued)
CHARACTER(n), VARCHAR(n), VARCHAR(n) FOR BIT DATA
   A string of length n, enclosed in single quotation marks.
GRAPHIC(n), VARGRAPHIC(n)
   The character G followed by a string enclosed in single quotation marks. The string within the quotation marks begins with a shift-out character (X'0E') and ends with a shift-in character (X'0F'). Between the shift-out character and shift-in character are n double-byte characters.
DATE
   A string of length 10, enclosed in single quotation marks. The format of the string depends on the value of field DATE FORMAT that you specify when you install DB2. See Chapter 2 of DB2 SQL Reference for valid date string formats.
Table 90. Parameter formats for a CALL statement in a REXX procedure (continued)
TIME
   A string of length 8, enclosed in single quotation marks. The format of the string depends on the value of field TIME FORMAT that you specify when you install DB2. See Chapter 2 of DB2 SQL Reference for valid time string formats.
TIMESTAMP
   A string of length 26, enclosed in single quotation marks. The string has the format yyyy-mm-dd-hh.mm.ss.nnnnnn.
Figure 226 on page 716 demonstrates how a REXX procedure calls the stored procedure in Figure 200 on page 655. The REXX procedure performs the following actions:
v Connects to the DB2 subsystem that was specified by the REXX procedure invoker.
v Calls the stored procedure to execute a DB2 command that was specified by the REXX procedure invoker.
v Retrieves rows from a result set that contains the command output messages.
                                       /* Get the SSID to connect to */
                                       /* and the DB2 command to be  */
                                       /* executed                   */
/****************************************************************/
/* Set up the host command environment for SQL calls.           */
/****************************************************************/
"SUBCOM DSNREXX"                       /* Host cmd env available?    */
IF RC THEN                             /* No--make one               */
  S_RC = RXSUBCOM('ADD','DSNREXX','DSNREXX')
/****************************************************************/
/* Connect to the DB2 subsystem.                                 */
/****************************************************************/
ADDRESS DSNREXX "CONNECT" SSID
IF SQLCODE ¬= 0 THEN CALL SQLCA
PROC = COMMAND
RESULTSIZE = 32703
RESULT = LEFT(' ',RESULTSIZE,' ')
/****************************************************************/
/* Call the stored procedure that executes the DB2 command.     */
/* The input variable (COMMAND) contains the DB2 command.       */
/* The output variable (RESULT) will contain the return area    */
/* from the IFI COMMAND call after the stored procedure         */
/* executes.                                                    */
/****************************************************************/
ADDRESS DSNREXX "EXECSQL" ,
  "CALL" PROC "(:COMMAND, :RESULT)"
IF SQLCODE < 0 THEN CALL SQLCA
SAY 'RETCODE ='RETCODE
SAY 'SQLCODE ='SQLCODE
SAY 'SQLERRMC ='SQLERRMC
SAY 'SQLERRP ='SQLERRP
SAY 'SQLERRD ='SQLERRD.1',',
          || SQLERRD.2',',
          || SQLERRD.3',',
          || SQLERRD.4',',
          || SQLERRD.5',',
          || SQLERRD.6
SAY 'SQLWARN ='SQLWARN.0',',
          || SQLWARN.1',',
          || SQLWARN.2',',
          || SQLWARN.3',',
          || SQLWARN.4',',
          || SQLWARN.5',',
          || SQLWARN.6',',
          || SQLWARN.7',',
          || SQLWARN.8',',
          || SQLWARN.9',',
          || SQLWARN.10
SAY 'SQLSTATE='SQLSTATE
SAY C2X(RESULT) ""||RESULT||""
Figure 226. Example of a REXX procedure that calls a stored procedure (Part 1 of 3)
/****************************************************************/
/* Display the IFI return area in hexadecimal.                   */
/****************************************************************/
OFFSET = 4+1
TOTLEN = LENGTH(RESULT)
DO WHILE ( OFFSET < TOTLEN )
  LEN = C2D(SUBSTR(RESULT,OFFSET,2))
  SAY SUBSTR(RESULT,OFFSET+4,LEN-4-1)
  OFFSET = OFFSET + LEN
END
/****************************************************************/
/* Get information about result sets returned by the            */
/* stored procedure.                                             */
/****************************************************************/
ADDRESS DSNREXX "EXECSQL DESCRIBE PROCEDURE :PROC INTO :SQLDA"
IF SQLCODE ¬= 0 THEN CALL SQLCA
DO I = 1 TO SQLDA.SQLD
  SAY "SQLDA."I".SQLNAME ="SQLDA.I.SQLNAME";"
  SAY "SQLDA."I".SQLTYPE ="SQLDA.I.SQLTYPE";"
  SAY "SQLDA."I".SQLLOCATOR ="SQLDA.I.SQLLOCATOR";"
  SAY "SQLDA."I".SQLESTIMATE="SQLDA.I.SQLESTIMATE";"
END I
/****************************************************************/
/* Set up a cursor to retrieve the rows from the result          */
/* set.                                                           */
/****************************************************************/
ADDRESS DSNREXX "EXECSQL ASSOCIATE LOCATOR (:RESULT) WITH PROCEDURE :PROC"
IF SQLCODE ¬= 0 THEN CALL SQLCA
SAY RESULT
ADDRESS DSNREXX "EXECSQL ALLOCATE C101 CURSOR FOR RESULT SET :RESULT"
IF SQLCODE ¬= 0 THEN CALL SQLCA
CURSOR = 'C101'
ADDRESS DSNREXX "EXECSQL DESCRIBE CURSOR :CURSOR INTO :SQLDA"
IF SQLCODE ¬= 0 THEN CALL SQLCA
/****************************************************************/
/* Retrieve and display the rows from the result set, which      */
/* contain the command output message text.                      */
/****************************************************************/
DO UNTIL(SQLCODE ¬= 0)
  ADDRESS DSNREXX "EXECSQL FETCH C101 INTO :SEQNO, :TEXT"
  IF SQLCODE = 0 THEN
    DO
      SAY TEXT
    END
END
IF SQLCODE ¬= 0 THEN CALL SQLCA
ADDRESS DSNREXX "EXECSQL CLOSE C101"
IF SQLCODE ¬= 0 THEN CALL SQLCA
ADDRESS DSNREXX "EXECSQL COMMIT"
IF SQLCODE ¬= 0 THEN CALL SQLCA

Figure 226. Example of a REXX procedure that calls a stored procedure (Part 2 of 3)
/****************************************************************/
/* Disconnect from the DB2 subsystem.                            */
/****************************************************************/
ADDRESS DSNREXX "DISCONNECT"
IF SQLCODE ¬= 0 THEN CALL SQLCA
/****************************************************************/
/* Delete the host command environment for SQL.                  */
/****************************************************************/
S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')  /* REMOVE CMD ENV */
RETURN
/****************************************************************/
/* Routine to display the SQLCA                                   */
/****************************************************************/
SQLCA:
TRACE O
SAY 'SQLCODE ='SQLCODE
SAY 'SQLERRMC ='SQLERRMC
SAY 'SQLERRP ='SQLERRP
SAY 'SQLERRD ='SQLERRD.1',',
          || SQLERRD.2',',
          || SQLERRD.3',',
          || SQLERRD.4',',
          || SQLERRD.5',',
          || SQLERRD.6
SAY 'SQLWARN ='SQLWARN.0',',
          || SQLWARN.1',',
          || SQLWARN.2',',
          || SQLWARN.3',',
          || SQLWARN.4',',
          || SQLWARN.5',',
          || SQLWARN.6',',
          || SQLWARN.7',',
          || SQLWARN.8',',
          || SQLWARN.9',',
          || SQLWARN.10
SAY 'SQLSTATE='SQLSTATE
EXIT

Figure 226. Example of a REXX procedure that calls a stored procedure (Part 3 of 3)
v DB2 Universal Database Application Development Guide: Building and Running Applications
v DB2 Universal Database for iSeries SQL Programming with Host Languages
A z/OS client can bind the DBRM to a remote server by specifying a location name on the command BIND PACKAGE. For example, suppose you want a client program to call a stored procedure at location LOCA. You precompile the program to produce DBRM A. Then you can use the following command to bind DBRM A into package collection COLLA at location LOCA:
BIND PACKAGE (LOCA.COLLA) MEMBER(A)
The plan for the package resides only at the client system.
v If the stored procedure does not handle the abend condition, DB2 refreshes the Language Environment environment to recover the storage that the application uses. In most cases, the Language Environment environment does not need to restart.
v If a data set is allocated to the DD name CEEDUMP in the JCL procedure that starts the stored procedures address space, Language Environment writes a small diagnostic dump to this data set. See your system administrator to obtain the dump information. See Testing a stored procedure on page 725 for techniques that you can use to diagnose the problem.
v In a data sharing environment, the stored procedure is placed in STOPABN status only on the member where the abends occurred. A calling program can invoke the stored procedure from other members of the data sharing group. The status on all other members is STARTED.
DB2 uses schema names from the CURRENT PATH special register for CALL statements of the following form:
CALL host-variable
2. When DB2 finds a stored procedure definition, DB2 executes that stored procedure if the following conditions are true:
   v The caller is authorized to execute the stored procedure.
   v The stored procedure has the same number of parameters as in the CALL statement.
   If either condition is not true, DB2 continues to go through the list of schemas until it finds a stored procedure that meets both conditions or reaches the end of the list.
3. If DB2 cannot find a suitable stored procedure, it returns an SQL error code for the CALL statement.
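For illustration only (PROCX is a hypothetical procedure name), you can see which schemas contain a candidate stored procedure, and how many parameters each definition declares, by querying the DB2 catalog:

SELECT SCHEMA, NAME, PARM_COUNT
  FROM SYSIBM.SYSROUTINES
  WHERE NAME = 'PROCX'
    AND ROUTINETYPE = 'P';

Comparing PARM_COUNT with the number of arguments in your CALL statement shows which definitions in the SQL path could satisfy the call.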
2. In the program that invokes the stored procedure, specify the unqualified stored procedure name in the CALL statement.
3. Use the SQL path to indicate which version of the stored procedure that the client program should call. You can choose the SQL path in several ways:
   v If the client program is not an ODBC or JDBC application, use one of the following methods:
     - Use the CALL procedure-name form of the CALL statement. When you bind plans or packages for the program that calls the stored procedure, bind one plan or package for each version of the stored procedure that you want to call. In the PATH bind option for each plan or package, specify the schema name of the stored procedure that you want to call.
     - Use the CALL host-variable form of the CALL statement. In the client program, use the SET PATH statement to specify the schema name of the stored procedure that you want to call.
   v If the client program is an ODBC or JDBC application, choose one of the following methods:
     - Use the SET PATH statement to specify the schema name of the stored procedure that you want to call.
     - When you bind the stored procedure packages, specify a different collection for each stored procedure package. Use the COLLID value that you specified when defining the stored procedure to DB2.
4. When you run the client program, specify the plan or package with the PATH value that matches the schema name of the stored procedure that you want to call.
For example, suppose that you want to write one program, PROGY, that calls one of two versions of a stored procedure named PROCX. The load module for both stored procedures is named SUMMOD. Each version of SUMMOD is in a different load library. The stored procedures run in different WLM environments, and the startup JCL for each WLM environment includes a STEPLIB concatenation that specifies the correct load library for the stored procedure module.
First, define the two stored procedures in different schemas and different WLM environments:
CREATE PROCEDURE TEST.PROCX(IN V1 INTEGER, OUT V2 CHAR(9))
  LANGUAGE C
  EXTERNAL NAME SUMMOD
  WLM ENVIRONMENT TESTENV;

CREATE PROCEDURE PROD.PROCX(IN V1 INTEGER, OUT V2 CHAR(9))
  LANGUAGE C
  EXTERNAL NAME SUMMOD
  WLM ENVIRONMENT PRODENV;
When you write CALL statements for PROCX in program PROGY, use the unqualified form of the stored procedure name:
CALL PROCX(V1,V2);
Bind two plans for PROGY. In one BIND statement, specify PATH(TEST). In the other BIND statement, specify PATH(PROD). To call TEST.PROCX, execute PROGY with the plan that you bound with PATH(TEST). To call PROD.PROCX, execute PROGY with the plan that you bound with PATH(PROD).
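If the client program uses the CALL host-variable form instead, a minimal sketch of the SET PATH approach might look like the following. The host variable name procnm is illustrative; it contains the unqualified name PROCX:

SET PATH = 'TEST';
CALL :procnm;

Changing the SET PATH value to 'PROD' directs the same CALL statement to PROD.PROCX without rebinding the plan.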
For REXX stored procedures, you must set NUMTCB to 1.
To maximize the number of stored procedures that can run concurrently, use the following guidelines:
v Set REGION size to 0 in startup procedures for the stored procedures address spaces to obtain the largest possible amount of storage below the 16MB line.
v Limit storage required by application programs below the 16MB line by:
  - Link editing programs above the line with AMODE(31) and RMODE(ANY) attributes
  - Using the RENT and DATA(31) compiler options for COBOL programs.
v Limit storage required by IBM Language Environment by using these run-time options:
  - HEAP(,,ANY) to allocate program heap storage above the 16MB line
  - STACK(,,ANY,) to allocate program stack storage above the 16MB line
  - STORAGE(,,,4K) to reduce reserve storage area below the line to 4KB
  - BELOWHEAP(4K,,) to reduce the heap storage below the line to 4KB
  - LIBSTACK(4K,,) to reduce the library stack below the line to 4KB
  - ALL31(ON) to indicate all programs contained in the stored procedure run with AMODE(31) and RMODE(ANY).
You can list these options in the RUN OPTIONS parameter of the CREATE PROCEDURE or ALTER PROCEDURE statement, if they are not Language Environment installation defaults. For example, the RUN OPTIONS parameter could specify:
H(,,ANY),STAC(,,ANY,),STO(,,,4K),BE(4K,,),LIBS(4K,,),ALL31(ON)
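As a sketch only (the procedure name is hypothetical), an ALTER PROCEDURE statement that assigns this set of run-time options might look like this:

ALTER PROCEDURE MYSCHEMA.BIGPROC
  RUN OPTIONS 'H(,,ANY),STAC(,,ANY,),STO(,,,4K),BE(4K,,),LIBS(4K,,),ALL31(ON)';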
For more information about creating a stored procedure definition, see Defining your stored procedure to DB2 on page 635.
v If you use WLM-established address spaces for your stored procedures, assign stored procedures to WLM application environments according to guidelines that are described in Part 5 (Volume 2) of DB2 Administration Guide.
of any packages that are invoked while running the stored procedure. These instances are invoked at either the same or different level of nesting under one DB2 connection or thread.
In compatibility mode and enabling-new-function mode, multiple calls to the same stored procedure do not produce multiple instances of the applications. To invoke multiple instances of remote stored procedures or local stored procedures that have SQL to access a remote site, both the client and server must be in DB2 Version 8 new-function mode or later. For local stored procedures that issue remote SQL, instances of the applications are created at the remote server site regardless of whether result sets exist or are left open between calls.
DB2 storage shortages and EDM POOL FULL conditions can occur if you call too many instances of a stored procedure or if you open too many cursors. If the stored procedure issues remote SQL statements to another DB2 server, these conditions can occur at both the DB2 client and at the DB2 server.
To optimize storage usage, two subsystem parameters control the maximum number of stored procedure instances and the maximum number of open cursors for a thread. MAX_ST_PROC controls the maximum number of stored procedure instances that you can call within the same thread. MAX_NUM_CUR controls the maximum number of cursors that can be open by the same thread. When either of the values from these subsystem parameters is exceeded while an application is running, the CALL statement or the OPEN statement receives SQLCODE -904.
The calling application for the stored procedure should close the result sets and issue commits often, as the sketch after this discussion shows. Even read-only applications should perform these actions. Applications that fail to do so terminate abnormally with DB2 storage shortage and EDM POOL FULL conditions.
You can set the maximum number of stored procedure instances and the maximum number of open cursors on installation panel DSNTIPX. For more information about setting the maximum number of stored procedure instances and the maximum number of open cursors per DB2 thread or connection, see the topic Routine parameters panel: DSNTIPX in DB2 Installation Guide.
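A minimal sketch of that advice, using the result set cursor names from Figure 223, closes each cursor as soon as the application finishes with it and then commits:

CLOSE C1;
CLOSE C2;
COMMIT;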
v When a stored procedure runs in a DB2-established stored procedures address space, z/OS is not aware that the stored procedures address space is processing work for DB2. One consequence of this is that z/OS accesses RACF-protected resources using the user ID associated with the z/OS task (ssnmSPAS) for stored procedures, not the user ID of the client.
v When a stored procedure runs in a WLM-established stored procedures address space, DB2 can establish a RACF environment for accessing non-DB2 resources. The authority used when the stored procedure accesses protected z/OS resources depends on the value of SECURITY in the stored procedure definition:
  - If the value of SECURITY is DB2, the authorization ID associated with the stored procedures address space is used.
  - If the value of SECURITY is USER, the authorization ID under which the CALL statement is executed is used (a definition sketch follows this list).
  - If the value of SECURITY is DEFINER, the authorization ID under which the CREATE PROCEDURE statement was executed is used.
v Not all non-DB2 resources can tolerate concurrent access by multiple TCBs in the same address space. You might need to serialize the access within your application.
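The following sketch shows where SECURITY USER appears in an external stored procedure definition. The procedure name, load module name, and WLM environment are hypothetical; the style follows the PROCX definitions shown earlier:

CREATE PROCEDURE MYSCHEMA.AUDITPRC(IN V1 INTEGER, OUT V2 CHAR(9))
  LANGUAGE C
  EXTERNAL NAME AUDMOD
  WLM ENVIRONMENT PRODENV
  SECURITY USER;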
CICS
Stored procedure applications can access CICS by one of these methods:
v Stored procedure DSNACICS. DSNACICS gives workstation applications a way to invoke CICS server programs while using TCP/IP or SNA as their communication protocol. The workstation applications use DB2 CONNECT to connect to a DB2 for z/OS subsystem, and then call DSNACICS to invoke the CICS server programs.
v Message Queue Interface (MQI) for asynchronous execution of CICS transactions
v External CICS interface (EXCI) for synchronous execution of CICS transactions
v Advanced Program-to-Program Communication (APPC), using the Common Programming Interface Communications (CPI Communications) application programming interface
For DB2-established address spaces, a CICS application runs as a separate unit of work from the unit of work under which the stored procedure runs. Consequently, results from CICS processing do not affect the completion of stored procedure processing. For example, a CICS transaction in a stored procedure that rolls back a unit of work does not prevent the stored procedure from committing the DB2 unit of work. Similarly, a rollback of the DB2 unit of work does not undo the successful commit of a CICS transaction.
For WLM-established address spaces, if your system is running a release of CICS that uses z/OS RRS, then z/OS RRS controls commitment of all resources.
IMS
If your system is not running a release of IMS that uses z/OS RRS, you can use one of the following methods to access DL/I data from your stored procedure:
v Use the CICS EXCI interface to run a CICS transaction synchronously. That CICS transaction can, in turn, access DL/I data.
v Invoke IMS transactions asynchronously using the MQI.
v Use APPC through the CPI Communications application programming interface.
3. In the JCL startup procedure for WLM-established stored procedures address space, add the data set name of the Debug Tool load library to the STEPLIB concatenation. For example, suppose that ENV1PROC is the JCL procedure for application environment WLMENV1. The modified JCL for ENV1PROC might look like this:
//DSNWLM   PROC RGN=0K,APPLENV=WLMENV1,DB2SSN=DSN,NUMTCB=8
//IEFPROC  EXEC PGM=DSNX9WLM,REGION=&RGN,TIME=NOLIMIT,
//         PARM='&DB2SSN,&NUMTCB,&APPLENV'
//STEPLIB  DD DISP=SHR,DSN=DSN810.RUNLIB.LOAD
//         DD DISP=SHR,DSN=CEE.SCEERUN
//         DD DISP=SHR,DSN=DSN810.SDSNLOAD
//         DD DISP=SHR,DSN=EQAW.SEQAMOD        <== DEBUG TOOL
4. On the workstation, start the VisualAge Remote Debugger daemon. This daemon waits for incoming requests from TCP/IP.
5. Call the stored procedure. When the stored procedure starts, a window that contains the debug session is displayed on the workstation. You can then execute Debug Tool commands to debug the stored procedure.
Debugging an SQL procedure or C language stored procedure with the Debug Tool and C/C++ Productivity Tools for z/OS
If you have the C/C++ Productivity Tools for z/OS installed on your workstation and the Debug Tool installed on your z/OS system, you can debug an SQL procedure or C or C++ stored procedure that runs in a WLM-established stored procedures address space. The code against which you run the debug tools is the C source program that is produced by the program preparation process for the stored procedure. For detailed information about the Debug Tool, see Debug Tool User's Guide and Reference.
After you write your C or C++ stored procedure or SQL procedure and set up the WLM environment, follow these steps to test the stored procedure with the Distributed Debugger feature of the C/C++ Productivity Tools for z/OS and the Debug Tool:
1. When you define the stored procedure, include run-time option TEST with the suboption VADTCPIP&ipaddr in your RUN OPTIONS argument.
   VADTCPIP& tells the Debug Tool that it is interfacing with a workstation that runs VisualAge C++ and is configured for TCP/IP communication with your z/OS system. ipaddr is the IP address of the workstation on which you display your debug information. For example, this RUN OPTIONS value in a stored procedure definition indicates that debug information should go to the workstation with IP address 9.63.51.17:
RUN OPTIONS POSIX(ON),TEST(,,,VADTCPIP&9.63.51.17:*)
2. Precompile the stored procedure. Ensure that the modified source program that is the output from the precompile step is in a permanent, catalogued data set. For an SQL procedure, the modified C source program that is the output from the second precompile step must be in a permanent, catalogued data set.
3. Compile the output from the precompile step. Specify the TEST, SOURCE, and OPT(0) compiler options.
4. In the JCL startup procedure for the stored procedures address space, add the data set name of the Debug Tool load library to the STEPLIB concatenation. For example, suppose that ENV1PROC is the JCL procedure for application environment WLMENV1. The modified JCL for ENV1PROC might look like this:
//DSNWLM   PROC RGN=0K,APPLENV=WLMENV1,DB2SSN=DSN,NUMTCB=8
//IEFPROC  EXEC PGM=DSNX9WLM,REGION=&RGN,TIME=NOLIMIT,
//         PARM='&DB2SSN,&NUMTCB,&APPLENV'
//STEPLIB  DD DISP=SHR,DSN=DSN810.RUNLIB.LOAD
//         DD DISP=SHR,DSN=CEE.SCEERUN
//         DD DISP=SHR,DSN=DSN810.SDSNLOAD
//         DD DISP=SHR,DSN=EQAW.SEQAMOD        <== DEBUG TOOL
5. On the workstation, start the Distributed Debugger daemon. This daemon waits for incoming requests from TCP/IP.
6. Call the stored procedure. When the stored procedure starts, a window that contains the debug session is displayed on the workstation. You can then execute Debug Tool commands to debug the stored procedure.
Debugging with Debug Tool for z/OS interactively and in batch mode
You can use the Debug Tool for z/OS to test z/OS stored procedures written in any of the supported languages either interactively or in batch mode.
Using Debug Tool interactively: To test a stored procedure interactively using the Debug Tool, you must have the Debug Tool installed on the z/OS system where the stored procedure runs. To debug your stored procedure using the Debug Tool, do the following:
v Compile the stored procedure with option TEST. This places information in the program that the Debug Tool uses during a debugging session.
v Invoke the Debug Tool. One way to do that is to specify the Language Environment run-time option TEST. The TEST option controls when and how the Debug Tool is invoked. The most convenient place to specify run-time options is in the RUN OPTIONS parameter of the CREATE PROCEDURE or ALTER PROCEDURE statement for the stored procedure. For example, you can code the TEST option using the following parameters:
TEST(ALL,*,PROMPT,JBJONES%SESSNA:)
Table 91 lists the effects each parameter has on the Debug Tool:
Table 91. Effects of the TEST option parameters on the Debug Tool
Parameter value      Effect on the Debug Tool
ALL                  The Debug Tool gains control when an attention interrupt, ABEND, or program or Language Environment condition of Severity 1 and above occurs.
*                    Debug commands will be entered from the terminal.
PROMPT               The Debug Tool is invoked immediately after Language Environment initialization.
JBJONES%SESSNA:      The Debug Tool initiates a session on a workstation identified to APPC/MVS as JBJONES with a session ID of SESSNA.
v If you want to save the output from your debugging session, issue the following command:
SET LOG ON FILE dbgtool.log;
This command saves a log of your debugging session to a file on the workstation called dbgtool.log. This should be the first command that you enter from the terminal or include in your commands file.
Using Debug Tool in batch mode: To test your stored procedure in batch mode, you must have the Debug Tool installed on the z/OS system where the stored procedure runs. To debug your stored procedure in batch mode using the Debug Tool, do the following:
v Compile the stored procedure with option TEST, if you plan to use the Language Environment run-time option TEST to invoke the Debug Tool. This places information in the program that the Debug Tool uses during a debugging session.
v Allocate a log data set to receive the output from the Debug Tool. Put a DD statement for the log data set in the start-up procedure for the stored procedures address space.
v Enter commands in a data set that you want the Debug Tool to execute. Put a DD statement for that data set in the start-up procedure for the stored procedures address space. To define the commands data set to the Debug Tool, specify the commands data set name or DD name in the TEST run-time option. For example, to specify that the Debug Tool use the commands that are in the data set that is associated with the DD name TESTDD, include the following parameter in the TEST option:
TEST(ALL,TESTDD,PROMPT,*)
The first command in the commands data set should direct the output from your debugging session to the log data set that you defined in the previous step. For example, if you defined a log data set with DD name INSPLOG in the stored procedures address space start-up procedure, the first command should be the following command:
SET LOG ON FILE INSPLOG;
v Invoke the Debug Tool. The following are two possible methods for invoking the Debug Tool:
  - Specify the run-time option TEST. The most convenient place to do that is in the RUN OPTIONS parameter of the CREATE PROCEDURE or ALTER PROCEDURE statement for the stored procedure.
  - Put CEETEST calls in the stored procedure source code. If you use this approach for an existing stored procedure, you must recompile, re-link, and bind it, and issue the STOP PROCEDURE and START PROCEDURE commands to reload the stored procedure.
  You can combine the run-time option TEST with CEETEST calls. For example, you might want to use TEST to name the commands data set but use CEETEST calls to control when the Debug Tool takes control.
For more information about using the Debug Tool for z/OS, see Debug Tool User's Guide and Reference.
v For each MSGFILE argument, you must add a DD statement to the JCL procedure used to start the DB2 stored procedures address space.
v Execute ALTER PROCEDURE with the RUN OPTIONS parameter to add the MSGFILE option to the list of run-time options for the stored procedure.
v Because multiple TCBs can be active in the DB2 stored procedures address space, you must serialize I/O to the data set associated with the MSGFILE option. For example:
  - Use the ENQ option of the MSGFILE option to serialize I/O to the message file.
  - To prevent multiple procedures from sharing a data set, each stored procedure can specify a unique DD name with the MSGFILE option.
  - If you debug your applications infrequently or on a DB2 test system, you can serialize I/O by temporarily running the DB2 stored procedures address space with NUMTCB=1 in the stored procedures address space start-up procedure. Ask your system administrator for assistance in doing this.
These considerations also apply to a WLM stored procedures address space.
which character strings are nul-terminated, the string length can be less than or equal to the column length plus 1. If the declared length of the host variable is greater than the column length, the predicate is stage 1 but cannot be a matching predicate for an index scan. For example, assume that a host variable and an SQL column are defined as follows:
C language declaration:  char string_hv[15]
SQL definition:          STRING_COL CHAR(12)
A predicate such as WHERE STRING_COL > :string_hv is not a matching predicate for an index scan because the length of string_hv is greater than the length of STRING_COL. One way to avoid an inefficient predicate using character host variables is to declare the host variable with a length that is less than or equal to the column length:
char string_hv[12]
Because this is a C language example, the host variable length could be 1 byte greater than the column length:
char string_hv[13]
For numeric comparisons, a comparison between a DECIMAL column and a float or real host variable is stage 2 if the precision of the DECIMAL column is greater than 15. For example, assume that a host variable and an SQL column are defined as follows:
C language declaration:  float float_hv
SQL definition:          DECIMAL_COL DECIMAL(16,2)
A predicate such as WHERE DECIMAL_COL = :float_hv is not a matching predicate for an index scan because the precision of DECIMAL_COL is greater than 15. However, if DECIMAL_COL is defined as DECIMAL(15,2), the predicate is stage 1 and indexable.
Assuming that subquery 1 and subquery 2 are the same type of subquery (either correlated or noncorrelated) and the subqueries are stage 2, DB2 evaluates the
subquery predicates in the order they appear in the WHERE clause. Subquery 1 rejects 10% of the total rows, and subquery 2 rejects 80% of the total rows.
The predicate in subquery 1 (which is referred to as P1) is evaluated 1000 times, and the predicate in subquery 2 (which is referred to as P2) is evaluated 900 times, for a total of 1900 predicate checks. However, if the order of the subquery predicates is reversed, P2 is evaluated 1000 times, but P1 is evaluated only 200 times, for a total of 1200 predicate checks.
Coding P2 before P1 appears to be more efficient if P1 and P2 take an equal amount of time to execute. However, if P1 is 100 times faster to evaluate than P2, then coding subquery 1 first might be advisable. If you notice a performance degradation, consider reordering the subqueries and monitoring the results. Consult Writing efficient subqueries on page 766 to help you understand what factors make one subquery run more slowly than another. If you are unsure, run EXPLAIN on the query with both a correlated and a noncorrelated subquery. By examining the EXPLAIN output and understanding your data distribution and SQL statements, you should be able to determine which form is more efficient.
This general principle can apply to all types of predicates. However, because subquery predicates can potentially be thousands of times more processor- and I/O-intensive than all other predicates, the order of subquery predicates is particularly important.
Regardless of coding order, DB2 performs noncorrelated subquery predicates before correlated subquery predicates, unless the subquery is transformed into a join. Refer to DB2 predicate manipulation on page 755 to see in what order DB2 will evaluate predicates and when you can control the evaluation order.
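As an illustration only (the tables are the DSN8810 samples and the predicates are hypothetical rather than the ones in the preceding discussion), the coding order of two subquery predicates is simply the order in which they appear in the WHERE clause; reversing the order means swapping the two predicates:

SELECT EMPNO, LASTNAME
  FROM DSN8810.EMP
  WHERE WORKDEPT IN (SELECT DEPTNO
                       FROM DSN8810.DEPT
                       WHERE DEPTNAME LIKE 'BRANCH%')
    AND SALARY > (SELECT AVG(SALARY)
                    FROM DSN8810.EMP);

Both subqueries here are noncorrelated, so only their coded order and their relative cost determine which one DB2 evaluates first.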
   STDDEV_SAMP
   VAR
   VAR_SAMP
If your query involves the functions MAX or MIN, refer to One-fetch access (ACCESSTYPE=I1) on page 814 to see whether your query could take advantage of that method.
If you rewrite the predicate in the following way, DB2 can evaluate it more efficiently:
WHERE SALARY > 50000/(1 + :hv1)
In the second form, the column is by itself on one side of the operator, and all the other values are on the other side of the operator. The expression on the right is called a noncolumn expression. DB2 can evaluate many predicates with noncolumn expressions at an earlier stage of processing called stage 1, so the queries take less time to run. For more information on noncolumn expressions and stage 1 processing, see Properties of predicates on page 735.
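For contrast, a form that keeps the column inside an expression, shown here only as an illustration and not as the exact predicate quoted on the preceding page, cannot be evaluated as early:

WHERE SALARY * (1 + :hv1) > 50000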
query tables, when DB2 executes a dynamic query, DB2 uses the contents of applicable materialized query tables if DB2 finds a performance advantage to doing so. For information about materialized query tables, see Part 5 (Volume 2) of DB2 Administration Guide.
Effect on access paths: This section explains the effect of predicates on access paths. Because SQL allows you to express the same query in different ways, knowing how predicates affect path selection helps you write queries that access data efficiently. This section describes:
v Properties of predicates
v General rules about predicate evaluation on page 739
v Predicate filter factors on page 746
v DB2 predicate manipulation on page 755
v Column correlation on page 752
Properties of predicates
Predicates in a HAVING clause are not used when selecting access paths; hence, in this section the term predicate means a predicate after WHERE or ON. A predicate influences the selection of an access path because of:
v Its type, as described in Predicate types on page 736
v Whether it is indexable, as described in Indexable and nonindexable predicates on page 737
v Whether it is stage 1 or stage 2
v Whether it contains a ROWID column, as described in Is direct row access possible? (PRIMARY_ACCESSTYPE = D) on page 803
There are special considerations for Predicates in the ON clause on page 738.
Predicate definitions: Predicates are identified as:
Simple or compound
   A compound predicate is the result of two predicates, whether simple or compound, connected together by AND or OR Boolean operators. All others are simple.
Local or join
   Local predicates reference only one table. They are local to the table and restrict the number of rows returned for that table. Join predicates involve more than one table or correlated reference. They determine the way rows are joined from two or more tables. For examples of their use, see Interpreting access to two or more tables (join) on page 815.
Boolean term
   Any predicate that is not contained by a compound OR predicate structure is a Boolean term. If a Boolean term is evaluated false for a particular row, the whole WHERE clause is evaluated false for that row.
Predicate types
The type of a predicate depends on its operator or syntax. The type determines what type of processing and filtering occurs when the predicate is evaluated. Table 92 shows the different predicate types.
Table 92. Definitions and examples of predicate types

Subquery
   Definition: Any predicate that includes another SELECT statement.
   Example: C1 IN (SELECT C10 FROM TABLE1)
Equal
   Definition: Any predicate that is not a subquery predicate and has an equal operator and no NOT operator. Also included are predicates of the form C1 IS NULL and C1 IS NOT DISTINCT FROM.
   Example: C1=100
Range
   Definition: Any predicate that is not a subquery predicate and has an operator in the following list: >, >=, <, <=, LIKE, or BETWEEN.
   Example: C1>100
IN-list
   Definition: A predicate of the form column IN (list of values).
NOT
   Definition: Any predicate that is not a subquery predicate and contains a NOT operator. Also included are predicates of the form C1 IS DISTINCT FROM.
Example: Influence of type on access paths: The following two examples show how the predicate type can influence DB2's choice of an access path. In each one, assume that a unique index I1 (C1) exists on table T1 (C1, C2), and that all values of C1 are positive integers. The following query has a range predicate:
SELECT C1, C2 FROM T1 WHERE C1 >= 0;
However, the predicate does not eliminate any rows of T1. Therefore, it could be determined during bind that a table space scan is more efficient than the index scan.
DB2 chooses the index access in this case because the index is highly selective on column C1.
Recommendation: To make your queries as efficient as possible, use indexable predicates in your queries and create suitable indexes on your tables. Indexable predicates allow the possible use of a matching index scan, which is often a very efficient access path.
The predicate is not indexable because the length of the column is shorter than the length of the constant. Example: The following predicate is not stage 1:
DECCOL>34.5, where DECCOL is defined as DECIMAL(18,2)
The predicate is not stage 1 because the precision of the decimal column is greater than 15.
v Whether DB2 evaluates the predicate before or after a join operation. A predicate that is evaluated after a join operation is always a stage 2 predicate.
v Join sequence
The same predicate might be stage 1 or stage 2, depending on the join sequence. Join sequence is the order in which DB2 joins tables when it evaluates a query. The join sequence is not necessarily the same as the order in which the tables appear in the predicate. Example: This predicate might be stage 1 or stage 2:
T1.C1=T2.C1+1
If T2 is the first table in the join sequence, the predicate is stage 1, but if T1 is the first table in the join sequence, the predicate is stage 2. You can determine the join sequence by executing EXPLAIN on the query and examining the resulting plan table. See Chapter 27, Using EXPLAIN to improve SQL performance, on page 789 for details.
All indexable predicates are stage 1. The predicate C1 LIKE '%BC' is stage 1, but is not indexable.
Recommendation: Use stage 1 predicates whenever possible.
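A minimal sketch of that technique, assuming that a plan table (PLAN_TABLE) already exists for your authorization ID and that the query number is arbitrary, might look like the following; the join sequence appears in the PLANNO order of the plan table rows:

EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT * FROM T1, T2 WHERE T1.C1 = T2.C1 + 1;

SELECT QUERYNO, QBLOCKNO, PLANNO, TNAME, METHOD
  FROM PLAN_TABLE
  WHERE QUERYNO = 100
  ORDER BY QBLOCKNO, PLANNO;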
v P1 is a simple BT predicate.
v P2 and P3 are simple non-BT predicates.
v P2 OR P3 is a compound BT predicate.
v P1 AND (P2 OR P3) is a compound BT predicate.
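Written as an SQL search condition, with hypothetical column predicates standing in for P1, P2, and P3, that structure looks like this:

WHERE C1 = 5
  AND (C2 = 7 OR C3 > 10)

Here C1 = 5 and the parenthesized compound predicate are Boolean terms; the two simple predicates inside the OR are not.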
Effect on access paths: In single-index processing, only Boolean term predicates are chosen for matching predicates. Hence, only indexable Boolean term predicates are candidates for matching index scans. To match index columns by predicates that are not Boolean terms, DB2 considers multiple-index access. In join operations, Boolean term predicates can reject rows at an earlier stage than can non-Boolean term predicates. Recommendation: For join operations, choose Boolean term predicates over non-Boolean term predicates whenever possible.
In an outer join, predicates that are evaluated after the join are stage 2 predicates. Predicates in a table expression can be evaluated before the join and can therefore be stage 1 predicates. Example: In the following statement, the predicate EDLEVEL > 100 is evaluated before the full join and is a stage 1 predicate:
SELECT * FROM (SELECT * FROM DSN8810.EMP WHERE EDLEVEL > 100) AS X FULL JOIN DSN8810.DEPT ON X.WORKDEPT = DSN8810.DEPT.DEPTNO;
For more information about join methods, see Interpreting access to two or more tables (join) on page 815.
2. All range predicates and predicates of the form column IS NOT NULL are evaluated.
3. All other predicate types are evaluated.
After both sets of rules are applied, predicates are evaluated in the order in which they appear in the query. Because you specify that order, you have some control over the order of evaluation.
Exception: Regardless of coding order, non-correlated subqueries are evaluated before correlated subqueries, unless DB2 transforms the subquery into a join.
v Tn col expr is an expression that contains a column in table Tn. The expression might be only that column.
v predicate is a predicate of any type.
In general, if you form a compound predicate by combining several simple predicates with OR operators, the result of the operation has the same characteristics as the simple predicate that is evaluated latest. For example, if two indexable predicates are combined with an OR operator, the result is indexable. If a stage 1 predicate and a stage 2 predicate are combined with an OR operator, the result is stage 2.
Table 93. Predicate types and processing Predicate Type COL = value Indexable? Y Y Y Y Stage 1? Y Y Y Y Notes 16 9, 11, 12, 15 20, 21 13
Table 93. Predicate types and processing (continued) Predicate Type Indexable? Y Y Y N N Y Y Y N N Y N N N N N N N Y Y Y N N N N Y N N Y Y Y N Stage 1? Y Y Y N N Y Y Y Y Y Y Y Y N Y Y Y Y Y Y Y Y N N N Y N N Y Y Y Y 26 on page 745 22 22 5 1, 5 1, 5 2, 5 6, 9, 11, 12, 14, 15, 25 6, 9, 11, 12, 13, 14, 15 8, 11 3, 25 3 3 10 6, 7, 11, 12, 13, 14 5 17, 18 8, 11 8, 11 21 Notes 9, 11, 12, 13 13 9, 11, 12, 13, 15,23
COL BETWEEN noncol expr1 AND noncol expr2 value BETWEEN COL1 AND COL2 COL BETWEEN COL1 AND COL2
COL BETWEEN expression1 AND expression2 COL LIKE 'pattern' COL IN (list) COL <> value COL <> noncol expr COL IS NOT NULL COL NOT BETWEEN value1 AND value2 COL NOT BETWEEN noncol expr1 AND noncol expr2 value NOT BETWEEN COL1 AND COL2 COL NOT IN (list) COL NOT LIKE ' char' COL LIKE '%char' COL LIKE '_char' COL LIKE host variable
T1.COL = T2 col expr T1.COL op T2 col expr T1.COL <> T2 col expr T1.COL1 = T1.COL2 T1.COL1 op T1.COL2 T1.COL1 <> T1.COL2 COL=(noncor subq)
COL = ANY (noncor subq) COL = ALL (noncor subq) COL op (noncor subq)
COL op ANY (noncor subq) COL op ALL (noncor subq) COL <> (noncor subq)
Table 93. Predicate types and processing (continued) Predicate Type Indexable? N N Y Y N N N N N N N N N N N N N N N N Y N Y N N N Y N Y N Stage 1? N N Y Y N N N N N N N N N N N N N N N Y Y Y Y N N Y Y Y Y N N N N N N N N N 4 4 22 22 22 22 22 8, 11 16 8, 11 9, 11, 12, 15 3 3 8, 11 6, 9, 11, 12, 14, 15 19 4 22 4 22 4 22 24 Notes 22
COL <> ANY (noncor subq) COL <> ALL (noncor subq) COL IN (noncor subq) (COL1,...COLn) IN (noncor subq) COL NOT IN (noncor subq) (COL1,...COLn) NOT IN (noncor subq) COL = (cor subq)
COL = ANY (cor subq) COL = ALL (cor subq) COL op (cor subq)
COL op ANY (cor subq) COL op ALL (cor subq) COL <> (cor subq)
COL <> ANY (cor subq) COL <> ALL (cor subq) COL IN (cor subq) (COL1,...COLn) IN (cor subq) COL NOT IN (cor subq) (COL1,...COLn) NOT IN (cor subq)
COL IS DISTINCT FROM value COL IS NOT DISTINCT FROM value COL IS DISTINCT FROM noncol expr COL IS NOT DISTINCT FROM noncol expr T1.COL1 IS DISTINCT FROM T2.COL2 T1.COL1 IS NOT DISTINCT FROM T2.COL2 T1.COL1 IS DISTINCT FROM T2 col expr T1.COL1 IS NOT DISTINCT FROM T2 col expr COL IS DISTINCT FROM (noncor subq) COL IS NOT DISTINCT FROM (noncor subq) COL IS DISTINCT FROM ANY (noncor subq)
COL IS NOT DISTINCT FROM ANY (noncor subq) N COL IS DISTINCT FROM ALL (noncor subq) COL IS NOT DISTINCT FROM ALL (noncor subq) COL IS NOT DISTINCT FROM (cor subq) COL IS DISTINCT FROM ANY (cor subq) COL IS DISTINCT FROM ANY (cor subq) COL IS NOT DISTINCT FROM ANY (cor subq) COL IS DISTINCT FROM ALL (cor subq) N N N N N N N
Table 93. Predicate types and processing (continued) Predicate Type Indexable? N N N N N N N Stage 1? N N N N N N N 19 Notes
COL IS NOT DISTINCT FROM ALL (cor subq) EXISTS (subq) NOT EXISTS (subq) expression = value expression <> value expression op value expression op (subq)
Notes to Table 93 on page 740:
1. Indexable only if an ESCAPE character is specified and used in the LIKE predicate. For example, COL LIKE '+%char' ESCAPE '+' is indexable.
2. Indexable only if the pattern in the host variable is an indexable constant (for example, host variable='char%').
3. If both COL1 and COL2 are from the same table, access through an index on either one is not considered for these predicates. However, the following query is an exception:
SELECT * FROM T1 A, T1 B WHERE A.C1 = B.C2;
   By using correlation names, the query treats one table as if it were two separate tables. Therefore, indexes on columns C1 and C2 are considered for access.
4. If the subquery has already been evaluated for a given correlation value, then the subquery might not have to be reevaluated.
5. Not indexable or stage 1 if a field procedure exists on that column.
6. The column on the left side of the join sequence must be in a different table from any columns on the right side of the join sequence.
7. The tables that contain the columns in expression1 or expression2 must already have been accessed.
8. The processing for WHERE NOT COL = value is like that for WHERE COL <> value, and so on.
9. If noncol expr, noncol expr1, or noncol expr2 is a noncolumn expression of one of these forms, then the predicate is not indexable:
   v noncol expr + 0
   v noncol expr - 0
   v noncol expr * 1
   v noncol expr / 1
   v noncol expr CONCAT empty string
10. COL, COL1, and COL2 can be the same column or different columns. The columns are in the same table.
11. Any of the following sets of conditions make the predicate stage 2:
    v The first value obtained before the predicate is evaluated is DECIMAL(p,s), where p>15, and the second value obtained before the predicate is evaluated is REAL or FLOAT.
    v The first value obtained before the predicate is evaluated is CHAR, VARCHAR, GRAPHIC, or VARGRAPHIC, and the second value obtained before the predicate is evaluated is DATE, TIME, or TIMESTAMP.
12. The predicate is stage 1 but not indexable if the first value obtained before the predicate is evaluated is CHAR or VARCHAR, the second value obtained before the predicate is evaluated is GRAPHIC or VARGRAPHIC, and the first value obtained before the predicate is evaluated is not Unicode mixed.
13. If both sides of the comparison are strings, any of the following sets of conditions makes the predicate stage 1 but not indexable:
    v The first value obtained before the predicate is evaluated is CHAR or VARCHAR, and the second value obtained before the predicate is evaluated is GRAPHIC or VARGRAPHIC.
    v Both of the following conditions are true:
      - Both sides of the comparison are CHAR or VARCHAR, or both sides of the comparison are BINARY or VARBINARY.
      - The length of the first value obtained before the predicate is evaluated is less than the length of the second value obtained before the predicate is evaluated.
    v Both of the following conditions are true:
      - Both sides of the comparison are GRAPHIC or VARGRAPHIC.
      - The length of the first value obtained before the predicate is evaluated is less than the length of the second value obtained before the predicate is evaluated.
    v Both of the following conditions are true:
      - The first value obtained before the predicate is evaluated is GRAPHIC or VARGRAPHIC, and the second value obtained before the predicate is evaluated is CHAR or VARCHAR.
      - The length of the first value obtained before the predicate is evaluated is less than the length of the second value obtained before the predicate is evaluated.
14. If both sides of the comparison are strings, but the two sides have different CCSIDs, the predicate is stage 1 and indexable only if the first value obtained before the predicate is evaluated is Unicode and the comparison does not meet any of the conditions in note 13.
15. Under either of these circumstances, the predicate is stage 2:
    v noncol expr is a case expression.
    v All of the following conditions are true:
      - noncol expr is the product or the quotient of two noncolumn expressions
      - noncol expr is an integer value
      - COL is a FLOAT or a DECIMAL column
16. If COL has the ROWID data type, DB2 tries to use direct row access instead of index access or a table space scan.
17. If COL has the ROWID data type, and an index is defined on COL, DB2 tries to use direct row access instead of index access.
18. IN-list predicates are indexable and stage 1 if the following conditions are true:
    v The IN list contains only simple items. For example, constants, host variables, parameter markers, and special registers.
    v The IN list does not contain any aggregate functions or scalar functions.
    v The IN list is not contained in a trigger's WHEN clause.
    v For numeric predicates where the left side column is DECIMAL with precision greater than 15, none of the items in the IN list are FLOAT.
23.
v For string predicates, the coded character set identifier is the same as the identifier for the left side column. v For DATE, TIME, and TIMESTAMP predicates, the left side column must be DATE, TIME, or TIMESTAMP. COL IN (corr subq) and EXISTS (corr subq) predicates might become indexable and stage 1 if they are transformed to a join during processing. The predicate types COL IS NULL and COL IS NOT NULL are stage 2 predicates when they query a column that is defined as NOT NULL. If the predicate type is COL IS NULL and the column is defined as NOT NULL, the table is not accessed because C1 cannot be NULL. The ANY and SOME keywords behave similarly. If a predicate with the ANY keyword is not indexable and not stage 1, a similar predicate with the SOME keyword is not indexable and not stage 1. Under either of these circumstances, the predicate is stage 2: v noncol expr is a case expression. v noncol expr is the product or the quotient of two noncolumn expressions, that product or quotient is an integer value, and COL is a FLOAT or a DECIMAL column. COL IN (noncor subq) is stage 1 for type N access only. Otherwise, it is stage 2. If the inner table is as EBCDIC or ASCII column and the outer table is a Unicode column, the predicate is stage 1 and indexable. This type of predicate is not stage 1 when a nullability mismatch is possible.
Both predicates are stage 1 but not Boolean terms. The compound is indexable. Multiple-index access for the compound predicate is not possible because no index has C2 as the leading column. For single-index access, C1 and C2 can be only index screening columns.
v WHERE C1 IN (cor subq) AND C2=C1
  Both predicates are stage 2 and not indexable. The index is not considered for matching-index access, and both predicates are evaluated at stage 2.
v WHERE C1=5 AND C2=7 AND (C3 + 5) IN (7,8)
  The first two predicates only are stage 1 and indexable. The index is considered for matching-index access, and all rows satisfying those two predicates are passed to stage 2 to evaluate the third predicate.
v WHERE C1=5 OR C2=7 OR (C3 + 5) IN (7,8)
  The third predicate is stage 2. The compound predicate is stage 2 and all three predicates are evaluated at stage 2. The simple predicates are not Boolean terms and the compound predicate is not indexable.
v WHERE C1=5 OR (C2=7 AND C3=C4)
  The third predicate is stage 2. The two compound predicates (C2=7 AND C3=C4) and (C1=5 OR (C2=7 AND C3=C4)) are stage 2. All predicates are evaluated at stage 2.
v WHERE (C1>5 OR C2=7) AND C3 = C4
  The compound predicate (C1>5 OR C2=7) is indexable and stage 1. The simple predicate C3=C4 is not stage 1; so the index is not considered for matching-index access. Rows that satisfy the compound predicate (C1>5 OR C2=7) are passed to stage 2 for evaluation of the predicate C3=C4.
see the discussion of maintaining statistics in the catalog in Part 4 (Volume 1) of DB2 Administration Guide. For information on updating the catalog manually, see Updating catalog statistics on page 785.
If you intend to update the catalog with statistics of your own choice, you should understand how DB2 uses:
v Default filter factors for simple predicates
v Filter factors for uniform distributions
v Interpolation formulas on page 748
v Filter factors for all distributions on page 749
Col IS NOT DISTINCT FROM Col IS DISTINCT FROM Col IN (literal list) Col Op literal Col LIKE literal Col BETWEEN literal1 and literal2
Note: Op is one of these operators: <, <=, >, >=. Literal is any constant value that is known at bind time.
Table 95. DB2 uniform filter factors by predicate type (continued)
Predicate type                       Filter factor
Col IS DISTINCT FROM                 1 - (1/COLCARDF)
Col IN (literal list)                number of literals/COLCARDF
Col Op1 literal                      interpolation formula
Col Op2 literal                      interpolation formula
Col LIKE literal                     interpolation formula
Col BETWEEN literal1 and literal2    interpolation formula
Note: Op1 is < or <=, and the literal is not a host variable. Op2 is > or >=, and the literal is not a host variable. Literal is any constant value that is known at bind time.
Filter factors for other predicate types: The examples selected in Table 94 on page 747 and Table 95 on page 747 represent only the most common types of predicates. If P1 is a predicate and F is its filter factor, then the filter factor of the predicate NOT P1 is (1 - F). But, filter factor calculation is dependent on many things, so a specific filter factor cannot be given for all predicate types.
Interpolation formulas
Definition: For a predicate that uses a range of values, DB2 calculates the filter factor by an interpolation formula. The formula is based on an estimate of the ratio of the number of values in the range to the number of values in the entire column of the table.
The formulas: The formulas that follow are rough estimates, subject to further modification by DB2. They apply to a predicate of the form col op literal. The value of (Total Entries) in each formula is estimated from the values in columns HIGH2KEY and LOW2KEY in catalog table SYSIBM.SYSCOLUMNS for column col: Total Entries = (HIGH2KEY value - LOW2KEY value).
v For the operators < and <=, where the literal is not a host variable:
  (Literal value - LOW2KEY value) / (Total Entries)
v For the operators > and >=, where the literal is not a host variable:
  (HIGH2KEY value - Literal value) / (Total Entries)
v For LIKE or BETWEEN:
  (High literal value - Low literal value) / (Total Entries)
Example: For column C2 in a predicate, suppose that the value of HIGH2KEY is 1400 and the value of LOW2KEY is 200. For C2, DB2 calculates (Total Entries) = 1200. For the predicate C2 BETWEEN 800 AND 1100, DB2 calculates the filter factor F as:
F = (1100 - 800)/1200 = 1/4 = 0.25
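If you want to look at the catalog values that feed these formulas, you can query SYSIBM.SYSCOLUMNS directly. The following is a minimal sketch; the creator, table, and column names (DSN8810, EMP, SALARY) are placeholders, and note that HIGH2KEY and LOW2KEY are stored in an internal format:

SELECT NAME, COLCARDF, HIGH2KEY, LOW2KEY
  FROM SYSIBM.SYSCOLUMNS
  WHERE TBCREATOR = 'DSN8810'
    AND TBNAME = 'EMP'
    AND NAME = 'SALARY';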
Interpolation for LIKE: DB2 treats a LIKE predicate as a type of BETWEEN predicate. Two values that bound the range qualified by the predicate are generated from the literal string in the predicate. Only the leading characters found before the first wildcard character (% or _) are used to generate the bounds. So if a wildcard character is the first character of the string, the filter factor is estimated as 1, and the predicate is estimated to reject no rows.
Defaults for interpolation: DB2 might not interpolate in some cases; instead, it can use a default filter factor. Defaults for interpolation are:
v Relevant only for ranges, including LIKE and BETWEEN predicates
v Used only when interpolation is not adequate
v Based on the value of COLCARDF
v Used whether uniform or additional distribution statistics exist on the column if either of the following conditions is met:
  - The predicate does not contain constants
  - COLCARDF < 4
Table 96 shows interpolation defaults for the operators <, <=, >, >= and for LIKE and BETWEEN.
Table 96. Default filter factors for interpolation
COLCARDF       Factor for Op   Factor for LIKE or BETWEEN
>=100000000    1/10,000        3/100000
>=10000000     1/3,000         1/10000
>=1000000      1/1,000         3/10000
>=100000       1/300           1/1000
>=10000        1/100           3/1000
>=1000         1/30            1/100
>=100          1/10            3/100
>=2            1/3             1/10
=1             1/1             1/1
<=0            1/3             1/10
Table 97. Predicates for which distribution statistics are used
Type of statistic   Single column or concatenated columns   Predicates
Frequency           Single                                  COL=literal
                                                            COL IS NULL
                                                            COL IN (literal-list)
                                                            COL op literal
                                                            COL BETWEEN literal AND literal
                                                            COL=host-variable
                                                            COL1=COL2
                                                            T1.COL=T2.COL
                                                            COL IS NOT DISTINCT FROM
Frequency           Concatenated                            COL=literal
                                                            COL IS NOT DISTINCT FROM
Cardinality         Single                                  COL=literal
                                                            COL IS NULL
                                                            COL IN (literal-list)
                                                            COL op literal
                                                            COL BETWEEN literal AND literal
                                                            COL=host-variable
                                                            COL1=COL2
                                                            T1.COL=T2.COL
                                                            COL IS NOT DISTINCT FROM
Cardinality         Concatenated                            COL=literal
                                                            COL=:host-variable
                                                            COL1=COL2
                                                            COL IS NOT DISTINCT FROM
Note: op is one of these operators: <, <=, >, >=.
How they are used: Columns COLVALUE and FREQUENCYF in table SYSCOLDIST contain distribution statistics. Regardless of the number of values in those columns, running RUNSTATS deletes the existing values and inserts rows for frequent values.
You can run RUNSTATS without the FREQVAL option, with the FREQVAL option in the correl-spec, with the FREQVAL option in the colgroup-spec, or in both, with the following effects (a sketch of the utility syntax follows this list):
v If you run RUNSTATS without the FREQVAL option, RUNSTATS inserts rows for the 10 most frequent values for the first column of the specified index.
v If you run RUNSTATS with the FREQVAL option in the correl-spec, RUNSTATS inserts rows for concatenated columns of an index. The NUMCOLS option specifies the number of concatenated index columns. The COUNT option specifies the number of frequent values. You can collect most-frequent values, least-frequent values, or both.
v If you run RUNSTATS with the FREQVAL option in the colgroup-spec, RUNSTATS inserts rows for the columns in the column group that you specify. The COUNT option specifies the number of frequent values. You can collect most-frequent values, least-frequent values, or both.
v If you specify the FREQVAL option in both the correl-spec and the colgroup-spec, RUNSTATS inserts rows for columns of the specified index and for columns in a column group.
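As a rough sketch of the two FREQVAL forms described in the list above; the database, table space, table, column, and index names here (DSNDB04.TS1, T1, C1, C2, IX1) are placeholders, and the exact syntax is documented in Part 2 of DB2 Utility Guide and Reference:

RUNSTATS TABLESPACE DSNDB04.TS1
  TABLE(T1)
    COLGROUP(C1, C2) FREQVAL COUNT 10 MOST
  INDEX(IX1 KEYCARD FREQVAL NUMCOLS 2 COUNT 10)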
See Part 2 of DB2 Utility Guide and Reference for more information about RUNSTATS. DB2 uses the frequencies in column FREQUENCYF for predicates that use the values in column COLVALUE and assumes that the remaining data are uniformly distributed.
Example: Filter factor for a single column: Suppose that the predicate is C1 IN (3,5) and that SYSCOLDIST contains these values for column C1:
COLVALUE   FREQUENCYF
3          .0153
5          .0859
8          .0627
The filter factor is .0153 + .0859 = .1012.
Example: Filter factor for correlated columns: Suppose that columns C1 and C2 are correlated. Suppose also that the predicate is C1=3 AND C2=5 and that SYSCOLDIST contains these values for columns C1 and C2:
COLVALUE   FREQUENCYF
1 1        .1176
2 2        .0588
3 3        .0588
3 5        .1176
4 4        .0588
5 3        .1764
5 5        .3529
6 6        .0588
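To inspect distribution statistics like these for your own tables, you can query SYSCOLDIST directly; a minimal sketch, in which the owner and table names (DSN8810, EMP) are placeholders:

SELECT NAME, NUMCOLUMNS, COLVALUE, FREQUENCYF, CARDF, TYPE
  FROM SYSIBM.SYSCOLDIST
  WHERE TBOWNER = 'DSN8810'
    AND TBNAME = 'EMP'
  ORDER BY NAME, FREQUENCYF DESC;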
Table T1 consists of columns C1, C2, C3, and C4. Index I1 is defined on table T1 and contains columns C1, C2, and C3. Suppose that the simple predicates in the compound predicate have the following characteristics:
C1='A'   Matching predicate
C3='B'   Screening predicate
C4='C'   Stage 1, nonindexable predicate
To determine the cost of accessing table T1 through index I1, DB2 performs these steps:
1. Estimates the matching index cost. DB2 determines the index matching filter factor by using single-column cardinality and single-column frequency statistics because only one column can be a matching column.
2. Estimates the total index filtering. This includes matching and screening filtering. If statistics exist on column group (C1,C3), DB2 uses those statistics. Otherwise DB2 uses the available single-column statistics for each of these columns. DB2 will also use FULLKEYCARDF as a bound. Therefore, it can be critical to have column group statistics on column group (C1, C3) to get an accurate estimate.
3. Estimates the table-level filtering. If statistics are available on column group (C1,C3,C4), DB2 uses them. Otherwise, DB2 uses statistics that exist on subsets of those columns.
Important: If you supply appropriate statistics at each level of filtering, DB2 is more likely to choose the most efficient access path. You can use RUNSTATS to collect any of the needed statistics, as shown in the sketch that follows.
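A hedged RUNSTATS sketch for collecting the column group statistics mentioned in steps 2 and 3; the database and table space name (DSNDB04.TS1) is a placeholder:

RUNSTATS TABLESPACE DSNDB04.TS1
  TABLE(T1)
    COLGROUP(C1, C3)
    COLGROUP(C1, C3, C4)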
Column correlation
Two columns of data, A and B of a single table, are correlated if the values in column A do not vary independently of the values in column B. Example: Table 98 is an excerpt from a large single table. Columns CITY and STATE are highly correlated, and columns DEPTNO and SEX are entirely independent.
Table 98. Data from the CREWINFO table
CITY          STATE   DEPTNO   SEX   EMPNO   ZIPCODE
Fresno        CA      A345     F     27375   93650
Fresno        CA      J123     M     12345   93710
Fresno        CA      J123     F     93875   93650
Fresno        CA      J123     F     52325   93792
New York      NY      J123     M     19823   09001
New York      NY      A345     M     15522   09530
Miami         FL      B499     M     83825   33116
Miami         FL      A345     F     35785   34099
Los Angeles   CA      X987     M     12131   90077
Los Angeles   CA      A345     M     38251   90091
In this simple example, for every value of column CITY that equals 'FRESNO', there is the same value in column STATE ('CA').
The result of the count of each distinct column is the value of COLCARDF in the DB2 catalog table SYSCOLUMNS. Multiply the previous two values together to get a preliminary result:
CITYCOUNT x STATECOUNT = ANSWER1
Next, count the number of distinct combinations of the two suspected columns in the same table; call this result ANSWER2.
Compare the result of the previous count (ANSWER2) with ANSWER1. If ANSWER2 is less than ANSWER1, then the suspected columns are correlated.
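As a sketch of the counts described above, assuming the CREWINFO table from Table 98 and the suspected columns CITY and STATE:

SELECT COUNT(DISTINCT CITY) AS CITYCOUNT,
       COUNT(DISTINCT STATE) AS STATECOUNT
  FROM CREWINFO;

SELECT COUNT(*) AS ANSWER2
  FROM (SELECT DISTINCT CITY, STATE
          FROM CREWINFO) AS V1;

Multiplying CITYCOUNT by STATECOUNT gives ANSWER1; the second query returns ANSWER2 directly.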
Consider the two compound predicates (labeled PREDICATE1 and PREDICATE2), their actual filtering effects (the proportion of rows they select), and their DB2 filter factors. Unless the proper catalog statistics are gathered, the filter factors are calculated as if the columns of the predicate are entirely independent (not correlated). When the columns in a predicate are correlated but the correlation is not reflected in catalog statistics, the actual filtering effect can be significantly different from the DB2 filter factor. Table 99 on page 754 shows how the actual filtering effect and the DB2 filter factor can differ, and how that difference can affect index choice and performance.
Table 99. Effects of column correlation on matching columns
                                                  INDEX 1                          INDEX 2
Matching predicates                               Predicate1                       Predicate2
                                                  CITY=FRESNO AND STATE=CA         DEPTNO=A345 AND SEX=F
Matching columns                                  2                                2
DB2 estimate for matching columns (filter factor) column=CITY, COLCARDF=4          column=DEPTNO, COLCARDF=4
                                                  Filter Factor=1/4                Filter Factor=1/4
                                                  column=STATE, COLCARDF=3         column=SEX, COLCARDF=2
                                                  Filter Factor=1/3                Filter Factor=1/2
Compound filter factor for matching columns       1/4 x 1/3 = 0.083                1/4 x 1/2 = 0.125
Qualified leaf pages based on DB2 estimations     0.083 x 10 = 0.83                0.125 x 10 = 1.25
                                                  INDEX CHOSEN (.8 < 1.25)
Actual filter factor based on data distribution   4/10                             2/10
Actual number of qualified leaf pages based on    4/10 x 10 = 4                    2/10 x 10 = 2
data distribution                                                                  BETTER INDEX CHOICE (2 < 4)
DB2 chooses an index that returns the fewest rows, partly determined by the smallest filter factor of the matching columns. Assume that filter factor is the only influence on the access path. The combined filtering of columns CITY and STATE seems very good, whereas the matching columns for the second index do not seem to filter as much. Based on those calculations, DB2 chooses Index 1 as an access path for Query 1.
The problem is that the filtering of columns CITY and STATE should not look good. Column STATE does almost no filtering. Since columns DEPTNO and SEX do a better job of filtering out rows, DB2 should favor Index 2 over Index 1.
Column correlation on index screening columns of an index: Correlation might also occur on nonmatching index columns, used for index screening. See Nonmatching index scan (ACCESSTYPE=I and MATCHCOLS=0) on page 812 for more information. Index screening predicates help reduce the number of data rows that qualify while scanning the index. However, if the index screening predicates are correlated, they do not filter as many data rows as their filter factors suggest. To illustrate this, use Query 1 on page 753 with the following indexes on Table 98 on page 752:
Index 3 (EMPNO,CITY,STATE) Index 4 (EMPNO,DEPTNO,SEX)
In the case of Index 3, because the columns CITY and STATE of Predicate 1 are correlated, the index access is not improved as much as estimated by the screening predicates and therefore Index 4 might be a better choice. (Note that index screening also occurs for indexes with matching columns greater than zero.) Multiple table joins: In Query 2, Table 100 on page 755 is added to the original query (see Query 1 on page 753) to show the impact of column correlation on join queries.
Table 100. Data from the DEPTINFO table
CITY          STATE   MANAGER   DEPT   DEPTNAME
Fresno        CA      Smith     J123   ADMIN
Los Angeles   CA      Jones     A345   LEGAL
Query 2
SELECT ... FROM CREWINFO T1, DEPTINFO T2
  WHERE T1.CITY = 'FRESNO' AND T1.STATE = 'CA'     (PREDICATE1)
    AND T1.DEPTNO = T2.DEPT
    AND T2.DEPTNAME = 'LEGAL';
The order in which tables are accessed in a join statement affects performance. The estimated combined filtering of Predicate1 is lower than its actual filtering. So table CREWINFO might look better as the first table accessed than it should. Also, due to the smaller estimated size for table CREWINFO, a nested loop join might be chosen for the join method. But, if many rows are selected from table CREWINFO because Predicate1 does not filter as many rows as estimated, then another join method or join sequence might be better.
where:
v The first three index keys are used (MATCHCOLS = 3).
v An index exists on C1, C2, C3, C4, C5.
v Some or all of the columns in the index are correlated in some way.
See Part 5 (Volume 2) of DB2 Administration Guide for information on using RUNSTATS to influence access path selection.
impact on the access path selection and your PLAN_TABLE results. This is because DB2 always uses an index access path when it is cost effective. Generating extra predicates potentially provides more indexable predicates, which creates more chances for an efficient index access path. Therefore, to understand your PLAN_TABLE results, you must understand how DB2 manipulates predicates. The information in Table 93 on page 740 is also helpful.
The outer join operation gives you these result table rows:
v The rows with matching values of C1 in tables T1 and T2 (the inner join result)
v The rows from T1 where C1 has no corresponding value in T2
v The rows from T2 where C1 has no corresponding value in T1
However, when you apply the predicate, you remove all rows in the result table that came from T2 where C1 has no corresponding value in T1. DB2 transforms the full join into a left join, which is more efficient:
SELECT * FROM T1 X LEFT JOIN T2 Y ON X.C1=Y.C1 WHERE X.C2 > 12;
Example: The predicate, X.C2>12, filters out all null values that result from the right join:
SELECT * FROM T1 X RIGHT JOIN T2 Y ON X.C1=Y.C1 WHERE X.C2>12;
Therefore, DB2 can transform the right join into a more efficient inner join without changing the result:
SELECT * FROM T1 X INNER JOIN T2 Y ON X.C1=Y.C1 WHERE X.C2>12;
The predicate that follows a join operation must have the following characteristics before DB2 transforms an outer join into a simpler outer join or into an inner join:
v The predicate is a Boolean term predicate.
v The predicate is false if one table in the join operation supplies a null value for all of its columns.
These predicates are examples of predicates that can cause DB2 to simplify join operations:
T1.C1 > 10
T1.C1 IS NOT NULL
T1.C1 > 10 OR T1.C2 > 15
T1.C1 > T2.C1
T1.C1 IN (1,2,4)
T1.C1 LIKE 'ABC%'
T1.C1 BETWEEN 10 AND 100
12 BETWEEN T1.C1 AND 100
Example: This example shows how DB2 can simplify a join operation because the query contains an ON clause that eliminates rows with unmatched values:
SELECT * FROM T1 X LEFT JOIN T2 Y FULL JOIN T3 Z ON Y.C1=Z.C1 ON X.C1=Y.C1;
Because the last ON clause eliminates any rows from the result table for which column values that come from T1 or T2 are null, DB2 can replace the full join with a more efficient left join to achieve the same result:
SELECT * FROM T1 X LEFT JOIN T2 Y LEFT JOIN T3 Z ON Y.C1=Z.C1 ON X.C1=Y.C1;
In one case, DB2 transforms a full outer join into a left join when you cannot write code to do it. This is the case where a view specifies a full outer join, but a subsequent query on that view requires only a left outer join. Example: Consider this view:
CREATE VIEW V1 (C1,T1C2,T2C2) AS
  SELECT COALESCE(T1.C1, T2.C1), T1.C2, T2.C2
  FROM T1 FULL JOIN T2
  ON T1.C1=T2.C1;
This view contains rows for which values of C2 that come from T1 are null. However, if you execute the following query, you eliminate the rows with null values for C2 that come from T1:
SELECT * FROM V1 WHERE T1C2 > 10;
Therefore, for this query, a left join between T1 and T2 would have been adequate. DB2 can execute this query as if the view V1 was generated with a left outer join so that the query runs more efficiently.
- A local predicate
- A join predicate
v The query also has a Boolean term predicate on one of the columns in the first predicate with one of the following formats:
  - COL1 op value, where op is =, <>, >, >=, <, or <=, and value is a constant, host variable, or special register.
  - COL1 (NOT) BETWEEN value1 AND value2
  - COL1=COL3
For outer join queries, DB2 generates predicates for transitive closure if the query has an ON clause of the form COL1=COL2 and a before join predicate that has one of the following formats:
v COL1 op value, where op is =, <>, >, >=, <, or <=
v COL1 (NOT) BETWEEN value1 AND value2
DB2 generates a transitive closure predicate for an outer join query only if the generated predicate does not reference the table with unmatched rows. That is, the generated predicate cannot reference the left table for a left outer join or the right table for a right outer join.
For a multiple-CCSID query, DB2 does not generate a transitive closure predicate if the predicate that would be generated has these characteristics:
v The generated predicate is a range predicate (op is >, >=, <, or <=).
v Evaluation of the query with the generated predicate results in different CCSID conversion from evaluation of the query without the predicate.
See Chapter 4 of DB2 SQL Reference for information on CCSID conversion.
When a predicate meets the transitive closure conditions, DB2 generates a new predicate, whether or not it already exists in the WHERE clause. The generated predicates have one of the following formats:
v COL op value, where op is =, <>, >, >=, <, or <=, and value is a constant, host variable, or special register.
v COL (NOT) BETWEEN value1 AND value2
v COL1=COL2 (for single-table or inner join queries only)
Example of transitive closure for an inner join: Suppose that you have written this query, which meets the conditions for transitive closure:
SELECT * FROM T1, T2 WHERE T1.C1=T2.C1 AND T1.C1>10;
DB2 generates an additional predicate to produce this query, which is more efficient:
SELECT * FROM T1, T2 WHERE T1.C1=T2.C1 AND T1.C1>10 AND T2.C1>10;
Example of transitive closure for an outer join: Suppose that you have written this outer join query:
SELECT * FROM (SELECT T1.C1 FROM T1 WHERE T1.C1>10) X LEFT JOIN (SELECT T2.C1 FROM T2) Y ON X.C1 = Y.C1;
The before join predicate, T1.C1>10, meets the conditions for transitive closure, so DB2 generates a query that has the same result as this more-efficient query:
SELECT * FROM (SELECT T1.C1 FROM T1 WHERE T1.C1>10) X LEFT JOIN (SELECT T2.C1 FROM T2 WHERE T2.C1>10) Y ON X.C1 = Y.C1;
Predicate redundancy: A predicate is redundant if evaluation of other predicates in the query already determines the result that the predicate provides. You can specify redundant predicates or DB2 can generate them. DB2 does not determine that any of your query predicates are redundant. All predicates that you code are evaluated at execution time regardless of whether they are redundant. If DB2 generates a redundant predicate to help select access paths, that predicate is ignored at execution.
Adding extra predicates: DB2 performs predicate transitive closure only on equal and range predicates. However, you can help DB2 to choose a better access path by adding transitive closure predicates for other types of operators, such as IN or LIKE. For example, consider the following SELECT statement:
SELECT * FROM T1,T2
  WHERE T1.C1=T2.C1
    AND T1.C1 LIKE 'A%';
If T1.C1=T2.C1 is true, and T1.C1 LIKE 'A%' is true, then T2.C1 LIKE 'A%' must also be true. Therefore, you can give DB2 extra information for evaluating the query by adding T2.C1 LIKE 'A%':
SELECT * FROM T1,T2
  WHERE T1.C1=T2.C1
    AND T1.C1 LIKE 'A%'
    AND T2.C1 LIKE 'A%';
DB2 often chooses an access path that performs well for a query with several host variables. However, in a new release or after maintenance has been applied, DB2 might choose a new access path that does not perform as well as the old access path. In many cases, the change in access paths is due to the default filter factors, which might lead DB2 to optimize the query in a different way. The two ways to change the access path for a query that contains host variables are:
v Bind the package or plan that contains the query with the option REOPT(ALWAYS) or the option REOPT(ONCE).
v Rewrite the query.
REOPT(ONCE)
REOPT(NONE)
Example: To determine which queries in plans and packages that are bound with the REOPT(ALWAYS) bind option will be reoptimized at run time, execute the following SELECT statements:
SELECT PLNAME,
       CASE WHEN STMTNOI <> 0
            THEN STMTNOI
            ELSE STMTNO
       END AS STMTNUM,
       SEQNO, TEXT
  FROM SYSIBM.SYSSTMT
  WHERE STATUS IN ('B','F','G','J')
  ORDER BY PLNAME, STMTNUM, SEQNO;

SELECT COLLID, NAME, VERSION,
       CASE WHEN STMTNOI <> 0
            THEN STMTNOI
            ELSE STMTNO
       END AS STMTNUM,
       SEQNO, STMT
  FROM SYSIBM.SYSPACKSTMT
  WHERE STATUS IN ('B','F','G','J')
  ORDER BY COLLID, NAME, VERSION, STMTNUM, SEQNO;
If you specify the bind option VALIDATE(RUN), and a statement in the plan or package is not bound successfully, that statement is incrementally bound at run time. If you also specify the bind option REOPT(ALWAYS), DB2 reoptimizes the access path during the incremental bind. Example: To determine which plans and packages have statements that will be incrementally bound, execute the following SELECT statements:
SELECT DISTINCT NAME
  FROM SYSIBM.SYSSTMT
  WHERE STATUS = 'F' OR STATUS = 'H';

SELECT DISTINCT COLLID, NAME, VERSION
  FROM SYSIBM.SYSPACKSTMT
  WHERE STATUS = 'F' OR STATUS = 'H';
To use the REOPT(ONCE) bind option most efficiently, first determine which dynamic SQL statements in your applications perform poorly with the REOPT(NONE) bind option and the REOPT(ALWAYS) bind option. Separate the code containing those statements into units that you bind into packages with the REOPT(ONCE) option. Bind the rest of the code into packages using the REOPT(NONE) bind option or the REOPT(ALWAYS) bind option, as appropriate. Then bind the plan with the REOPT(NONE) bind option. A dynamic statement in a package that is bound with REOPT(ONCE) is a candidate for reoptimization the first time that the statement is run. Example: To determine which queries in plans and packages that are bound with the REOPT(ONCE) bind option will be reoptimized at run time, execute the following SELECT statements:
SELECT PLNAME,
       CASE WHEN STMTNOI <> 0
            THEN STMTNOI
            ELSE STMTNO
       END AS STMTNUM,
       SEQNO, TEXT
  FROM SYSIBM.SYSSTMT
  WHERE STATUS IN ('J')
  ORDER BY PLNAME, STMTNUM, SEQNO;

SELECT COLLID, NAME, VERSION,
       CASE WHEN STMTNOI <> 0
            THEN STMTNOI
            ELSE STMTNO
       END AS STMTNUM,
       SEQNO, STMT
  FROM SYSIBM.SYSPACKSTMT
  WHERE STATUS IN ('J')
  ORDER BY COLLID, NAME, VERSION, STMTNUM, SEQNO;
If you specify the bind option VALIDATE(RUN), and a statement in the plan or package is not bound successfully, that statement is incrementally bound at run time. Example: To determine which plans and packages have statements that will be incrementally bound, execute the following SELECT statements:
SELECT DISTINCT NAME
  FROM SYSIBM.SYSSTMT
  WHERE STATUS = 'F' OR STATUS = 'H';

SELECT DISTINCT COLLID, NAME, VERSION
  FROM SYSIBM.SYSPACKSTMT
  WHERE STATUS = 'F' OR STATUS = 'H';
Assumptions: Because the column SEX has only two different values, 'M' and 'F', the value COLCARDF for SEX is 2. If the numbers of male and female employees are not equal, the actual filter factor is larger or smaller than the default filter factor of 1/2, depending on whether :HV1 is set to 'M' or 'F'.
Recommendation: One of these two actions can improve the access path:
v Bind the package or plan that contains the query with the REOPT(ALWAYS) bind option. This action causes DB2 to reoptimize the query at run time, using the input values you provide. You might also consider binding the package or plan with the REOPT(ONCE) bind option.
v Write predicates to influence the DB2 selection of an access path, based on your knowledge of actual filter factors. For example, you can break the query into three different queries, two of which use constants. DB2 can then determine the exact filter factor for most cases when it binds the plan.
SELECT (HV1);
  WHEN ('M')
    DO;
      EXEC SQL SELECT *
        FROM DSN8810.EMP
        WHERE SEX = 'M';
    END;
  WHEN ('F')
    DO;
      EXEC SQL SELECT *
        FROM DSN8810.EMP
        WHERE SEX = 'F';
    END;
  OTHERWISE
    DO;
      EXEC SQL SELECT *
        FROM DSN8810.EMP
        WHERE SEX = :HV1;
    END;
END;
Example 2: Known ranges
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Query:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2 AND C2 BETWEEN :HV3 AND :HV4;
Assumptions: You know that: v The application always provides a narrow range on C1 and a wide range on C2. v The desired access path is through index T1X1. Recommendation: If DB2 does not choose T1X1, rewrite the query as follows, so that DB2 does not choose index T1X2 on C2:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2 AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
Example 3: Variable ranges
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Query:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2 AND C2 BETWEEN :HV3 AND :HV4;
Assumptions: You know that the application provides both narrow and wide ranges on C1 and C2. Hence, default filter factors do not allow DB2 to choose the best access path in all cases. For example, a small range on C1 favors index T1X1 on C1, a small range on C2 favors index T1X2 on C2, and wide ranges on both C1 and C2 favor a table space scan. Recommendation: If DB2 does not choose the best access path, try either of the following changes to your application: v Use a dynamic SQL statement and embed the ranges of C1 and C2 in the statement. With access to the actual range values, DB2 can estimate the actual filter factors for the query. Preparing the statement each time it is executed requires an extra step, but it can be worthwhile if the query accesses a large amount of data. v Include some simple logic to check the ranges of C1 and C2, and then execute one of these static SQL statements, based on the ranges of C1 and C2:
SELECT * FROM T1
  WHERE C1 BETWEEN :HV1 AND :HV2
    AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);

SELECT * FROM T1
  WHERE C2 BETWEEN :HV3 AND :HV4
    AND (C1 BETWEEN :HV1 AND :HV2 OR 0=1);

SELECT * FROM T1
  WHERE (C1 BETWEEN :HV1 AND :HV2 OR 0=1)
    AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
Example 4: ORDER BY
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Query:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2 ORDER BY C2;
In this example, DB2 could choose one of the following actions:
v Scan index T1X1 and then sort the results by column C2
v Scan the table space in which T1 resides and then sort the results by column C2
v Scan index T1X2 and then apply the predicate to each row of data, thereby avoiding the sort
Which choice is best depends on the following factors:
v The number of rows that satisfy the range predicate
v The cluster ratio of the indexes
If the actual number of rows that satisfy the range predicate is significantly different from the estimate, DB2 might not choose the best access path.
Assumptions: You disagree with the DB2 choice.
Recommendation: In your application, use a dynamic SQL statement and embed the range of C1 in the statement. That allows DB2 to use the actual filter factor rather than the default, but requires extra processing for the PREPARE statement.
Example 5: A join operation
Tables A, B, and C each have indexes on columns C1, C2, C3, and C4.
Assumptions: The actual filter factors on table A are much larger than the default factors. Hence, DB2 underestimates the number of rows selected from table A and wrongly chooses that as the first table in the join.
Recommendations: You can:
v Reduce the estimated size of Table A by adding predicates
v Disfavor any index on the join column by making the join predicate on table A nonindexable
Example: The following query illustrates the second of those choices.
SELECT * FROM T1 A, T1 B, T1 C
  WHERE (A.C1 = B.C1 OR 0=1)
    AND A.C2 = C.C2
    AND A.C2 BETWEEN :HV1 AND :HV2
    AND A.C3 BETWEEN :HV3 AND :HV4
    AND A.C4 < :HV5
    AND B.C2 BETWEEN :HV6 AND :HV7
    AND B.C3 < :HV8
    AND C.C2 < :HV9;
The result of making the join predicate between A and B a nonindexable predicate (which cannot be used in single index access) disfavors the use of the index on column C1. This, in turn, might lead DB2 to access table A or B first. Or, it might lead DB2 to change the access type of table A or B, thereby influencing the join sequence of the other tables.
Correlated subqueries
Definition: A correlated subquery refers to at least one column of the outer query. Any predicate that contains a correlated subquery is a stage 2 predicate unless it is transformed to a join.
Example: In the following query, the correlation name, X, illustrates the subquery's reference to the outer query block.
SELECT * FROM DSN8810.EMP X
  WHERE JOB = 'DESIGNER'
    AND EXISTS (SELECT 1
                  FROM DSN8810.PROJ
                  WHERE DEPTNO = X.WORKDEPT
                    AND MAJPROJ = 'MA2100');
What DB2 does: A correlated subquery is evaluated for each qualified row of the outer query that is referred to. In executing the example, DB2:
1. Reads a row from table EMP where JOB='DESIGNER'.
2. Searches for the value of WORKDEPT from that row, in a table stored in memory. The in-memory table saves executions of the subquery. If the subquery has already been executed with the value of WORKDEPT, the result of the subquery is in the table and DB2 does not execute it again for the current row. Instead, DB2 can skip to step 5.
3. Executes the subquery, if the value of WORKDEPT is not in memory. That requires searching the PROJ table to check whether there is any project, where MAJPROJ is 'MA2100', for which the current WORKDEPT is responsible.
4. Stores the value of WORKDEPT and the result of the subquery in memory.
5. Returns the values of the current row of EMP to the application.
DB2 repeats this whole process for each qualified row of the EMP table.
Notes on the in-memory table: The in-memory table is applicable if the operator of the predicate that contains the subquery is one of the following operators:
<, <=, >, >=, =, <>, EXISTS, NOT EXISTS
The table is not used, however, if:
v There are more than 16 correlated columns in the subquery
v The sum of the lengths of the correlated columns is more than 256 bytes
v There is a unique index on a subset of the correlated columns of a table from the outer query
The in-memory table is a wrap-around table and does not guarantee saving the results of all possible duplicated executions of the subquery.
Noncorrelated subqueries
Definition: A noncorrelated subquery makes no reference to outer queries. Example:
SELECT * FROM DSN8810.EMP
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT IN (SELECT DEPTNO
                       FROM DSN8810.PROJ
                       WHERE MAJPROJ = 'MA2100');
What DB2 does: A noncorrelated subquery is executed once when the cursor is opened for the query. What DB2 does to process it depends on whether it returns a single value or more than one value. The query in the preceding example can return more than one value.
Single-value subqueries
When the subquery is contained in a predicate with a simple operator, the subquery is required to return 1 or 0 rows. The simple operator can be one of the following operators: <, <=, >, >=, =, <>, NOT <, NOT <=, NOT >, NOT >= The following noncorrelated subquery returns a single value:
SELECT *
  FROM DSN8810.EMP
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT <= (SELECT MAX(DEPTNO)
                       FROM DSN8810.PROJ);
What DB2 does: When the cursor is opened, the subquery executes. If it returns more than one row, DB2 issues an error. The predicate that contains the subquery is treated like a simple predicate with a constant specified, for example, WORKDEPT <= value.
Stage 1 and stage 2 processing: The rules for determining whether a predicate with a noncorrelated subquery that returns a single value is stage 1 or stage 2 are generally the same as for the same predicate with a single variable.
Multiple-value subqueries
A subquery can return more than one value if the operator is one of the following: op ANY, op ALL , op SOME, IN, EXISTS where op is any of the operators >, >=, <, <=, NOT <, NOT <=, NOT >, NOT >=. What DB2 does: If possible, DB2 reduces a subquery that returns more than one row to one that returns only a single row. That occurs when there is a range comparison along with ANY, ALL, or SOME. The following query is an example:
SELECT * FROM DSN8810.EMP
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT <= ANY (SELECT DEPTNO
                           FROM DSN8810.PROJ
                           WHERE MAJPROJ = 'MA2100');
DB2 calculates the maximum value for DEPTNO from table DSN8810.PROJ and removes the ANY keyword from the query. After this transformation, the subquery is treated like a single-value subquery.
That transformation can be made with a maximum value if the range operator is:
v > or >= with the quantifier ALL
v < or <= with the quantifier ANY or SOME
The transformation can be made with a minimum value if the range operator is:
v < or <= with the quantifier ALL
v > or >= with the quantifier ANY or SOME
The resulting predicate is determined to be stage 1 or stage 2 by the same rules as for the same predicate with a single-valued subquery.
When a subquery is sorted: A noncorrelated subquery is sorted when the comparison operator is IN, NOT IN, = ANY, <> ANY, = ALL, or <> ALL. The sort enhances the predicate evaluation, reducing the amount of scanning on the subquery result. When the value of the subquery becomes smaller than or equal to the expression on the left side, the scanning can be stopped and the predicate can be determined to be true or false. When the subquery result is a character data type and the left side of the predicate is a datetime data type, then the result is placed in a work file without sorting. For some noncorrelated subqueries that use IN, NOT IN, = ANY, <> ANY, = ALL, or <> ALL comparison operators, DB2 can more accurately pinpoint an entry point into the work file, thus further reducing the amount of scanning that is done.
Results from EXPLAIN: For information about the result in a plan table for a subquery that is sorted, see When are aggregate functions evaluated? (COLUMN_FN_EVAL) on page 808.
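Returning to the <= ANY example above, the transformation described there conceptually produces a query along these lines (an illustration of the transformation, not the exact internal form that DB2 generates):

SELECT * FROM DSN8810.EMP
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT <= (SELECT MAX(DEPTNO)
                       FROM DSN8810.PROJ
                       WHERE MAJPROJ = 'MA2100');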
v The subquery does not contain GROUP BY, HAVING, or aggregate functions.
v The subquery has only one table in the FROM clause.
v For a correlated subquery, the comparison operator of the predicate containing the subquery is IN, = ANY, or = SOME.
v For a noncorrelated subquery, the comparison operator of the predicate containing the subquery is IN, EXISTS, = ANY, or = SOME.
v For a noncorrelated subquery, the subquery select list has only one column, guaranteed by a unique index to have unique values.
v For a noncorrelated subquery, the left side of the predicate is a single column with the same data type and length as the subquery's column. (For a correlated subquery, the left side can be any expression.)
For an UPDATE or DELETE statement, or a SELECT statement that does not meet the previous conditions for transformation, DB2 does the transformation of a correlated subquery into a join if the following conditions are true:
v The transformation does not introduce redundancy.
v The subquery is correlated to its immediate outer query.
v The FROM clause of the subquery contains only one table, and the outer query (for SELECT), UPDATE, or DELETE references only one table.
v If the outer predicate is a quantified predicate with an operator of =ANY or an IN predicate, the following conditions are true:
  - The left side of the outer predicate is a single column.
  - The right side of the outer predicate is a subquery that references a single column.
  - The two columns have the same data type and length.
v The subquery does not contain the GROUP BY or DISTINCT clauses.
v The subquery does not contain aggregate functions.
v The SELECT clause of the subquery does not contain a user-defined function with an external action or a user-defined function that modifies data.
v The subquery predicate is a Boolean term predicate.
v The predicates in the subquery that provide correlation are stage 1 predicates.
v The subquery does not contain nested subqueries.
v The subquery does not contain a self-referencing UPDATE or DELETE.
v For a SELECT statement, the query does not contain the FOR UPDATE OF clause.
v For an UPDATE or DELETE statement, the statement is a searched UPDATE or DELETE.
v For a SELECT statement, parallelism is not enabled.
For a statement with multiple subqueries, DB2 does the transformation only on the last subquery in the statement that qualifies for transformation.
Example: The following subquery can be transformed into a join because it meets the first set of conditions for transformation:
SELECT * FROM EMP
  WHERE DEPTNO IN (SELECT DEPTNO FROM DEPT
                     WHERE LOCATION IN ('SAN JOSE', 'SAN FRANCISCO')
                       AND DIVISION = 'MARKETING');
If there is a department in the marketing division which has branches in both San Jose and San Francisco, the result of the SQL statement is not the same as if a join were done. The join makes each employee in this department appear twice because it matches once for the location San Jose and again for the location San Francisco, although it is the same department. Therefore, it is clear that to transform a subquery into a join, the uniqueness of the subquery select list must be guaranteed. For this example, a unique index on any of the following sets of columns would guarantee uniqueness:
v (DEPTNO)
v (DIVISION, DEPTNO)
v (DEPTNO, DIVISION)
The resultant query is:
SELECT EMP.* FROM EMP, DEPT
  WHERE EMP.DEPTNO = DEPT.DEPTNO
    AND DEPT.LOCATION IN ('SAN JOSE', 'SAN FRANCISCO')
    AND DEPT.DIVISION = 'MARKETING';
Example: The following subquery can be transformed into a join because it meets the second set of conditions for transformation:
UPDATE T1 SET T1.C1 = 1 WHERE T1.C1 =ANY (SELECT T2.C1 FROM T2 WHERE T2.C2 = T1.C2);
Results from EXPLAIN: For information about the result in a plan table for a subquery that is transformed into a join operation, see Is a subquery transformed into a join? on page 808.
Subquery tuning
The following three queries all retrieve the same rows. All three retrieve data about all designers in departments that are responsible for projects that are part of major project MA2100. These three queries show that there are several ways to retrieve a desired result.
Query A: A join of two tables
SELECT DSN8810.EMP.*
  FROM DSN8810.EMP, DSN8810.PROJ
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT = DEPTNO
    AND MAJPROJ = 'MA2100';
If you need columns from both tables EMP and PROJ in the output, you must use a join.
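For comparison, a correlated-subquery form (query B) and a noncorrelated IN-subquery form (query C) of the same retrieval would look roughly like these sketches, patterned on the correlated and noncorrelated examples shown earlier in this chapter:

Query B: A correlated subquery

SELECT * FROM DSN8810.EMP X
  WHERE JOB = 'DESIGNER'
    AND EXISTS (SELECT 1
                  FROM DSN8810.PROJ
                  WHERE DEPTNO = X.WORKDEPT
                    AND MAJPROJ = 'MA2100');

Query C: A noncorrelated IN subquery

SELECT * FROM DSN8810.EMP
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT IN (SELECT DEPTNO
                       FROM DSN8810.PROJ
                       WHERE MAJPROJ = 'MA2100');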
PROJ might contain duplicate values of DEPTNO in the subquery, so that an equivalent join cannot be written. In general, query A might be the one that performs best. However, if there is no index on DEPTNO in table PROJ, then query C might perform best. The IN-subquery predicate in query C is indexable. Therefore, if an index on WORKDEPT exists, DB2 might do IN-list access on table EMP. If you decide that a join cannot be used and there is an available index on DEPTNO in table PROJ, then query B might perform best. When looking at a problem subquery, see if the query can be rewritten into another format or see if there is an index that you can create to help improve the performance of the subquery. Knowing the sequence of evaluation is important, for the different subquery predicates and for all other predicates in the query. If the subquery predicate is costly, perhaps another predicate could be evaluated before that predicate so that the rows would be rejected before even evaluating the problem subquery predicate.
In a local environment, if you need to scroll through a limited subset of rows in a table, you can use FETCH FIRST n ROWS ONLY to make the result table smaller.
v In a distributed environment, if you do not need to use your scrollable cursors to modify data, do your cursor processing in a stored procedure. Using stored procedures can decrease the amount of network traffic that your application requires.
v In a TEMP database, create table spaces that are large enough for processing your scrollable cursors. DB2 uses declared temporary tables for processing the following types of scrollable cursors:
  - SENSITIVE STATIC SCROLL
  - INSENSITIVE SCROLL
  - ASENSITIVE SCROLL, if the cursor sensitivity is INSENSITIVE. A cursor that meets the criteria for a read-only cursor has an effective sensitivity of INSENSITIVE.
  See the DECLARE CURSOR statement in DB2 SQL Reference for more information about cursor sensitivity. See DB2 Installation Guide for more information about calculating the appropriate size for declared temporary tables for cursors.
v Remember to commit changes often for the following reasons:
  - You frequently need to leave scrollable cursors open longer than non-scrollable cursors.
  - There is an increased chance of deadlocks with scrollable cursors because scrollable cursors allow rows to be accessed and updated in any order. Frequent commits can decrease the chances of deadlocks.
  To prevent cursors from closing after commit operations, declare your scrollable cursors WITH HOLD (a sketch of such a declaration follows this list).
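A minimal sketch of a scrollable cursor declared WITH HOLD, assuming the sample table DSN8810.EMP and a cursor name C1 chosen only for illustration:

EXEC SQL DECLARE C1 SENSITIVE STATIC SCROLL CURSOR WITH HOLD FOR
  SELECT EMPNO, LASTNAME, SALARY
    FROM DSN8810.EMP;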
v While SENSITIVE STATIC scrollable cursors are open against a table, DB2 disallows reuse of space in that table space to prevent the scrollable cursor from fetching newly inserted rows that were not in the original result set. Although this is normal, it can result in a seemingly false out-of-space indication. The problem can be more noticeable in a data sharing environment with transactions that access LOBs. Consider the following preventive measures:
  - Ensure that applications commit frequently
  - Close sensitive scrollable cursors when they are no longer needed
  - Remove the WITH HOLD parameter for the sensitive scrollable cursor, if possible
  - Isolate LOB table spaces in a dedicated buffer pool in the data sharing environment
a predicate in which a column is compared to a constant. A limited partition scan occurs at run time if the column is compared to a host variable, parameter marker, or special register. The following example demonstrates how you can use a partitioning index to enable a limited partition scan on a set of partitions that DB2 needs to examine to satisfy a query predicate. Suppose that you create table Q1, with partitioning index DATE_IX and DPSI STATE_IX:
CREATE TABLESPACE TS1 NUMPARTS 3;

CREATE TABLE Q1 (DATE       DATE,
                 CUSTNO     CHAR(5),
                 STATE      CHAR(2),
                 PURCH_AMT  DECIMAL(9,2))
  IN TS1
  PARTITION BY (DATE)
    (PARTITION 1 ENDING AT ('2002-1-31'),
     PARTITION 2 ENDING AT ('2002-2-28'),
     PARTITION 3 ENDING AT ('2002-3-31'));

CREATE INDEX DATE_IX ON Q1 (DATE) PARTITIONED CLUSTER;
CREATE INDEX STATE_IX ON Q1 (STATE) PARTITIONED;
Now suppose that you want to execute the following query against table Q1:
SELECT CUSTNO, PURCH_AMT
  FROM Q1
  WHERE STATE = 'CA';
Because the predicate is based only on values of a DPSI key (STATE), DB2 must examine all partitions to find the matching rows. Now suppose that you modify the query in the following way:
SELECT CUSTNO, PURCH_AMT
  FROM Q1
  WHERE DATE BETWEEN '2002-01-01' AND '2002-01-31'
    AND STATE = 'CA';
Because the predicate is now based on values of a partitioning index key (DATE) and on values of a DPSI key (STATE), DB2 can eliminate the scanning of data partitions 2 and 3, which do not satisfy the query for the partitioning key. This can be determined at bind time because the columns of the predicate are compared to constants. Now suppose that you use host variables instead of constants in the same query:
SELECT CUSTNO, PURCH_AMT
  FROM Q1
  WHERE DATE BETWEEN :hv1 AND :hv2
    AND STATE = :hv3;
DB2 can use the predicate on the partitioning column to eliminate the scanning of unneeded partitions at run time. Writing queries to take advantage of limited partition scan is especially useful when a correlation exists between columns that are in a partitioning index and columns that are in a DPSI.
For example, suppose that you create table Q2, with partitioning index DATE_IX and DPSI ORDERNO_IX:
CREATE TABLESPACE TS2 NUMPARTS 3;

CREATE TABLE Q2 (DATE       DATE,
                 ORDERNO    CHAR(8),
                 STATE      CHAR(2),
                 PURCH_AMT  DECIMAL(9,2))
  IN TS2
  PARTITION BY (DATE)
    (PARTITION 1 ENDING AT ('2000-12-31'),
     PARTITION 2 ENDING AT ('2001-12-31'),
     PARTITION 3 ENDING AT ('2002-12-31'));

CREATE INDEX DATE_IX ON Q2 (DATE) PARTITIONED CLUSTER;
CREATE INDEX ORDERNO_IX ON Q2 (ORDERNO) PARTITIONED;
Also suppose that the first 4 bytes of each ORDERNO column value represent the four-digit year in which the order is placed. This means that the DATE column and the ORDERNO column are correlated. To take advantage of limited partition scan, when you write a query that has the ORDERNO column in the predicate, also include the DATE column in the predicate. The partitioning index on DATE lets DB2 eliminate the scanning of partitions that are not needed to satisfy the query. For example:
SELECT ORDERNO, PURCH_AMT
  FROM Q2
  WHERE ORDERNO BETWEEN '2002AAAA' AND '2002ZZZZ'
    AND DATE BETWEEN '2002-01-01' AND '2002-12-31';
v Using the CARDINALITY clause to improve the performance of queries with user-defined table function references on page 779
v Reducing the number of matching columns on page 780
v Rearranging the order of tables in a FROM clause on page 784
v Updating catalog statistics on page 785
v Using a subsystem parameter on page 786
Example: Suppose that you write an application that requires information on only the 20 employees with the highest salaries. To return only the rows of the employee table for those 20 employees, you can write a query like this:
SELECT LASTNAME, FIRSTNAME, EMPNO, SALARY FROM EMP ORDER BY SALARY DESC FETCH FIRST 20 ROWS ONLY;
Interaction between OPTIMIZE FOR n ROWS and FETCH FIRST n ROWS ONLY: In general, if you specify FETCH FIRST n ROWS ONLY but not OPTIMIZE FOR n ROWS in a SELECT statement, DB2 optimizes the query as if you had specified OPTIMIZE FOR n ROWS. When both the FETCH FIRST n ROWS ONLY clause and the OPTIMIZE FOR n ROWS clause are specified, the value for the OPTIMIZE FOR n ROWS clause is used for access path selection.
Example: Suppose that you submit the following SELECT statement:
SELECT * FROM EMP FETCH FIRST 5 ROWS ONLY OPTIMIZE FOR 20 ROWS;
The OPTIMIZE FOR value of 20 rows is used for access path selection.
whenever possible, DB2 avoids any access path that involves a sort. If you specify a value for n that is anything but 1, DB2 chooses an access path based on cost, and you won't necessarily avoid sorts.
How to specify OPTIMIZE FOR n ROWS for a CLI application: For a Call Level Interface (CLI) application, you can specify that DB2 uses OPTIMIZE FOR n ROWS for all queries. To do that, specify the keyword OPTIMIZEFORNROWS in the initialization file. For more information, see Chapter 3 of DB2 ODBC Guide and Reference.
How many rows you can retrieve with OPTIMIZE FOR n ROWS: The OPTIMIZE FOR n ROWS clause does not prevent you from retrieving all the qualifying rows. However, if you use OPTIMIZE FOR n ROWS, the total elapsed time to retrieve all the qualifying rows might be significantly greater than if DB2 had optimized for the entire result set.
When OPTIMIZE FOR n ROWS is effective: OPTIMIZE FOR n ROWS is effective only on queries that can be performed incrementally. If the query causes DB2 to gather the whole result set before returning the first row, DB2 ignores the OPTIMIZE FOR n ROWS clause, as in the following situations:
v The query uses SELECT DISTINCT or a set function distinct, such as COUNT(DISTINCT C1).
v Either GROUP BY or ORDER BY is used, and no index can give the necessary ordering.
v An aggregate function is used with no GROUP BY clause.
v The query uses UNION.
Example: Suppose that you query the employee table regularly to determine the employees with the highest salaries. You might use a query like this:
SELECT LASTNAME, FIRSTNAME, EMPNO, SALARY FROM EMP ORDER BY SALARY DESC;
An index is defined on column EMPNO, so employee records are ordered by EMPNO. If you have also defined a descending index on column SALARY, that index is likely to be very poorly clustered. To avoid many random, synchronous I/O operations, DB2 would most likely use a table space scan, then sort the rows on SALARY. This technique can cause a delay before the first qualifying rows can be returned to the application. If you add the OPTIMIZE FOR n ROWS clause to the statement, DB2 will probably use the SALARY index directly because you have indicated that you expect to retrieve the salaries of only the 20 most highly paid employees. Example: The following statement uses that strategy to avoid a costly sort operation:
SELECT LASTNAME,FIRSTNAME,EMPNO,SALARY FROM EMP ORDER BY SALARY DESC OPTIMIZE FOR 20 ROWS;
Effects of using OPTIMIZE FOR n ROWS:
v The join method could change. Nested loop join is the most likely choice, because it has low overhead cost and appears to be more efficient if you want to retrieve only one row.
v An index that matches the ORDER BY clause is more likely to be picked. This is because no sort would be needed for the ORDER BY.
v List prefetch is less likely to be picked.
v Sequential prefetch is less likely to be requested by DB2 because it infers that you only want to see a small number of rows.
v In a join query, the table with the columns in the ORDER BY clause is likely to be picked as the outer table if there is an index on that outer table that gives the ordering needed for the ORDER BY clause.
Recommendation: For a local query, specify OPTIMIZE FOR n ROWS only in applications that frequently fetch only a small percentage of the total rows in a query result set. For example, an application might read only enough rows to fill the end user's terminal screen. In cases like this, the application might read the remaining part of the query result set only rarely. For an application like this, OPTIMIZE FOR n ROWS can result in better performance by causing DB2 to favor SQL access paths that deliver the first n rows as fast as possible.
When you specify OPTIMIZE FOR n ROWS for a remote query, a small value of n can help limit the number of rows that flow across the network on any given transmission. You can improve the performance for receiving a large result set through a remote query by specifying a large value of n in OPTIMIZE FOR n ROWS. When you specify a large value, DB2 attempts to send the n rows in multiple transmissions. For better performance when retrieving a large result set, in addition to specifying OPTIMIZE FOR n ROWS with a large value of n in your query, do not execute other SQL statements until the entire result set for the query is processed. If retrieval of data for several queries overlaps, DB2 might need to buffer result set data in the DDF address space. See "Block fetching result sets" in Part 5 (Volume 2) of DB2 Administration Guide for more information.
For local or remote queries, to influence the access path most, specify OPTIMIZE FOR 1 ROW. This value does not have a detrimental effect on distributed queries.
and index access might not be appropriate for many queries. Defining tables as volatile lets you limit the set of queries that favor index access to queries that involve the volatile tables.
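A hedged sketch of marking an existing table as volatile so that DB2 favors index access for queries that reference it; the table name T1 is a placeholder:

ALTER TABLE T1 VOLATILE;

The same attribute can also be specified when the table is created.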
Using the CARDINALITY clause to improve the performance of queries with user-defined table function references
The cardinality of a user-defined table function is the number of rows that are returned when the function is invoked. DB2 uses this number to estimate the cost of executing a query that invokes a user-defined table function. The cost of executing a query is one of the factors that DB2 uses when it calculates the access path. Therefore, if you give DB2 an accurate estimate of a user-defined table function's cardinality, DB2 can better calculate the best access path.
You can specify a cardinality value for a user-defined table function by using the CARDINALITY clause of the SQL CREATE FUNCTION or ALTER FUNCTION statement. However, this value applies to all invocations of the function, whereas a user-defined table function might return different numbers of rows, depending on the query in which it is referenced. To give DB2 a better estimate of the cardinality of a user-defined table function for a particular query, you can use the CARDINALITY or CARDINALITY MULTIPLIER clause in that query. DB2 uses those clauses at bind time when it calculates the access cost of the user-defined table function. Using this clause is recommended only for programs that run on DB2 UDB for z/OS because the clause is not supported on earlier versions of DB2.
Example of using the CARDINALITY clause to specify the cardinality of a user-defined table function invocation: Suppose that when you created user-defined table function TUDF1, you set a cardinality value of 5, but in the following query, you expect TUDF1 to return 30 rows:
SELECT * FROM TABLE(TUDF1(3)) AS X;
Add the CARDINALITY 30 clause to tell DB2 that, for this query, TUDF1 should return 30 rows:
SELECT * FROM TABLE(TUDF1(3) CARDINALITY 30) AS X;
Example of using the CARDINALITY MULTIPLIER clause to specify the cardinality of a user-defined table function invocation: Suppose that when you created user-defined table function TUDF2, you set a cardinality value of 5, but in the following query, you expect TUDF2 to return 30 times that many rows:
SELECT * FROM TABLE(TUDF2(10)) AS X;
Add the CARDINALITY MULTIPLIER 30 clause to tell DB2 that, for this query, TUDF2 should return 5*30, or 150, rows:
SELECT * FROM TABLE(TUDF2(10) CARDINALITY MULTIPLIER 30) AS X;
CREATE TABLE PART_HISTORY (
  PART_TYPE    CHAR(2),     -- IDENTIFIES THE PART TYPE
  PART_SUFFIX  CHAR(10),    -- IDENTIFIES THE PART
  W_NOW        INTEGER,     -- TELLS WHERE THE PART IS
  W_FROM       INTEGER,     -- TELLS WHERE THE PART CAME FROM
  DEVIATIONS   INTEGER,     -- TELLS IF ANYTHING SPECIAL WITH THIS PART
  COMMENTS     CHAR(254),
  DESCRIPTION  CHAR(254),
  DATE1        DATE,
  DATE2        DATE,
  DATE3        DATE);

CREATE UNIQUE INDEX IX1 ON PART_HISTORY (PART_TYPE,PART_SUFFIX,W_FROM,W_NOW);
CREATE UNIQUE INDEX IX2 ON PART_HISTORY (W_FROM,W_NOW,DATE1);

+------------------------------------------------------------------------------+
| Table statistics              | Index statistics           IX1        IX2    |
|-------------------------------+-----------------------------------------------|
| CARDF    100,000              | FIRSTKEYCARDF             1000         50    |
| NPAGES    10,000              | FULLKEYCARDF           100,000    100,000    |
|                               | CLUSTERRATIO               99%        99%    |
|                               | NLEAF                     3000       2000    |
|                               | NLEVELS                      3          3    |
|------------------------------------------------------------------------------|
| column       cardinality   HIGH2KEY   LOW2KEY                                 |
| ---------    -----------   --------   -------                                 |
| Part_type           1000         ZZ        AA                                 |
| w_now                 50       1000         1                                 |
| w_from                50       1000         1                                 |
+------------------------------------------------------------------------------+

Q1:
SELECT * FROM PART_HISTORY        -- SELECT ALL PARTS
  WHERE PART_TYPE = 'BB'   P1     -- THAT ARE 'BB' TYPES
    AND W_FROM = 3         P2     -- THAT WERE MADE IN CENTER 3
    AND W_NOW = 3          P3     -- AND ARE STILL IN CENTER 3
+------------------------------------------------------------------------------+
| Filter factor of these predicates:                                            |
|   P1 = 1/1000 = .001                                                          |
|   P2 = 1/50   = .02                                                           |
|   P3 = 1/50   = .02                                                           |
|------------------------------------------------------------------------------|
|            ESTIMATED VALUES             |        WHAT REALLY HAPPENS          |
|  index  matchcols  filter    data       |  index  matchcols  filter    data   |
|                    factor    rows       |                    factor    rows   |
|  ix2    2          .02*.02   40         |  ix2    2          .02*.50   1000   |
|  ix1    1          .001      100        |  ix1    1          .001      100    |
+------------------------------------------------------------------------------+
DB2 picks IX2 to access the data, but IX1 would be roughly 10 times quicker. The problem is that 50% of all parts from center number 3 are still in Center 3; they have not moved. Assume that there are no statistics on the correlated columns in catalog table SYSCOLDIST. Therefore, DB2 assumes that the parts from center number 3 are evenly distributed among the 50 centers. You can get the desired access path by changing the query. To discourage the use of IX2 for this particular query, you can change the third predicate to be nonindexable.
SELECT * FROM PART_HISTORY
  WHERE PART_TYPE = 'BB'
    AND W_FROM = 3
    AND (W_NOW = 3 + 0);
Now index IX2 is not picked, because it has only one match column. The preferred index, IX1, is picked. The third predicate is a nonindexable predicate, so an index is not used for the compound predicate.
You can make a predicate nonindexable in many ways. The recommended way is to add 0 to a predicate that evaluates to a numeric value or to concatenate an empty string to a predicate that evaluates to a character value.
Indexable                Nonindexable
T1.C3=T2.C4              (T1.C3=T2.C4 CONCAT '')
T1.C1=5                  T1.C1=5+0
These techniques do not affect the result of the query and cause only a small amount of overhead. The preferred technique for improving the access path when a table has correlated columns is to generate catalog statistics on the correlated columns. You can do that either by running RUNSTATS or by updating catalog table SYSCOLDIST manually.
v As the correlation of columns in the fact table changes, reevaluate the index to determine if columns in the index should be reordered.
v Define indexes on dimension tables to improve access to those tables.
v When you have executed a number of queries and have more information about the way that the data is used, follow these recommendations:
  - Put more selective columns at the beginning of the index.
  - If a number of queries do not reference a dimension, put the column that corresponds to that dimension at the end of the index.
When a fact table has more than one multi-column index and none of those indexes contains all key columns, DB2 evaluates all of the indexes and uses the index that best exploits star join.
D1...Dn Dimension tables. C1...Cn Key columns in the fact table. C1 is joined to dimension D1, C2 is joined to dimension D2, and so on. cardD1...cardDn Cardinality of columns C1...Cn in dimension tables D1...Dn. cardC1...cardCn Cardinality of key columns C1...Cn in fact table F. cardCij Cardinality of pairs of column values from key columns Ci and Cj in fact table F. cardCijk Cardinality of triplets of column values from key columns Ci, Cj, and Ck in fact table F. Density A measure of the correlation of key columns in the fact table. The density is calculated as follows: For a single column cardCicardDi For pairs of columns cardCij(cardDi*cardDj) For triplets of columns cardCijk(cardDi*cardDj*cardDk) S The current set of columns whose order in the index is not yet determined.
S-{Cm}
   The current set of columns, excluding column Cm.

Follow these steps to derive a fact table index for a star-join query that joins n columns of fact table F to n dimension tables D1 through Dn:
1. Define the set of columns whose index key order is to be determined as the n columns of fact table F that correspond to dimension tables. That is, S={C1,...Cn} and L=n.
2. Calculate the density of all sets of L-1 columns in S.
3. Find the lowest density. Determine which column is not in the set of columns with the lowest density. That is, find column Cm in S, such that for every other Ci in S, density(S-{Cm}) < density(S-{Ci}).
4. Make Cm the Lth column of the index.
5. Remove Cm from S.
6. Decrement L by 1.
7. Repeat steps 2 through 6 a total of n-2 times. The remaining column after iteration n-2 is the first column of the index.

Example of determining column order for a fact table index: Suppose that a star schema has three dimension tables with the following cardinalities:
cardD1=2000 cardD2=500 cardD3=100
Now suppose that the cardinalities of single columns and pairs of columns in the fact table are:
cardC1=2000 cardC2=433 cardC3=100 cardC12=625000 cardC13=196000 cardC23=994
Determine the best multi-column index for this star schema. Step 1: Calculate the density of all pairs of columns in the fact table:
density(C1,C2) = 625000 / (2000*500) = 0.625
density(C1,C3) = 196000 / (2000*100) = 0.98
density(C2,C3) = 994 / (500*100)     = 0.01988
Step 2: Find the pair of columns with the lowest density. That pair is (C2,C3). Determine which column of the fact table is not in that pair. That column is C1. Step 3: Make column C1 the third column of the index. Step 4: Repeat steps 1 through 3 to determine the second and first columns of the index key:
density(C2) = 433/500 = 0.866
density(C3) = 100/100 = 1.0
The column with the lowest density is C2. Therefore, C3 is the second column of the index. The remaining column, C2, is the first column of the index. That is, the best order for the multi-column index is C2, C3, C1.
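Based on this analysis, the fact table index for this example would be defined with C2 as the leading column. A minimal sketch of the corresponding CREATE INDEX statement follows; the index name FACTIX is only illustrative, and F stands for the fact table in the example:

CREATE INDEX FACTIX
  ON F (C2, C3, C1);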
If you think that the join sequence is inefficient, try rearranging the order of the tables and views in the FROM clause to match a join sequence that might perform better.
This query has a problem with data correlation. DB2 does not know that 50% of the parts that were made in Center 3 are still in Center 3. The problem was circumvented by making a predicate nonindexable. But suppose that hundreds of users are writing queries similar to that query. Having all users change their queries would be impossible. In this type of situation, the best solution is to change the catalog statistics.

For the query in Figure 227 on page 781, you can update the catalog statistics in one of two ways:
v Run the RUNSTATS utility, and request statistics on the correlated columns W_FROM and W_NOW. This is the preferred method. See the discussion of maintaining statistics in the catalog in Part 5 (Volume 2) of DB2 Administration Guide and Part 2 of DB2 Utility Guide and Reference for more information.
v Update the catalog statistics manually.

Updating the catalog to adjust for correlated columns: One catalog table that you can update is SYSIBM.SYSCOLDIST, which gives information about a column or set of columns in a table. Assume that because columns W_NOW and W_FROM are correlated, only 100 distinct values exist for the combination of the two columns, rather than 2500 (50 for W_FROM * 50 for W_NOW). Insert a row like this to indicate the new cardinality:
INSERT INTO SYSIBM.SYSCOLDIST
  (FREQUENCY, FREQUENCYF, IBMREQD,
   TBOWNER, TBNAME, NAME, COLVALUE,
   TYPE, CARDF, COLGROUPCOLNO, NUMCOLUMNS)
  VALUES(0, -1, 'N',
         'USRT001', 'PART_HISTORY', 'W_FROM', '',
         'C', 100, X'00040003', 2);
You can also use the RUNSTATS utility to put this information in SYSCOLDIST. See DB2 Utility Guide and Reference for more information.
You tell DB2 about the frequency of a certain combination of column values by updating SYSIBM.SYSCOLDIST. For example, you can indicate that 1% of the rows in PART_HISTORY contain the values 3 for W_FROM and 3 for W_NOW by inserting this row into SYSCOLDIST:
INSERT INTO SYSIBM.SYSCOLDIST
  (FREQUENCY, FREQUENCYF, STATSTIME, IBMREQD,
   TBOWNER, TBNAME, NAME, COLVALUE,
   TYPE, CARDF, COLGROUPCOLNO, NUMCOLUMNS)
  VALUES(0, .0100, '1996-12-01-12.00.00.000000', 'N',
         'USRT001', 'PART_HISTORY', 'W_FROM', X'00800000030080000003',
         'F', -1, X'00040003', 2);
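To see the statistics rows that you have inserted for the correlated columns, you can query SYSCOLDIST directly. The following SELECT is a sketch that uses the table owner and table name from this example:

SELECT NAME, NUMCOLUMNS, COLGROUPCOLNO, TYPE, CARDF, FREQUENCYF
  FROM SYSIBM.SYSCOLDIST
  WHERE TBOWNER = 'USRT001'
    AND TBNAME = 'PART_HISTORY';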
Updating the catalog for joins with table functions: Updating catalog statistics might cause extreme performance problems if the statistics are not updated correctly. Monitor performance, and be prepared to reset the statistics to their original values if performance problems arise.
Tables with default statistics for NPAGES (NPAGES =-1) are presumed to have 501 pages. For such tables, DB2 will favor matching index access only when NPGTHRSH is set above 501. Recommendation: Before you use NPGTHRSH, be aware that in some cases, matching index access can be more costly than a table space scan or nonmatching index access. Specify a small value for NPGTHRSH (10 or less), which limits the
number of tables for which DB2 favors matching index access. If you need to use matching index access only for specific tables, create or alter those tables with the VOLATILE parameter, rather than using the system-wide NPGTHRSH parameter. See Favoring index access on page 778.
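For example, a minimal sketch of marking one table as volatile so that DB2 favors matching index access for that table only; the statement assumes the PART_HISTORY table from the earlier example:

ALTER TABLE PART_HISTORY VOLATILE;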
response time from adding processor resources, and estimating the amount of time a utility job will take to run. DB2 Estimator for Windows can be downloaded from the Web.
v DB2-supplied EXPLAIN stored procedure. Users with authority to run EXPLAIN directly can obtain access path information by calling the DB2-supplied EXPLAIN stored procedure. For more information about the DB2-supplied EXPLAIN stored procedure, see Appendix J, DB2-supplied stored procedures, on page 1127.

Chapter overview: This chapter includes the following topics:
v Obtaining PLAN_TABLE information from EXPLAIN
v Asking questions about data access on page 800
v Interpreting access to a single table on page 809
v Interpreting access to two or more tables (join) on page 815
v Interpreting data prefetch on page 830
v Determining sort activity on page 834
v Processing for views and nested table expressions on page 836
v Estimating a statement's cost on page 842

See also Chapter 28, Parallel operations and query performance, on page 847.
Creating PLAN_TABLE
Before you can use EXPLAIN, a PLAN_TABLE must be created to hold the results of EXPLAIN. A copy of the statements that are needed to create the table is in the DB2 sample library, under the member name DSNTESC. (Unless you need the information that they provide, you do not need to create a function table or statement table to use EXPLAIN.) Figure 228 on page 792 shows the most current format of a plan table, which consists of 58 columns. Table 101 on page 794 shows the content of each column.
CREATE TABLE userid.PLAN_TABLE
 (QUERYNO            INTEGER      NOT NULL,
  QBLOCKNO           SMALLINT     NOT NULL,
  APPLNAME           CHAR(8)      NOT NULL,
  PROGNAME           VARCHAR(128) NOT NULL,
  PLANNO             SMALLINT     NOT NULL,
  METHOD             SMALLINT     NOT NULL,
  CREATOR            VARCHAR(128) NOT NULL,
  TNAME              VARCHAR(128) NOT NULL,
  TABNO              SMALLINT     NOT NULL,
  ACCESSTYPE         CHAR(2)      NOT NULL,
  MATCHCOLS          SMALLINT     NOT NULL,
  ACCESSCREATOR      VARCHAR(128) NOT NULL,
  ACCESSNAME         VARCHAR(128) NOT NULL,
  INDEXONLY          CHAR(1)      NOT NULL,
  SORTN_UNIQ         CHAR(1)      NOT NULL,
  SORTN_JOIN         CHAR(1)      NOT NULL,
  SORTN_ORDERBY      CHAR(1)      NOT NULL,
  SORTN_GROUPBY      CHAR(1)      NOT NULL,
  SORTC_UNIQ         CHAR(1)      NOT NULL,
  SORTC_JOIN         CHAR(1)      NOT NULL,
  SORTC_ORDERBY      CHAR(1)      NOT NULL,
  SORTC_GROUPBY      CHAR(1)      NOT NULL,
  TSLOCKMODE         CHAR(3)      NOT NULL,
  TIMESTAMP          CHAR(16)     NOT NULL,
  REMARKS            VARCHAR(762) NOT NULL,
  PREFETCH           CHAR(1)      NOT NULL WITH DEFAULT,
  COLUMN_FN_EVAL     CHAR(1)      NOT NULL WITH DEFAULT,
  MIXOPSEQ           SMALLINT     NOT NULL WITH DEFAULT,
  VERSION            VARCHAR(64)  NOT NULL WITH DEFAULT,
  COLLID             VARCHAR(128) NOT NULL WITH DEFAULT,
  ACCESS_DEGREE      SMALLINT,
  ACCESS_PGROUP_ID   SMALLINT,
  JOIN_DEGREE        SMALLINT,
  JOIN_PGROUP_ID     SMALLINT,
  SORTC_PGROUP_ID    SMALLINT,
  SORTN_PGROUP_ID    SMALLINT,
  PARALLELISM_MODE   CHAR(1),
  MERGE_JOIN_COLS    SMALLINT,
  CORRELATION_NAME   VARCHAR(128),
  PAGE_RANGE         CHAR(1)      NOT NULL WITH DEFAULT,
  JOIN_TYPE          CHAR(1)      NOT NULL WITH DEFAULT,
  GROUP_MEMBER       CHAR(8)      NOT NULL WITH DEFAULT,
  IBM_SERVICE_DATA   VARCHAR(254) FOR BIT DATA NOT NULL WITH DEFAULT,
  WHEN_OPTIMIZE      CHAR(1)      NOT NULL WITH DEFAULT,
  QBLOCK_TYPE        CHAR(6)      NOT NULL WITH DEFAULT,
  BIND_TIME          TIMESTAMP    NOT NULL WITH DEFAULT,
  OPTHINT            VARCHAR(128) NOT NULL WITH DEFAULT,
  HINT_USED          VARCHAR(128) NOT NULL WITH DEFAULT,
  PRIMARY_ACCESSTYPE CHAR(1)      NOT NULL WITH DEFAULT,
  PARENT_QBLOCKNO    SMALLINT     NOT NULL WITH DEFAULT,
  TABLE_TYPE         CHAR(1),
  TABLE_ENCODE       CHAR(1)      NOT NULL WITH DEFAULT,
  TABLE_SCCSID       SMALLINT     NOT NULL WITH DEFAULT,
  TABLE_MCCSID       SMALLINT     NOT NULL WITH DEFAULT,
  TABLE_DCCSID       SMALLINT     NOT NULL WITH DEFAULT,
  ROUTINE_ID         INTEGER      NOT NULL WITH DEFAULT,
  CTEREF             SMALLINT     NOT NULL WITH DEFAULT,
  STMTTOKEN          VARCHAR(240))
 IN database-name.table-space-name
 CCSID EBCDIC;

Figure 228. 58-column format of PLAN_TABLE
Your plan table can use many other formats with fewer columns, as shown in Figure 229. However, use the 58-column format because it gives you the most information. If you alter an existing plan table with fewer than 58 columns to the 58-column format:
v If they exist, change the data type of these columns: PROGNAME, CREATOR, TNAME, ACCESSTYPE, ACCESSNAME, REMARKS, COLLID, CORRELATION_NAME, IBM_SERVICE_DATA, OPTHINT, and HINT_USED. Use the values shown in Figure 228 on page 792.
v Add the missing columns to the table. Use the column definitions shown in Figure 228 on page 792. For most added columns, specify NOT NULL WITH DEFAULT so that default values are included for the rows in the table. However, as the figure shows, certain columns do allow nulls. Do not specify those columns as NOT NULL WITH DEFAULT.
QUERYNO           INTEGER      NOT NULL
QBLOCKNO          SMALLINT     NOT NULL
APPLNAME          CHAR(8)      NOT NULL
PROGNAME          CHAR(8)      NOT NULL
PLANNO            SMALLINT     NOT NULL
METHOD            SMALLINT     NOT NULL
CREATOR           CHAR(8)      NOT NULL
TNAME             CHAR(18)     NOT NULL
TABNO             SMALLINT     NOT NULL
ACCESSTYPE        CHAR(2)      NOT NULL
MATCHCOLS         SMALLINT     NOT NULL
ACCESSCREATOR     CHAR(8)      NOT NULL
ACCESSNAME        CHAR(18)     NOT NULL
INDEXONLY         CHAR(1)      NOT NULL
SORTN_UNIQ        CHAR(1)      NOT NULL
SORTN_JOIN        CHAR(1)      NOT NULL
SORTN_ORDERBY     CHAR(1)      NOT NULL
SORTN_GROUPBY     CHAR(1)      NOT NULL
SORTC_UNIQ        CHAR(1)      NOT NULL
SORTC_JOIN        CHAR(1)      NOT NULL
SORTC_ORDERBY     CHAR(1)      NOT NULL
SORTC_GROUPBY     CHAR(1)      NOT NULL
TSLOCKMODE        CHAR(3)      NOT NULL
TIMESTAMP         CHAR(16)     NOT NULL
REMARKS           VARCHAR(254) NOT NULL
----------------25 column format----------------
PREFETCH          CHAR(1)      NOT NULL WITH DEFAULT
COLUMN_FN_EVAL    CHAR(1)      NOT NULL WITH DEFAULT
MIXOPSEQ          SMALLINT     NOT NULL WITH DEFAULT
----------------28 column format----------------
VERSION           VARCHAR(64)  NOT NULL WITH DEFAULT
COLLID            CHAR(18)     NOT NULL WITH DEFAULT
----------------30 column format----------------
ACCESS_DEGREE     SMALLINT
ACCESS_PGROUP_ID  SMALLINT
JOIN_DEGREE       SMALLINT
JOIN_PGROUP_ID    SMALLINT
----------------34 column format----------------
SORTC_PGROUP_ID   SMALLINT
SORTN_PGROUP_ID   SMALLINT
PARALLELISM_MODE  CHAR(1)
MERGE_JOIN_COLS   SMALLINT
CORRELATION_NAME  CHAR(18)
PAGE_RANGE        CHAR(1)      NOT NULL WITH DEFAULT
JOIN_TYPE         CHAR(1)      NOT NULL WITH DEFAULT
GROUP_MEMBER      CHAR(8)      NOT NULL WITH DEFAULT
IBM_SERVICE_DATA  VARCHAR(254) NOT NULL WITH DEFAULT
----------------43 column format----------------
WHEN_OPTIMIZE     CHAR(1)      NOT NULL WITH DEFAULT
QBLOCK_TYPE       CHAR(6)      NOT NULL WITH DEFAULT
BIND_TIME         TIMESTAMP    NOT NULL WITH DEFAULT
----------------46 column format----------------
OPTHINT           CHAR(8)      NOT NULL WITH DEFAULT
HINT_USED         CHAR(8)      NOT NULL WITH DEFAULT
PRIMARY_ACCESSTYPE CHAR(1)     NOT NULL WITH DEFAULT
----------------49 column format----------------
PARENT_QBLOCKNO   SMALLINT     NOT NULL WITH DEFAULT
TABLE_TYPE        CHAR(1)
----------------51 column format----------------
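For example, a minimal sketch of adding one of the 58-column-format columns to an existing plan table; the table name userid.PLAN_TABLE follows the naming used in Figure 228, and TABLE_ENCODE is just one of the columns that might be missing:

ALTER TABLE userid.PLAN_TABLE
  ADD TABLE_ENCODE CHAR(1) NOT NULL WITH DEFAULT;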
Table 101 on page 794 shows the descriptions of the columns in PLAN_TABLE.
Table 101. Descriptions of columns in PLAN_TABLE

QUERYNO
   A number intended to identify the statement being explained. For a row produced by an EXPLAIN statement, specify the number in the QUERYNO clause. For a row produced by non-EXPLAIN statements, specify the number using the QUERYNO clause, which is an optional part of the SELECT, INSERT, UPDATE, and DELETE statement syntax. Otherwise, DB2 assigns a number based on the line number of the SQL statement in the source program. When the values of QUERYNO are based on the statement number in the source program, values greater than 32767 are reported as 0. However, in a very long program, the value is not guaranteed to be unique. If QUERYNO is not unique, the value of TIMESTAMP is unique.
QBLOCKNO
   A number that identifies each query block within a query. The values of the numbers are not in any particular order, nor are they necessarily consecutive.
APPLNAME
   The name of the application plan for the row. Applies only to embedded EXPLAIN statements executed from a plan or to statements explained when binding a plan. Blank if not applicable.
PROGNAME
   The name of the program or package containing the statement being explained. Applies only to embedded EXPLAIN statements and to statements explained as the result of binding a plan or package. Blank if not applicable.
PLANNO
   The number of the step in which the query indicated in QBLOCKNO was processed. This column indicates the order in which the steps were executed.
METHOD
   A number (0, 1, 2, 3, or 4) that indicates the join method used for the step:
   0   First table accessed, continuation of previous table accessed, or not used.
   1   Nested loop join. For each row of the present composite table, matching rows of a new table are found and joined.
   2   Merge scan join. The present composite table and the new table are scanned in the order of the join columns, and matching rows are joined.
   3   Sorts needed by ORDER BY, GROUP BY, SELECT DISTINCT, UNION, a quantified predicate, or an IN predicate. This step does not access a new table.
   4   Hybrid join. The current composite table is scanned in the order of the join-column rows of the new table. The new table is accessed using list prefetch.
CREATOR
   The creator of the new table accessed in this step, blank if METHOD is 3.
TNAME
   The name of a table, materialized query table, created or declared temporary table, materialized view, or materialized table expression. The value is blank if METHOD is 3. The column can also contain the name of a table in the form DSNWFQB(qblockno). DSNWFQB(qblockno) is used to represent the intermediate result of a UNION ALL or an outer join that is materialized. If a view is merged, the name of the view does not appear.
TABNO
   Values are for IBM use only.
ACCESSTYPE
   The method of accessing the new table:
   I      By an index (identified in ACCESSCREATOR and ACCESSNAME)
   I1     By a one-fetch index scan
   M      By a multiple index scan (followed by MX, MI, or MU)
   MI     By an intersection of multiple indexes
   MU     By a union of multiple indexes
   MX     By an index scan on the index named in ACCESSNAME
   N      By an index scan when the matching predicate contains the IN keyword
   R      By a table space scan
   RW     By a work file scan of the result of a materialized user-defined table function
   T      By a sparse index (star join work files)
   V      By buffers for an INSERT statement within a SELECT
   blank  Not applicable to the current row
MATCHCOLS
   For ACCESSTYPE I, I1, N, or MX, the number of index keys used in an index scan; otherwise, 0.
ACCESSCREATOR
   For ACCESSTYPE I, I1, N, or MX, the creator of the index; otherwise, blank.
ACCESSNAME
   For ACCESSTYPE I, I1, N, or MX, the name of the index; otherwise, blank.
INDEXONLY
   Whether access to an index alone is enough to carry out the step, or whether data too must be accessed. Y=Yes; N=No.
SORTN_UNIQ
   Whether the new table is sorted to remove duplicate rows. Y=Yes; N=No.
SORTN_JOIN
   Whether the new table is sorted for join method 2 or 4. Y=Yes; N=No.
SORTN_ORDERBY
   Whether the new table is sorted for ORDER BY. Y=Yes; N=No.
SORTN_GROUPBY
   Whether the new table is sorted for GROUP BY. Y=Yes; N=No.
SORTC_UNIQ
   Whether the composite table is sorted to remove duplicate rows. Y=Yes; N=No.
SORTC_JOIN
   Whether the composite table is sorted for join method 1, 2, or 4. Y=Yes; N=No.
SORTC_ORDERBY
   Whether the composite table is sorted for an ORDER BY clause or a quantified predicate. Y=Yes; N=No.
SORTC_GROUPBY
   Whether the composite table is sorted for a GROUP BY clause. Y=Yes; N=No.
TSLOCKMODE
   An indication of the mode of lock to be acquired on either the new table, or its table space or table space partitions. If the isolation can be determined at bind time, the values are:
   IS    Intent share lock
   IX    Intent exclusive lock
   S     Share lock
   U     Update lock
   X     Exclusive lock
   SIX   Share with intent exclusive lock
   N     UR isolation; no lock
   If the isolation cannot be determined at bind time, the lock mode determined by the isolation at run time is shown by the following values:
   NS    For UR isolation, no lock; for CS, RS, or RR, an S lock.
   NIS   For UR isolation, no lock; for CS, RS, or RR, an IS lock.
   NSS   For UR isolation, no lock; for CS or RS, an IS lock; for RR, an S lock.
   SS    For UR, CS, or RS isolation, an IS lock; for RR, an S lock.
   The data in this column is right justified. For example, IX appears as a blank followed by I followed by X. If the column contains a blank, then no lock is acquired.
TIMESTAMP
   Usually, the time at which the row is processed, to the last .01 second. If necessary, DB2 adds .01 second to the value to ensure that rows for two successive queries have different values.
REMARKS
   A field into which you can insert any character string of 762 or fewer characters.
PREFETCH
   Whether data pages are to be read in advance by prefetch:
   S      Pure sequential prefetch
   L      Prefetch through a page list
   D      Possible candidate for dynamic prefetch
   blank  Unknown or no prefetch
COLUMN_FN_EVAL
   When an SQL aggregate function is evaluated. R = while the data is being read from the table or index; S = while performing a sort to satisfy a GROUP BY clause; blank = after data retrieval and after any sorts.
MIXOPSEQ
   The sequence number of a step in a multiple index operation.
   1, 2, ... n   For the steps of the multiple index procedure (ACCESSTYPE is MX, MI, or MU).
   0             For any other rows (ACCESSTYPE is I, I1, M, N, R, or blank).
VERSION
   The version identifier for the package. Applies only to an embedded EXPLAIN statement executed from a package or to a statement that is explained when binding a package. Blank if not applicable.
COLLID
   The collection ID for the package. Applies only to an embedded EXPLAIN statement that is executed from a package or to a statement that is explained when binding a package. Blank if not applicable. The value DSNDYNAMICSQLCACHE indicates that the row is for a cached statement.

Note: The following nine columns, from ACCESS_DEGREE through CORRELATION_NAME, contain the null value if the plan or package was bound using a plan table with fewer than 43 columns. Otherwise, each of them can contain null if the method it refers to does not apply.

ACCESS_DEGREE
   The number of parallel tasks or operations activated by a query. This value is determined at bind time; the actual number of parallel operations used at execution time could be different. This column contains 0 if there is a host variable.
ACCESS_PGROUP_ID
   The identifier of the parallel group for accessing the new table. A parallel group is a set of consecutive operations, executed in parallel, that have the same number of parallel tasks. This value is determined at bind time; it could change at execution time.
JOIN_DEGREE
   The number of parallel operations or tasks used in joining the composite table with the new table. This value is determined at bind time and can be 0 if there is a host variable. The actual number of parallel operations or tasks used at execution time could be different.
JOIN_PGROUP_ID
   The identifier of the parallel group for joining the composite table with the new table. This value is determined at bind time; it could change at execution time.
SORTC_PGROUP_ID
   The parallel group identifier for the parallel sort of the composite table.
SORTN_PGROUP_ID
   The parallel group identifier for the parallel sort of the new table.
PARALLELISM_MODE
   The kind of parallelism, if any, that is used at bind time:
   I   Query I/O parallelism
   C   Query CP parallelism
   X   Sysplex query parallelism
MERGE_JOIN_COLS
   The number of columns that are joined during a merge scan join (METHOD=2).
CORRELATION_NAME
   The correlation name of a table or view that is specified in the statement. If there is no correlation name, the column is null.
PAGE_RANGE
   Whether the table qualifies for page range screening, so that plans scan only the partitions that are needed. Y = Yes; blank = No.
JOIN_TYPE
   The type of join:
   F      FULL OUTER JOIN
   L      LEFT OUTER JOIN
   S      STAR JOIN
   blank  INNER JOIN or no join
   RIGHT OUTER JOIN converts to a LEFT OUTER JOIN when you use it, so that JOIN_TYPE contains L.
GROUP_MEMBER
   The member name of the DB2 that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.
IBM_SERVICE_DATA
   Values are for IBM use only.
WHEN_OPTIMIZE
   When the access path was determined:
   blank  At bind time, using a default filter factor for any host variables, parameter markers, or special registers.
   B      At bind time, using a default filter factor for any host variables, parameter markers, or special registers; however, the statement is reoptimized at run time using input variable values for input host variables, parameter markers, or special registers. The bind option REOPT(ALWAYS) or REOPT(ONCE) must be specified for reoptimization to occur.
   R      At run time, using input variables for any host variables, parameter markers, or special registers. The bind option REOPT(ALWAYS) or REOPT(ONCE) must be specified for this to occur.
QBLOCK_TYPE
   For each query block, an indication of the type of SQL operation performed. For the outermost query, this column identifies the statement type. Possible values:
   SELECT   SELECT
   INSERT   INSERT
   UPDATE   UPDATE
   DELETE   DELETE
   SELUPD   SELECT with FOR UPDATE OF
   DELCUR   DELETE WHERE CURRENT OF CURSOR
   UPDCUR   UPDATE WHERE CURRENT OF CURSOR
   CORSUB   Correlated subselect or fullselect
   NCOSUB   Noncorrelated subselect or fullselect
   TABLEX   Table expression
   TRIGGR   WHEN clause on CREATE TRIGGER
   UNION    UNION
   UNIONA   UNION ALL
BIND_TIME
   For static SQL statements, the time at which the plan or package for this statement or query block was bound. For cached dynamic SQL statements, the time at which the statement entered the cache. For static and cached dynamic SQL statements, this is a full-precision timestamp value. For non-cached dynamic SQL statements, this is the value contained in the TIMESTAMP column of PLAN_TABLE appended by 4 zeroes.
OPTHINT
   A string that you use to identify this row as an optimization hint for DB2. DB2 uses this row as input when choosing an access path.
HINT_USED
   If DB2 used one of your optimization hints, it puts the identifier for that hint (the value in OPTHINT) in this column.
PRIMARY_ACCESSTYPE
   Indicates whether direct row access will be attempted first:
   D      DB2 will try to use direct row access. If DB2 cannot use direct row access at run time, it uses the access path described in the ACCESSTYPE column of PLAN_TABLE.
   blank  DB2 will not try to use direct row access.
PARENT_QBLOCKNO
   A number that indicates the QBLOCKNO of the parent query block.
TABLE_TYPE
   The type of new table:
   B   Buffers for an INSERT statement within a SELECT
   C   Common table expression
   F   Table function
   M   Materialized query table
   Q   Temporary intermediate result table (not materialized). For the name of a view or nested table expression, a value of Q indicates that the materialization was virtual and not actual. Materialization can be virtual when the view or nested table expression definition contains a UNION ALL that is not distributed.
   R   Recursive common table expression
   T   Table
   W   Work file
   The value of the column is null if the query uses GROUP BY, ORDER BY, or DISTINCT, which requires an implicit sort.
TABLE_ENCODE
   The encoding scheme of the table. If the table has a single CCSID set, possible values are:
   A   ASCII
   E   EBCDIC
   U   Unicode
   If the table contains multiple CCSID sets, the value of the column is M.
TABLE_SCCSID
   The SBCS CCSID value of the table. If column TABLE_ENCODE is M, the value is 0.
TABLE_MCCSID
   The mixed CCSID value of the table. If column TABLE_ENCODE is M, the value is 0.
TABLE_DCCSID
   The DBCS CCSID value of the table. If column TABLE_ENCODE is M, the value is 0.
ROUTINE_ID
   Values are for IBM use only.
CTEREF
   If the referenced table is a common table expression, the value is the top-level query block number.
STMTTOKEN
   User-specified statement token.
You can execute EXPLAIN either statically from an application program, or dynamically, using QMF or SPUFI. For instructions and for details of the authorization that you need on PLAN_TABLE, see DB2 SQL Reference.
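For example, from SPUFI or QMF you might issue a dynamic EXPLAIN statement such as the following sketch; the QUERYNO value and the query itself are only illustrative (the query uses the DSN8810 sample tables):

EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT EMPNO, LASTNAME
    FROM DSN8810.EMP
    WHERE WORKDEPT = 'D11';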
The result of the ORDER BY clause shows whether there are:
v Multiple QBLOCKNOs within a QUERYNO
v Multiple PLANNOs within a QBLOCKNO
v Multiple MIXOPSEQs within a PLANNO

All rows with the same non-zero value for QBLOCKNO and the same value for QUERYNO relate to a step within the query. QBLOCKNOs are not necessarily executed in the order shown in PLAN_TABLE. But within a QBLOCKNO, the PLANNO column gives the substeps in the order they execute. For each substep, the TNAME column identifies the table accessed. Sorts can be shown as part of a table access or as a separate step.

What if QUERYNO=0? For entries that contain QUERYNO=0, use the timestamp, which is guaranteed to be unique, to distinguish individual statements.
COLLID gives the COLLECTION name, and PROGNAME gives the PACKAGE_ID. The following query to a plan table returns the rows for all the explainable statements in a package in their logical order:
SELECT * FROM JOE.PLAN_TABLE
  WHERE PROGNAME = 'PACK1'
    AND COLLID = 'COLL1'
    AND VERSION = 'PROD1'
  ORDER BY QUERYNO, QBLOCKNO, PLANNO, MIXOPSEQ;
As explained in this section, they can be answered in terms of values in columns of a plan table.
DB2 processes the query by performing the following steps:
1. DB2 retrieves all the qualifying record identifiers (RIDs) where C1=1, by using index IX1.
2. DB2 retrieves all the qualifying RIDs where C2=1, by using index IX2. The intersection of these lists is the final set of RIDs.
3. DB2 accesses the data pages that are needed to retrieve the qualified rows by using the final RID list.

The plan table for this example is shown in Table 102.
Table 102. PLAN_TABLE output for example with intersection (AND) operator

TNAME  ACCESSTYPE  MATCHCOLS  ACCESSNAME  INDEXONLY  PREFETCH  MIXOPSEQ
T      M           0                      N          L         0
T      MX          1          IX1         Y                    1
T      MX          1          IX2         Y                    2
T      MI          0                      N                    3
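The access path in Table 102 corresponds to a query of roughly the following form, assuming that IX1 is an index on C1 and IX2 is an index on C2 (a sketch; the exact statement and index definitions are assumptions):

SELECT * FROM T
  WHERE C1 = 1
    AND C2 = 1;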
In this case, the same index can be used more than once in a multiple index access because more than one predicate could be matching. DB2 processes the query by performing the following steps:
1. DB2 retrieves all RIDs where C1 is between 100 and 199, using index IX1.
2. DB2 retrieves all RIDs where C1 is between 500 and 599, again using IX1. The union of those lists is the final set of RIDs.
3. DB2 retrieves the qualified rows by using the final RID list.
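These steps correspond to a query of roughly the following form, assuming an index IX1 on column C1 (a sketch, not the exact statement):

SELECT * FROM T
  WHERE C1 BETWEEN 100 AND 199
     OR C1 BETWEEN 500 AND 599;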
The index XEMP5 is the chosen access path for this query, with MATCHCOLS = 3. Two equal predicates are on the first two columns and a range predicate is on the third column. Although the index has four columns, only three of them can be considered matching columns.
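A hypothetical index and query of the shape just described might look like the following sketch; the column names are borrowed from the EMP table that is used later in this chapter, and the index definition and predicate values are assumptions:

CREATE INDEX XEMP5 ON EMP (JOB, AGE, SAL, EMPNO);

SELECT * FROM EMP
  WHERE JOB = 'MANAGER'
    AND AGE = 38
    AND SAL > 50000;

The two equal predicates match the first two index columns, the range predicate matches the third, and the fourth column is not matched, so MATCHCOLS = 3.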
When an SQL application uses index-only access for a ROWID column, the application claims the table space or table space partition. As a result, contention may occur between the SQL application and a utility that drains the table space or partition. Index-only access to a table for a ROWID column is not possible if the associated table space or partition is in an incompatible restrictive state. For example, an SQL application can make a read claim on the table space only if the restrictive state allows readers.
Searching for propagated rows: If rows are propagated from one table to another, do not expect to use the same row ID value from the source table to search for the same row in the target table, or vice versa. This does not work when direct row access is the access path chosen.
Example: Assume that the host variable in the following statement contains a row ID from SOURCE:
SELECT * FROM TARGET WHERE ID = :hv_rowid
Because the row ID location is not the same as in the source table, DB2 will probably not find that row. Search on another column to retrieve the row you want.
Reverting to ACCESSTYPE
Although DB2 might plan to use direct row access, circumstances can cause DB2 to not use direct row access at run time. DB2 remembers the location of the row as of the time it is accessed. However, that row can change locations (such as after a REORG) between the first and second time it is accessed, which means that DB2 cannot use direct row access to find the row on the second access attempt. Instead of using direct row access, DB2 uses the access path that is shown in the ACCESSTYPE column of PLAN_TABLE.

If the predicate you are using to do direct row access is not indexable and if DB2 is unable to use direct row access, then DB2 uses a table space scan to find the row. This can have a profound impact on the performance of applications that rely on direct row access. Write your applications to handle the possibility that direct row access might not be used. Some options are to:
v Ensure that your application does not try to remember ROWID columns across reorganizations of the table space. When your application commits, it releases its claim on the table space; it is possible that a REORG can run and move the row, which disables direct row access. Plan your commit processing accordingly; use the returned row ID value before committing, or re-select the row ID value after a commit is issued. If you are storing ROWID columns from another table, update those values after the table with the ROWID column is reorganized.
v Create an index on the ROWID column, so that DB2 can use the index if direct row access is disabled (a sketch follows the example below).
v Supplement the ROWID column predicate with another predicate that enables DB2 to use an existing index on the table. For example, after reading a row, an application might perform the following update:
EXEC SQL UPDATE EMP
    SET SALARY = :hv_salary + 1200
    WHERE EMP_ROWID = :hv_emp_rowid
      AND EMPNO = :hv_empno;
If an index exists on EMPNO, DB2 can use index access if direct access fails. The additional predicate ensures DB2 does not revert to a table space scan.
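For the second option in the list above, creating an index on the ROWID column, a minimal sketch follows; the index name is illustrative, and the table and column come from the surrounding example:

CREATE INDEX XEMPRID ON EMP (EMP_ROWID);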
direct row access is used. If direct row access fails, DB2 does not revert to RID list processing; instead it reverts to the backup access type.
/**********************************************************/
/* Update columns SALARY, PICTURE, and RESUME. Use the    */
/* ROWID value you obtained in the previous statement     */
/* to access the row you want to update.                  */
/* smiley_face and update_resume are                      */
/* user-defined functions that are not shown here.        */
/**********************************************************/
EXEC SQL UPDATE EMPDATA
  SET SALARY = :hv_salary + 1200,
      PICTURE = smiley_face(:hv_picture),
      RESUME = update_resume(:hv_resume)
  WHERE EMP_ROWID = :hv_emp_rowid;

/**********************************************************/
/* Use the ROWID value to obtain the employee ID from the */
/* same record.                                           */
/**********************************************************/
EXEC SQL SELECT E.ID INTO :hv_id
  FROM EMPDATA E
  WHERE E.EMP_ROWID = :hv_emp_rowid;

/**********************************************************/
/* Use the ROWID value to delete the employee record      */
/* from the table.                                        */
/**********************************************************/
EXEC SQL DELETE FROM EMPDATA
  WHERE EMP_ROWID = :hv_emp_rowid;

Figure 230. Example of using a row ID value for direct row access (Part 2 of 2)
Assume that table T has a partitioned index on column C1 and that values of C1 between 2002 and 3280 all appear in partitions 3 and 4 and the values between 6000 and 8000 appear in partitions 8 and 9. Assume also that T has another index on column C2. DB2 could choose any of these access methods:
v A matching index scan on column C1. The scan reads index values and data only from partitions 3, 4, 8, and 9. (PAGE_RANGE=N)
v A matching index scan on column C2. (DB2 might choose that if few rows have C2=6.) The matching index scan reads all RIDs for C2=6 from the index on C2 and corresponding data pages from partitions 3, 4, 8, and 9. (PAGE_RANGE=Y)
v A table space scan on T. DB2 avoids reading data pages from any partitions except 3, 4, 8, and 9. (PAGE_RANGE=Y)
Non-null values in columns ACCESS_DEGREE and JOIN_DEGREE indicate to what degree DB2 plans to use parallel operations. At execution time, however, DB2 might not actually use parallelism, or it might use fewer operations in parallel than were originally planned. For a more complete description, see Chapter 28, Parallel operations and query performance, on page 847. For more information about Sysplex query parallelism, see Chapter 6 of DB2 Data Sharing: Planning and Administration.
METHOD 3 sorts: These are used for ORDER BY, GROUP BY, SELECT DISTINCT, UNION, or a quantified predicate. A quantified predicate is col = ANY (fullselect) or col = SOME (fullselect). They are indicated on a separate row. A single row of the plan table can indicate two sorts of a composite table, but only one sort is actually done. SORTC_UNIQ and SORTC_ORDERBY: SORTC_UNIQ indicates a sort to remove duplicates, as might be needed by a SELECT statement with DISTINCT or UNION. SORTC_ORDERBY usually indicates a sort for an ORDER BY clause. But SORTC_UNIQ and SORTC_ORDERBY also indicate when the results of a noncorrelated subquery are sorted, both to remove duplicates and to order the results. One sort does both the removal and the ordering.
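For example, a query that combines DISTINCT with a noncorrelated subquery in a quantified predicate, such as the following sketch against the DSN8810 sample tables, can involve sorts of this kind (the specific tables and predicate are only illustrative):

SELECT DISTINCT WORKDEPT
  FROM DSN8810.EMP
  WHERE WORKDEPT = ANY (SELECT DEPTNO
                          FROM DSN8810.DEPT
                          WHERE ADMRDEPT = 'A00');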
Generally, values of R and S are considered better for performance than a blank. Use variance and standard deviation with care: The VARIANCE and STDDEV functions are always evaluated late (that is, COLUMN_FN_EVAL is blank). This causes other functions in the same query block to be evaluated late as well. For example, in the following query, the sum function is evaluated later than it would be if the variance function was not present:
SELECT SUM(C1), VARIANCE(C1) FROM T1;
Table 104 shows the corresponding plan table for the WHEN clause.
Table 104. Plan table for the WHEN clause

QBLOCKNO  PLANNO  TNAME  ACCESSTYPE  QBLOCK_TYPE  PARENT_QBLOCKNO
1         1                          TRIGGR       0
2         1       T      R           NCOSUB       1
In this case, at a minimum, every row in T must be examined to determine whether the value of C1 matches the given value.
Two matching columns occur in this example. The first one comes from the predicate C1=1, and the second one comes from C2>1. The range predicate on C2 prevents C3 from becoming a matching column.
Index screening
In index screening, predicates are specified on index key columns but are not part of the matching columns. Those predicates improve the index access by reducing the number of rows that qualify while searching the index. For example, with an index on T(C1,C2,C3,C4) in the following SQL statement, C3>0 and C4=2 are index screening predicates.
SELECT * FROM T
  WHERE C1 = 1
    AND C3 > 0
    AND C4 = 2
    AND C5 = 8;
The predicates can be applied on the index, but they are not matching predicates. C5=8 is not an index screening predicate, and it must be evaluated when data is retrieved. The value of MATCHCOLS in the plan table is 1. EXPLAIN does not directly tell when an index is screened; however, if MATCHCOLS is less than the number of index key columns, it indicates that index screening is possible.
The plan table shows MATCHCOLS = 3 and ACCESSTYPE = N. The IN-list scan is performed as the following three matching index scans:
(C1=1,C2=1,C3>0), (C1=1,C2=2,C3>0), (C1=1,C2=3,C3>0)
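These scans correspond to a query of roughly the following shape (a sketch of the kind of statement that produces ACCESSTYPE=N with MATCHCOLS=3; the exact statement is an assumption):

SELECT * FROM T
  WHERE C1 = 1
    AND C2 IN (1, 2, 3)
    AND C3 > 0;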
Parallelism is supported for queries that involve IN-list index access. These queries used to run sequentially in previous releases of DB2, although parallelism could have been used when the IN-list access was for the inner table of a parallel group. Now, in environments in which parallelism is enabled, you can see a reduction in elapsed time for queries that involve IN-list index access for the outer table of a parallel group.
as an extension to list prefetch with more complex RID retrieval operations in its first phase. The complex operators are union and intersection. DB2 chooses multiple index access for the following query:
SELECT * FROM EMP
  WHERE (AGE = 34)
     OR (AGE = 40 AND JOB = 'MANAGER');
For this query:
v EMP is a table with columns EMPNO, EMPNAME, DEPT, JOB, AGE, and SAL.
v EMPX1 is an index on EMP with key column AGE.
v EMPX2 is an index on EMP with key column JOB.

The plan table contains a sequence of rows describing the access. For this query, ACCESSTYPE uses the following values:

Value  Meaning
M      Start of multiple index access processing
MX     Indexes are to be scanned for later union or intersection
MI     An intersection (AND) is performed
MU     A union (OR) is performed
The following steps relate to the previous query and the values shown for the plan table in Table 106:
1. Index EMPX1, with matching predicate AGE = 34, provides a set of candidates for the result of the query. The value of MIXOPSEQ is 1.
2. Index EMPX1, with matching predicate AGE = 40, also provides a set of candidates for the result of the query. The value of MIXOPSEQ is 2.
3. Index EMPX2, with matching predicate JOB = 'MANAGER', also provides a set of candidates for the result of the query. The value of MIXOPSEQ is 3.
4. The first intersection (AND) is done, and the value of MIXOPSEQ is 4. This MI removes the two previous candidate lists (produced by MIXOPSEQs 2 and 3) by intersecting them to form an intermediate candidate list, IR1, which is not shown in PLAN_TABLE.
5. The last step, where the value of MIXOPSEQ is 5, is a union (OR) of the two remaining candidate lists, which are IR1 and the candidate list produced by MIXOPSEQ 1. This final union gives the result for the query.
Table 106. Plan table output for a query that uses multiple indexes. Depending on the filter factors of the predicates, the access steps can appear in a different order.

PLANNO  TNAME  ACCESSTYPE  MATCHCOLS  ACCESSNAME  PREFETCH  MIXOPSEQ
1       EMP    M           0                      L         0
1       EMP    MX          1          EMPX1                 1
1       EMP    MX          1          EMPX1                 2
1       EMP    MX          1          EMPX2                 3
1       EMP    MI          0                                4
1       EMP    MU          0                                5
In this example, the steps in the multiple index access follow the physical sequence of the predicates in the query. This is not always the case. The multiple index steps are arranged in an order that uses RID pool storage most efficiently and for the least amount of time.
Unique Index1: (C1, C2)
Unique Index2: (C2, C1, C3)

SELECT C3 FROM T
  WHERE C1 = 1
    AND C2 = 5;
Index1 is a fully matching equal unique index. However, Index2 is also an equal unique index even though it is not fully matching. Index2 is the better choice because, in addition to being equal and unique, it also provides index-only access.
A join operation can involve more than two tables. In these cases, the operation is carried out in a series of steps. For non-star joins, each step joins only two tables. Example: Figure 231 shows a two-step join operation.
[Figure 231 (diagram). A two-step join operation: composite table TJ is joined with new table TK by a nested loop join (Method 1); the resulting composite table, placed in a work file, is then joined with new table TL.]
DB2 performs the following steps to complete the join operation:
1. Accesses the first table (METHOD=0), named TJ (TNAME), which becomes the composite table in step 2.
2. Joins the new table TK to TJ, forming a new composite table.
3. Sorts the new table TL (SORTN_JOIN=Y) and the composite table (SORTC_JOIN=Y), and then joins the two sorted tables.
4. Sorts the final composite table (TNAME is blank) into the desired order (SORTC_ORDERBY=Y).

Table 107 and Table 108 show a subset of columns in a plan table for this join operation.
Table 107. Subset of columns for a two-step join operation

METHOD  TNAME  ACCESSTYPE  MATCHCOLS  ACCESSNAME  INDEXONLY  TSLOCKMODE
0       TJ     I           1          TJX1        N          IS
1       TK     I           1          TKX1        N          IS
2       TL     I           0          TLX1        Y          S
3                          0                      N
Table 108. Subset of columns for a two-step join operation

SORTN_  SORTN_  SORTN_   SORTN_   SORTC_  SORTC_  SORTC_   SORTC_
UNIQ    JOIN    ORDERBY  GROUPBY  UNIQ    JOIN    ORDERBY  GROUPBY
N       N       N        N        N       N       N        N
N       N       N        N        N       N       N        N
N       Y       N        N        N       Y       N        N
N       N       N        N        N       N       Y        N
Definitions: A join operation typically matches a row of one table with a row of another on the basis of a join condition. For example, the condition might specify that the value in column A of one table equals the value of column X in the other table (WHERE T1.A = T2.X).

Two kinds of joins differ in what they do with rows in one table that do not match on the join condition with any row in the other table:
v An inner join discards rows of either table that do not match any row of the other table.
v An outer join keeps unmatched rows of one or the other table, or of both. A row in the composite table that results from an unmatched row is filled out with null values.

As Table 109 shows, outer joins are distinguished by which unmatched rows they keep.
Table 109. Join types and kept unmatched rows

Outer join type    Included unmatched rows
Left outer join    The composite (outer) table
Right outer join   The new (inner) table
Full outer join    Both tables
Example: Suppose that you issue the following statement to explain an outer join:
EXPLAIN PLAN SET QUERYNO = 10 FOR
  SELECT PROJECT, COALESCE(PROJECTS.PROD#, PRODNUM) AS PRODNUM,
         PRODUCT, PART, UNITS
    FROM PROJECTS LEFT JOIN
         (SELECT PART,
                 COALESCE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM,
                 PRODUCTS.PRODUCT
            FROM PARTS FULL OUTER JOIN PRODUCTS
                 ON PARTS.PROD# = PRODUCTS.PROD#) AS TEMP
         ON PROJECTS.PROD# = PRODNUM
Table 110 shows a subset of the plan table for the outer join.
Table 110. Plan table output for an example with outer joins

QUERYNO  QBLOCKNO  PLANNO  TNAME     JOIN_TYPE
10       1         1       PROJECTS
10       1         2       TEMP      L
10       2         1       PRODUCTS
10       2         2       PARTS     F
Column JOIN_TYPE identifies the type of outer join with one of these values:
v F for FULL OUTER JOIN
v L for LEFT OUTER JOIN
v Blank for INNER JOIN or no join

At execution, DB2 converts every right outer join to a left outer join; thus JOIN_TYPE never identifies a right outer join specifically.

Materialization with outer join: Sometimes DB2 has to materialize a result table when an outer join is used in conjunction with other joins, views, or nested table expressions. You can tell when this happens by looking at the TABLE_TYPE and TNAME columns of the plan table. When materialization occurs, TABLE_TYPE
contains a W, and TNAME shows the name of the materialized table as DSNWFQB(xx), where xx is the number of the query block (QBLOCKNO) that produced the work file.
SELECT A, B, X, Y
  FROM (SELECT * FROM OUTERT WHERE A=10)
       LEFT JOIN INNERT ON B=X;

For each qualifying row of the outer table, DB2 finds all matching rows in the inner table, by a table space or index scan.

  OUTERT          INNERT          Composite
  A    B          X    Y          A    B    X    Y
  10   3          5    A          10   3    3    B
  10   1          3    B          10   1    1    D
  10   2          2    C          10   2    2    C
  10   6          1    D          10   2    2    E
  10   1          2    E          10   6    -    -
                  9    F          10   1    1    D
                  7    G

Figure 232. Left outer join using a nested loop join

The nested loop join produces this result, preserving the values of the outer table.
Method of joining
DB2 scans the composite (outer) table. For each row in that table that qualifies (by satisfying the predicates on that table), DB2 searches for matching rows of the new (inner) table. It concatenates any it finds with the current row of the composite table. If no rows match the current row, then:
v For an inner join, DB2 discards the current row.
v For an outer join, DB2 concatenates a row of null values.

Stage 1 and stage 2 predicates eliminate unqualified rows during the join. (For an explanation of those types of predicate, see Stage 1 and stage 2 predicates on page 737.) DB2 can scan either table using any of the available access methods, including table space scan.
Performance considerations
The nested loop join repetitively scans the inner table. That is, DB2 scans the outer table once, and scans the inner table as many times as the number of qualifying rows in the outer table. Therefore, the nested loop join is usually the most efficient join method when the values of the join column passed to the inner table are in sequence and the index on the join column of the inner table is clustered, or the number of rows retrieved in the inner table through the index is small.
v Predicates with small filter factors reduce the number of qualifying rows in the outer table.
v An efficient, highly clustered index exists on the join columns of the inner table.
v The number of data pages accessed in the inner table is small.
v No join columns exist. Hybrid and sort merge joins require join columns; nested loop joins do not.

Example: left outer join: Figure 232 on page 818 illustrates a nested loop for a left outer join. The outer join preserves the unmatched row in OUTERT with values A=10 and B=6. The same join method for an inner join differs only in discarding that row.

Example: one-row table priority: For a case like the following example, with a unique index on T1.C2, DB2 detects that T1 has only one row that satisfies the search condition. DB2 makes T1 the first table in a nested loop join.
SELECT * FROM T1, T2 WHERE T1.C1 = T2.C1 AND T1.C2 = 5;
Example: Cartesian join with small tables first: A Cartesian join is a form of nested loop join in which there are no join predicates between the two tables. DB2 usually avoids a Cartesian join, but sometimes it is the most efficient method, as in the following example. The query uses three tables: T1 has 2 rows, T2 has 3 rows, and T3 has 10 million rows.
SELECT * FROM T1, T2, T3 WHERE T1.C1 = T3.C1 AND T2.C2 = T3.C2 AND T3.C3 = 5;
Join predicates are between T1 and T3 and between T2 and T3. There is no join predicate between T1 and T2. Assume that 5 million rows of T3 have the value C3=5. Processing time is large if T3 is the outer table of the join and tables T1 and T2 are accessed for each of 5 million rows. However, if all rows from T1 and T2 are joined, without a join predicate, the 5 million rows are accessed only six times, once for each row in the Cartesian join of T1 and T2. It is difficult to say which access path is the most efficient. DB2 evaluates the different options and could decide to access the tables in the sequence T1, T2, T3.

Sorting the composite table: Your plan table could show a nested loop join that includes a sort on the composite table. DB2 might sort the composite table (the outer table in Figure 232) if the following conditions exist:
v The join columns in the composite table and the new table are not in the same sequence.
v The join column of the composite table has no index.
v The index is poorly clustered.

Nested loop join with a sorted composite table has the following performance advantages:
v Uses sequential detection efficiently to prefetch data pages of the new table, reducing the number of synchronous I/O operations and the elapsed time.
v Avoids repetitive full probes of the inner table index by using the index look-aside.
Method of joining
Figure 233 illustrates a merge scan join.
SELECT A, B, X, Y
  FROM OUTER, INNER
  WHERE A=10 AND B=X;

Merge scan join: condense and sort the outer table (or access it through an index on column B), and condense and sort the inner table.

  OUTER           INNER           Composite
  A    B          X    Y          A    B    X    Y
  10   1          1    D          10   1    1    D
  10   1          2    C          10   1    1    D
  10   2          2    E          10   2    2    C
  10   3          3    B          10   2    2    E
  10   6          5    A          10   3    3    B
                  7    G
                  9    F

Figure 233. Example of a merge scan join
DB2 scans both tables in the order of the join columns. If no efficient indexes on the join columns provide the order, DB2 might sort the outer table, the inner table, or both. The inner table is put into a work file; the outer table is put into a work file only if it must be sorted. When a row of the outer table matches a row of the inner table, DB2 returns the combined rows.

DB2 then reads another row of the inner table that might match the same row of the outer table and continues reading rows of the inner table as long as there is a match. When there is no longer a match, DB2 reads another row of the outer table.
v If that row has the same value in the join column, DB2 reads again the matching group of records from the inner table. Thus, a group of duplicate records in the inner table is scanned as many times as there are matching records in the outer table.
v If the outer row has a new value in the join column, DB2 searches ahead in the inner table. It can find any of the following rows:
  - Unmatched rows in the inner table, with lower values in the join column.
  - A new matching inner row. DB2 then starts the process again.
  - An inner row with a higher value of the join column. Now the row of the outer table is unmatched. DB2 searches ahead in the outer table, and can find any of the following rows:
    - Unmatched rows in the outer table.
    - A new matching outer row. DB2 then starts the process again.
    - An outer row with a higher value of the join column. Now the row of the inner table is unmatched, and DB2 resumes searching the inner table.

If DB2 finds an unmatched row:
v For an inner join, DB2 discards the row.
v For a left outer join, DB2 discards the row if it comes from the inner table and keeps it if it comes from the outer table.
v For a full outer join, DB2 keeps the row.

When DB2 keeps an unmatched row from a table, it concatenates a set of null values as if they came from a matching row of the other table. A merge scan join must be used for a full outer join.
Performance considerations
A full outer join by this method uses all predicates in the ON clause to match the two tables and reads every row at the time of the join. Inner and left outer joins use only stage 1 predicates in the ON clause to match the tables. If your tables match on more than one column, it is generally more efficient to put all the predicates for the matches in the ON clause, rather than to leave some of them in the WHERE clause. For an inner join, DB2 can derive extra predicates for the inner table at bind time and apply them to the sorted outer table to be used at run time. The predicates can reduce the size of the work file needed for the inner table. If DB2 has used an efficient index on the join columns, to retrieve the rows of the inner table, those rows are already in sequence. DB2 puts the data directly into the work file without sorting the inner table, which reduces the elapsed time.
[Figure 234 (diagram). Hybrid join of outer table OUTER (columns A and B, with an index on B) with an inner table that is accessed through an index on its join column X (X=B) and list prefetch. RIDs from the inner table's index are sorted into a RID list, the matching rows are retrieved by list prefetch, and the final composite table contains columns A, B, X, and Y.]
Method of joining
The method requires obtaining RIDs in the order needed to use list prefetch. The steps are shown in Figure 234. In that example, both the outer table (OUTER) and the inner table (INNER) have indexes on the join columns.

DB2 performs the following steps:
1. Scans the outer table (OUTER).
2. Joins the outer table with RIDs from the index on the inner table. The result is the phase 1 intermediate table. The index of the inner table is scanned for every row of the outer table.
3. Sorts the data in the outer table and the RIDs, creating a sorted RID list and the phase 2 intermediate table. The sort is indicated by a value of Y in column SORTN_JOIN of the plan table. If the index on the inner table is a well-clustered index, DB2 can skip this sort; the value in SORTN_JOIN is then N.
4. Retrieves the data from the inner table, using list prefetch (PREFETCH=L).
5. Concatenates the data from the inner table and the phase 2 intermediate table to create the final composite table.
Performance considerations
Hybrid join uses list prefetch more efficiently than nested loop join, especially if there are indexes on the join predicate with low cluster ratios. It also processes duplicates more efficiently because the inner table is scanned only once for each set of duplicate values in the join column of the outer table. If the index on the inner table is highly clustered, there is no need to sort the intermediate table (SORTN_JOIN=N). The intermediate table is placed in a table in memory rather than in a work file.
Figure 235. Star schema with a fact table and dimension tables
Unlike the steps in the other join methods (nested loop join, merge scan join, and hybrid join) in which only two tables are joined in each step, a step in the star join method can involve three or more tables. Dimension tables are joined to the fact table via a multi-column index that is defined on the fact table. Therefore, having a well-defined, multi-column index on the fact table is critical for efficient star join processing.
You can create even more complex star schemas by normalizing a dimension table into several tables. The normalized dimension table is called a snowflake. Only one of the tables in the snowflake joins directly with the fact table.
You can set the subsystem parameter STARJOIN by using the STAR JOIN QUERIES field on the DSNTIP8 installation panel.
v The number of tables in the star schema query block, including the fact table, dimension tables, and snowflake tables, meets the requirements that are specified by the value of subsystem parameter SJTABLES. The value of SJTABLES is considered only if the subsystem parameter STARJOIN qualifies the query for star join. The values of SJTABLES are:
1, 2, or 3
   Star join is always considered.
4 to 255
   Star join is considered if the query block has at least the specified number of tables. If star join is enabled, 10 is the default value.
256 and greater
   Star join will never be considered.

Star join, which can reduce bind time significantly, does not provide optimal performance in all cases. Performance of star join depends on a number of factors such as the available indexes on the fact table, the cluster ratio of the indexes, and the selectivity of rows through local and join predicates. Follow these general guidelines for setting the value of SJTABLES:
v If you have queries that reference fewer than 10 tables in a star schema database and you want to make the star join method applicable to all qualified queries, set the value of SJTABLES to the minimum number of tables used in queries that you want to be considered for star join.
  Example: Suppose that you query a star schema database that has one fact table and three dimension tables. You should set SJTABLES to 4.
v If you want to use star join for relatively large queries that reference a star schema database but are not necessarily suitable for star join, use the default. The star join method will be considered for all qualified queries that have 10 or more tables.
v If you have queries that reference a star schema database but, in general, do not want to use star join, consider setting SJTABLES to a higher number, such as 15, if you want to drastically cut the bind time for large queries and avoid a potential bind time SQL return code -101 for large qualified queries.

For recommendations on indexes for star schemas, see Creating indexes for efficient star-join processing on page 782.

Examples: query with three dimension tables: Suppose that you have a store in San Jose and want information about sales of audio equipment from that store in 2000. For this example, you want to join the following tables:
v A fact table for SALES (S)
v A dimension table for TIME (T) with columns for an ID, month, quarter, and year
v A dimension table for geographic LOCATION (L) with columns for an ID, city, region, and country
v A dimension table for PRODUCT (P) with columns for an ID, product item, class, and inventory

You could write the following query to join the tables:
SELECT * FROM SALES S, TIME T, PRODUCT P, LOCATION L
  WHERE S.TIME = T.ID
    AND S.PRODUCT = P.ID
    AND S.LOCATION = L.ID
    AND T.YEAR = 2000
    AND P.CLASS = 'SAN JOSE';
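For the star join method to work well with this query, the fact table needs a multi-column index on the columns that are joined to the dimension tables, as discussed earlier. A minimal sketch follows; the index name and the column order are only illustrative, because the best order depends on the density calculations described in the earlier section on star-join indexes:

CREATE INDEX XSALES_TPL
  ON SALES (TIME, PRODUCT, LOCATION);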
Table 118. Plan table output for a star join example with TIME, PRODUCT, and LOCATION

QUERYNO  QBLOCKNO  METHOD  TNAME     JOIN_TYPE  SORTN_JOIN  ACCESSTYPE
1        1         0       TIME      S          Y           R
1        1         1       PRODUCT   S          Y           R
1        1         1       LOCATION  S          Y           R
1        1         1       SALES     S                      I
All snowflakes are processed before the central part of the star join, as individual query blocks, and are materialized into work files. There is a work file for each snowflake. The EXPLAIN output identifies these work files by naming them DSN_DIM_TBLX(nn), where nn indicates the corresponding QBLOCKNO for the snowflake.

This next example shows the plan for a star join that contains two snowflakes. Suppose that two new tables MANUFACTURER (M) and COUNTRY (C) are added to the tables in the previous example to break dimension tables PRODUCT (P) and LOCATION (L) into snowflakes:
v The PRODUCT table has a new column MID that represents the manufacturer.
v Table MANUFACTURER (M) has columns for MID and name to contain manufacturer information.
v The LOCATION table has a new column CID that represents the country.
v Table COUNTRY (C) has columns for CID and name to contain country information.

You could write the following query to join all the tables:
SELECT *
  FROM SALES S, TIME T, PRODUCT P, MANUFACTURER M,
       LOCATION L, COUNTRY C
  WHERE S.TIME = T.ID AND
        S.PRODUCT = P.ID AND
        P.MID = M.MID AND
        S.LOCATION = L.ID AND
        L.CID = C.CID AND
        T.YEAR = 2000 AND
        M.NAME = 'some_company';
The joined table pairs (PRODUCT, MANUFACTURER) and (LOCATION, COUNTRY) are snowflakes. The EXPLAIN output of this query looks like Table 119.
Table 119. Plan table output for a star join example with snowflakes

  QUERYNO  QBLOCKNO  METHOD  TNAME             JOIN_TYPE  SORTN_JOIN  ACCESSTYPE
  1        1         0       TIME              S          Y           R
  1        1         1       DSN_DIM_TBLX(02)  S          Y           R
  1        1         1       SALES             S                      I
  1        1         1       DSN_DIM_TBLX(03)             Y           T
  1        2         0       PRODUCT                                  R
  1        2         1       MANUFACTURER                             I
  1        3         0       LOCATION                                 R
  1        3         4       COUNTRY                                  I
Note: This query consists of three query blocks:
v QBLOCKNO=1: The main star join block
v QBLOCKNO=2: A snowflake (PRODUCT, MANUFACTURER) that is materialized into work file DSN_DIM_TBLX(02)
v QBLOCKNO=3: A snowflake (LOCATION, COUNTRY) that is materialized into work file DSN_DIM_TBLX(03)
The joins in the snowflakes are processed first, and each snowflake is materialized into a work file. Therefore, when the main star join block (QBLOCKNO=1) is processed, it contains four tables: SALES (the fact table), TIME (a base dimension table), and the two snowflake work files. In this example, in the main star join block, the star join method is used for the first three tables (as indicated by S in the JOIN_TYPE column of the plan table) and the remaining work file is joined by the nested loop join with sparse index access on the work file (as indicated by T in the ACCESSTYPE column for DSN_DIM_TBLX(03)).
SELECT C.COUNTRY, P.PRDNAME, SUM(F.SPRICE)
  FROM SALES F, TIME T, PROD P, LOC L, SCOUN C
  WHERE F.TID = T.TID AND
        F.PID = P.PID AND
        F.LID = L.LID AND
        L.CID = C.CID AND
        P.PCODE IN (4, 7, 21, 22, 53)
  GROUP BY C.COUNTRY, P.PRDNAME;
For this query, two work files can be cached in memory. These work files, PROD and DSN_DIM_TBLX(02), are indicated by ACCESSTYPE=T.
To determine the size of the dedicated virtual memory pool, perform the following steps:
1. Determine the value of A. Estimate the number of star join queries that run concurrently. In this example, based on the type of operation, up to 12 star join queries are expected to run concurrently. Therefore, A = 12.
2. Determine the value of B. Estimate the average number of work files that a star join query uses. In this example, the star join query uses two work files, PROD and DSN_DIM_TBLX(02). Therefore, B = 2.
3. Determine the value of C. Estimate the number of work-file rows, the maximum length of the key, and the total of the maximum length of the relevant columns. Multiply these three values together to find the size of the data caching space for the work file, or the value of C. Both PROD and DSN_DIM_TBLX(02) are used to determine the value of C.
   Recommendation: Average the values for a representative sample of work files, and round the value up to determine an estimate for a value of C.
   v The number of work-file rows depends on the number of rows that match the predicate. For PROD, 87 rows are stored in the work file because 87 rows match the IN-list predicate. No selective predicate is used for DSN_DIM_TBLX(02), so the entire result of the join is stored in the work file. The work file for DSN_DIM_TBLX(02) holds 2800 rows.
   v The maximum length of the key depends on the data type definition of the table's key column. For PID, the key column for PROD, the maximum length is 4. DSN_DIM_TBLX(02) is a work file that results from the join of LOC and SCOUN. The key column that is used in the join is LID from the LOC table. The maximum length of LID is 4.
   v The maximum data length depends on the maximum length of the key column and the maximum length of the column that is selected as part of the star join. Add to the maximum data length 1 byte for nullable columns, 2 bytes for varying-length columns, and 3 bytes for nullable and varying-length columns.
     For the PROD work file, the maximum data length is the maximum length of PID, which is 4, plus the maximum length of PRDNAME, which is 24. Therefore, the maximum data length for the PROD work file is 28. For the DSN_DIM_TBLX(02) work file, the maximum data length is the maximum length of LID, which is 4, plus the maximum length of COUNTRY, which is 36. Therefore, the maximum data length for the DSN_DIM_TBLX(02) work file is 40.
     For PROD, C = (87) * (4 + 28) = 2784 bytes. For DSN_DIM_TBLX(02), C = (2800) * (4 + 40) = 123200 bytes.
     The average of these two estimated values for C is approximately 62 KB. Because the number of rows in each work file can vary depending on the selection criteria in the predicate, the value of C should be rounded up to the nearest multiple of 100 KB. Therefore, C = 100 KB.
4. Multiply (A) * (B) * (C) to determine the size of the pool in MB. The size of the pool is determined by multiplying (12) * (2) * (100 KB) = 2.4 MB.
Table 121. The number of pages read by prefetch, by buffer pool size (continued)

  Buffer pool size  Number of buffers   Pages read by prefetch (for each asynchronous I/O)
  8 KB              <=112 buffers       4 pages
                    113-499 buffers     8 pages
                    500+ buffers        16 pages
  16 KB             <=56 buffers        2 pages
                    57-249 buffers      4 pages
                    250+ buffers        8 pages
  32 KB             <=16 buffers        0 pages (prefetch disabled)
                    17-99 buffers       2 pages
                    100+ buffers        4 pages
For certain utilities (LOAD, REORG, RECOVER), the prefetch quantity can be twice as much.
When sequential prefetch is used: Sequential prefetch is generally used for a table space scan. For an index scan that accesses eight or more consecutive data pages, DB2 requests sequential prefetch at bind time. The index must have a cluster ratio of 80% or higher. Both data pages and index pages are prefetched.
List prefetch does not preserve the data ordering given by the index. Because the RIDs are sorted in page number order before accessing the data, the data is not retrieved in order by any column. If the data must be ordered for an ORDER BY clause or any other reason, it requires an additional sort. In a hybrid join, if the index is highly clustered, the page numbers might not be sorted before accessing the data. List prefetch can be used with most matching predicates for an index scan. IN-list predicates are the exception; they cannot be the matching predicates when list prefetch is used.
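For example, with hypothetical table and index names, a query such as the following might use list prefetch through a nonclustering index on ACCT_STATUS (depending on the available indexes and statistics); because the RIDs are sorted in page number order before the data pages are accessed, DB2 must perform an additional sort to satisfy the ORDER BY clause:

  SELECT ACCT_ID, BALANCE
    FROM ACCOUNTS
    WHERE ACCT_STATUS = 'PAST DUE'
    ORDER BY BALANCE DESC;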
[Figure: three consecutive page ranges, RUN1 beginning at page A, RUN2 at page B, and RUN3 at page C, each P pages in length.]
For initial data access sequential, prefetch is requested starting at page A for P pages (RUN1 and RUN2). The prefetch quantity is always P pages. For subsequent page requests where the page is 1) page sequential and 2) data access sequential is still in effect, prefetch is requested as follows:
v If the desired page is in RUN1, no prefetch is triggered because it was already triggered when data access sequential was first declared.
v If the desired page is in RUN2, prefetch for RUN3 is triggered; RUN2 becomes RUN1, RUN3 becomes RUN2, and the new RUN3 becomes the page range starting at C+P for a length of P pages.
If a data access pattern develops such that data access sequential is no longer in effect and, thereafter, a new pattern develops that is sequential, then initial data access sequential is declared again and handled accordingly.
Because, at bind time, the number of pages to be accessed can only be estimated, sequential detection acts as a safety net and is employed when the data is being accessed sequentially. In extreme situations, when certain buffer pool thresholds are reached, sequential prefetch can be disabled. For a description of buffer pools and thresholds, see Part 5 (Volume 2) of DB2 Administration Guide.
Sorts of data
After you run EXPLAIN, DB2 sorts are indicated in PLAN_TABLE. The sorts can be either sorts of the composite table or the new table. If a single row of PLAN_TABLE has a Y in more than one of the sort composite columns, then one sort accomplishes two things. (DB2 will not perform two sorts when two Ys are in the same row.) For instance, if both SORTC_ORDERBY and SORTC_UNIQ are Y in one row of PLAN_TABLE, then a single sort puts the rows in order and removes any duplicate rows as well. The only reason DB2 sorts the new table is for join processing, which is indicated by SORTN_JOIN.
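For example, in a query such as the following, a single sort of the composite table can both remove duplicates and order the result; in that case, SORTC_UNIQ and SORTC_ORDERBY both contain Y in the same row of PLAN_TABLE (whether any sort is needed at all depends on the available indexes):

  SELECT DISTINCT WORKDEPT
    FROM DSN8810.EMP
    ORDER BY WORKDEPT;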
The performance of the sort by the GROUP BY clause is improved when the query accesses a single table and when the GROUP BY column has no index.
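For example, assuming for illustration that no index exists on the EDLEVEL column, a single-table query such as the following is a candidate for that improved GROUP BY sort:

  SELECT EDLEVEL, COUNT(*)
    FROM DSN8810.EMP
    GROUP BY EDLEVEL;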
Sorts of RIDs
To perform list prefetch, DB2 sorts RIDs into ascending page number order. This sort is very fast and is done totally in memory. A RID sort is usually not indicated in the PLAN_TABLE, but a RID sort normally is performed whenever list prefetch is used. The only exception to this rule is when a hybrid join is performed and a single, highly clustered index is used on the inner table. In this case SORTN_JOIN is N, indicating that the RID list for the inner table was not sorted.
With parallelism:
v At OPEN CURSOR, parallelism is asynchronously started, regardless of whether a sort is required. Control returns to the application immediately after the parallelism work is started.
v If there is a RID sort, but no data sort, then parallelism is not started until the first fetch. This works the same way as with no parallelism.
Merge
The merge process is more efficient than materialization, as described in Performance of merge versus materialization on page 842. In the merge process, the statement that references the view or table expression is combined with the fullselect that defined the view or table expression. This combination creates a logically equivalent statement. This equivalent statement is executed against the database. Example: Consider the following statements, one of which defines a view, the other of which references the view:
View-defining statement:

  CREATE VIEW VIEW1 (VC1,VC21,VC32) AS
    SELECT C1,C2,C3 FROM T1
      WHERE C1 > C3;

View referencing statement:

  SELECT VC1,VC21
    FROM VIEW1
    WHERE VC1 IN ('A','B','C');
The fullselect of the view-defining statement can be merged with the view-referencing statement to yield the following logically equivalent statement:
Merged statement:

  SELECT C1,C2 FROM T1
    WHERE C1 > C3 AND C1 IN ('A','B','C');
Example: The following statements show another example of when a view and table expression can be merged:
  SELECT * FROM V1 X LEFT JOIN
    (SELECT * FROM T2) Y ON X.C1=Y.C1
    LEFT JOIN T3 Z ON X.C1=Z.C1;

Merged statement:

  SELECT * FROM V1 X LEFT JOIN
    T2 ON X.C1 = T2.C1
    LEFT JOIN T3 Z ON X.C1 = Z.C1;
Materialization
Views and table expressions cannot always be merged. Example: Look at the following statements:
View defining statement:

  CREATE VIEW VIEW1 (VC1,VC2) AS
    SELECT SUM(C1),C2 FROM T1
      GROUP BY C2;

View referencing statement:

  SELECT MAX(VC1)
    FROM VIEW1;
Column VC1 occurs as the argument of an aggregate function in the view referencing statement. The values of VC1, as defined by the view-defining fullselect, are the result of applying the aggregate function SUM(C1) to groups after grouping the base table T1 by column C2. No equivalent single SQL SELECT statement can be executed against the base table T1 to achieve the intended result. There is no way to specify that aggregate functions should be applied successively.
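Conceptually, materialization first evaluates the view-defining fullselect into an intermediate result and then applies the referencing statement to that result. A nested table expression expresses the same two-step logic directly; the following is a sketch of the logically equivalent form (not necessarily how DB2 processes the statement internally):

  SELECT MAX(VC1)
    FROM (SELECT SUM(C1), C2
            FROM T1
            GROUP BY C2) AS X(VC1, VC2);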
GROUP BY X X
DISTINCT X X X
UNION X X X
Table 122. Cases when DB2 performs view or table expression materialization (continued). The X indicates a case of materialization. Notes follow the table. View definition or table expression uses...2 SELECT FROM view or table expression uses...1 Aggregate function Aggregate function DISTINCT SELECT subset of view or table expression columns Aggregate function X X Aggregate function DISTINCT X X UNION ALL(4) X
GROUP BY X X
DISTINCT X X X
UNION X X X
Notes to Table 122 on page 837: 1. If the view is referenced as the target of an INSERT, UPDATE, or DELETE, then view merge is used to satisfy the view reference. Only updatable views can be the target in these statements. See Chapter 5 of DB2 SQL Reference for information about which views are read-only (not updatable). An SQL statement can reference a particular view multiple times where some of the references can be merged and some must be materialized. 2. If a SELECT list contains a host variable in a table expression, then materialization occurs. For example:
SELECT C1 FROM (SELECT :HV1 AS C1 FROM T1) X;
If a view or nested table expression is defined to contain a user-defined function, and if that user-defined function is defined as NOT DETERMINISTIC or EXTERNAL ACTION, then the view or nested table expression is always materialized. 3. Additional details about materialization with outer joins: v If a WHERE clause exists in a view or table expression, and it does not contain a column, materialization occurs. Example:
SELECT X.C1 FROM (SELECT C1 FROM T1 WHERE 1=1) X LEFT JOIN T2 Y ON X.C1=Y.C1;
v If the outer join is a full outer join and the SELECT list of the view or nested table expression does not contain a standalone column for the column that is used in the outer join ON clause, then materialization occurs. Example:
SELECT X.C1 FROM (SELECT C1+10 AS C2 FROM T1) X FULL JOIN T2 Y ON X.C2=Y.C2;
v If there is no column in a SELECT list of a view or nested table expression, materialization occurs. Example:
SELECT X.C1 FROM
  (SELECT 1+2+:HV1 AS C1 FROM T1) X
  LEFT JOIN T2 Y ON X.C1=Y.C1;
v If the SELECT list of a view or nested table expression contains a CASE expression, and the result of the CASE expression is referenced in the outer query block, then materialization occurs. Example:
SELECT X.C1 FROM T1 X LEFT JOIN (SELECT CASE C2 WHEN 5 THEN 10 ELSE 20 END AS YC1 FROM T2) Y ON X.C1 = Y.YC1;
4. DB2 cannot avoid materialization for UNION ALL in all cases. Some of the situations in which materialization occurs include:
v When the view is the operand in an outer join for which nulls are used for non-matching values, materialization occurs. This situation happens when the view is either operand in a full outer join, the right operand in a left outer join, or the left operand in a right outer join.
v If the number of tables would exceed 225 after distribution, then distribution will not occur, and the result will be materialized.
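For instance, with a hypothetical view VUALL that is defined with UNION ALL, using the view as the right operand of a left outer join means that nulls are supplied for non-matching rows, so DB2 materializes the view:

  SELECT T1.C1, VU.C2
    FROM T1 LEFT JOIN VUALL VU
      ON T1.C1 = VU.C1;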
Table 123 shows a subset of columns in a plan table for the query.
Table 123. Plan table output for an example with view materialization

  QBLOCKNO  PLANNO  QBLOCK_TYPE  TNAME   TABLE_TYPE  METHOD
  1         1       SELECT       DEPT    T           0
  2         1       NOCOSUB      V1DIS   W           0
  2         2       NOCOSUB              ?           3
  3         1       NOCOSUB      EMP     T           0
  3         2       NOCOSUB              ?           3
Notice how TNAME contains the name of the view and TABLE_TYPE contains W to indicate that DB2 chooses materialization for the reference to the view because of the use of SELECT DISTINCT in the view definition. Example: Consider the following statements, which define a view and reference the view:
View defining statement:

  CREATE VIEW V1NODIS (SALARY, WORKDEPT) as
    (SELECT SALARY, WORKDEPT FROM DSN8810.EMP)

View referencing statement:

  SELECT * FROM DSN8810.DEPT
    WHERE DEPTNO IN (SELECT WORKDEPT FROM V1NODIS)
Because this view is defined without DISTINCT, DB2 chooses merge instead of materialization. In the sample output, the name of the view does not appear in the plan table, but the table name on which the view is based does appear. Table 124 shows a sample plan table for the query.
Table 124. Plan table output for an example with view merge

  QBLOCKNO  PLANNO  QBLOCK_TYPE  TNAME  TABLE_TYPE  METHOD
  1         1       SELECT       DEPT   T           0
  2         1       NOCOSUB      EMP    T           0
  2         2       NOCOSUB             ?           3
For an example of when a view definition contains a UNION ALL and DB2 can distribute joins and aggregations and avoid materialization, see Using EXPLAIN to determine UNION activity and query rewrite. When DB2 avoids materialization in such cases, TABLE_TYPE contains a Q to indicate that DB2 uses an intermediate result that is not materialized, and TNAME shows the name of this intermediate result as DSNWFQB(xx), where xx is the number of the query block that produced the result.
The QBLOCK_TYPE column in the plan table indicates union activity. For a UNION ALL, the column contains UNIONA. For UNION, the column contains UNION. When QBLOCK_TYPE=UNION, the METHOD column on the same row is set to 3 and the SORTC_UNIQ column is set to Y to indicate that a sort is necessary to remove duplicates. As with other views and table expressions, the plan table also shows when DB2 uses materialization instead of merge. Example: Consider the following statements, which define a view, reference the view, and show how DB2 rewrites the referencing statement:
View defining statement: View is created on three tables that contain weekly data

  CREATE VIEW V1 (CUSTNO, CHARGES, DATE) as
    SELECT CUSTNO, CHARGES, DATE
      FROM WEEK1
      WHERE DATE BETWEEN '01/01/2000' And '01/07/2000'
    UNION ALL
    SELECT CUSTNO, CHARGES, DATE
      FROM WEEK2
      WHERE DATE BETWEEN '01/08/2000' And '01/14/2000'
    UNION ALL
    SELECT CUSTNO, CHARGES, DATE
      FROM WEEK3
      WHERE DATE BETWEEN '01/15/2000' And '01/21/2000';

View referencing statement: For each customer in California, find the average charges during the first and third Friday of January 2000

  SELECT V1.CUSTNO, AVG(V1.CHARGES)
    FROM CUST, V1
    WHERE CUST.CUSTNO=V1.CUSTNO
      AND CUST.STATE='CA'
      AND DATE IN ('01/07/2000','01/21/2000')
    GROUP BY V1.CUSTNO;

Rewritten statement (assuming that CHARGES is defined as NOT NULL):

  SELECT CUSTNO_U, SUM(SUM_U)/SUM(CNT_U)
    FROM
      ( SELECT WEEK1.CUSTNO, SUM(CHARGES), COUNT(CHARGES)
          FROM CUST, WEEK1
          WHERE CUST.CUSTNO=WEEK1.CUSTNO AND CUST.STATE='CA'
            AND DATE BETWEEN '01/01/2000' And '01/07/2000'
            AND DATE IN ('01/07/2000','01/21/2000')
          GROUP BY WEEK1.CUSTNO
        UNION ALL
        SELECT WEEK3.CUSTNO, SUM(CHARGES), COUNT(CHARGES)
          FROM CUST, WEEK3
          WHERE CUST.CUSTNO=WEEK3.CUSTNO AND CUST.STATE='CA'
            AND DATE BETWEEN '01/15/2000' And '01/21/2000'
            AND DATE IN ('01/07/2000','01/21/2000')
          GROUP BY WEEK3.CUSTNO
      ) AS X(CUSTNO_U,SUM_U,CNT_U)
    GROUP BY CUSTNO_U;
Table 125 shows a subset of columns in a plan table for the query.
Table 125. Plan table output for an example with a view with UNION ALLs

  QBLOCKNO  PLANNO  TNAME        TABLE_TYPE  METHOD  QBLOCK_TYPE  PARENT_QBLOCKNO
  1         1       DSNWFQB(02)  Q           0                    0
  1         2                    ?           3                    0
  2         1                    ?           0       UNIONA       1
  3         1       CUST         T           0                    2
  3         2       WEEK1        T           1                    2
  4         1       CUST         T           0                    2
  4         2       WEEK3        T           2                    2
Notice how DB2 eliminates the second subselect of the view definition from the rewritten query and how the plan table indicates this removal by showing a UNION ALL for only the first and third subselect in the view definition. The Q in the TABLE_TYPE column indicates that DB2 does not materialize the view.
COL IN (list)
Note: Where op is =, <>, >, <, <=, or >=, and literal is either a host variable, constant, or special register. The literals in the BETWEEN predicate need not be identical.
Implied predicates generated through predicate transitive closure are also considered for first step evaluation.
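For example, with hypothetical tables T1 and T2, the following WHERE clause allows DB2 to generate the implied predicate T2.C1 = 5 through predicate transitive closure and to consider it when selecting the access path:

  SELECT *
    FROM T1, T2
    WHERE T1.C1 = T2.C1
      AND T1.C1 = 5;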
adequate information to make an estimate. That estimate is not likely to be 100% accurate, but is likely to be more accurate than any estimate that is in cost category B. DB2 puts estimates into cost category B when it is forced to use default values for its estimates, such as when no statistics are available, or because host variables are used in a query. See the description of the REASON column in Table 127 on page 844 for more information about how DB2 determines into which cost category an estimate goes.
v Give a system programmer a basis for entering service-unit values by which to govern dynamic statements. Information about using predictive governing is in Part 5 (Volume 2) of DB2 Administration Guide.
This section describes the following tasks to obtain and use cost estimate information from EXPLAIN:
1. Creating a statement table
2. Populating and maintaining a statement table on page 845
3. Retrieving rows from a statement table on page 845
4. The implications of cost categories on page 846
For more information about how to change applications to handle the SQLCODES that are associated with predictive governing, see Writing an application to handle predictive governing on page 602.
CREATE TABLE DSN_STATEMNT_TABLE
  ( QUERYNO        INTEGER       NOT NULL WITH DEFAULT,
    APPLNAME       CHAR(8)       NOT NULL WITH DEFAULT,
    PROGNAME       VARCHAR(128)  NOT NULL WITH DEFAULT,
    COLLID         VARCHAR(128)  NOT NULL WITH DEFAULT,
    GROUP_MEMBER   CHAR(8)       NOT NULL WITH DEFAULT,
    EXPLAIN_TIME   TIMESTAMP     NOT NULL WITH DEFAULT,
    STMT_TYPE      CHAR(6)       NOT NULL WITH DEFAULT,
    COST_CATEGORY  CHAR(1)       NOT NULL WITH DEFAULT,
    PROCMS         INTEGER       NOT NULL WITH DEFAULT,
    PROCSU         INTEGER       NOT NULL WITH DEFAULT,
    REASON         VARCHAR(254)  NOT NULL WITH DEFAULT,
    STMT_ENCODE    CHAR(1)       NOT NULL WITH DEFAULT);
Your statement table can use an older format in which the STMT_ENCODE column does not exist, PROGNAME has a data type of CHAR(8), and COLLID has a data type of CHAR(18). However, use the most current format because it gives you the most information. You can alter a statement table in the older format to a statement table in the current format. Table 127 on page 844 shows the content of each column.
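As an illustration only (the exact statements that apply depend on your release and on how the existing table was defined), an older-format statement table might be brought to the current format with ALTER TABLE statements along these lines:

  ALTER TABLE DSN_STATEMNT_TABLE
    ALTER COLUMN PROGNAME SET DATA TYPE VARCHAR(128);
  ALTER TABLE DSN_STATEMNT_TABLE
    ALTER COLUMN COLLID SET DATA TYPE VARCHAR(128);
  ALTER TABLE DSN_STATEMNT_TABLE
    ADD STMT_ENCODE CHAR(1) NOT NULL WITH DEFAULT;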
Table 127. Descriptions of columns in DSN_STATEMNT_TABLE

QUERYNO
  A number that identifies the statement being explained. See the description of the QUERYNO column in Table 101 on page 794 for more information. If QUERYNO is not unique, the value of EXPLAIN_TIME is unique.
APPLNAME
  The name of the application plan for the row, or blank. See the description of the APPLNAME column in Table 101 on page 794 for more information.
PROGNAME
  The name of the program or package containing the statement being explained, or blank. See the description of the PROGNAME column in Table 101 on page 794 for more information.
COLLID
  The collection ID for the package. Applies only to an embedded EXPLAIN statement executed from a package or to a statement that is explained when binding a package. Blank if not applicable. The value DSNDYNAMICSQLCACHE indicates that the row is for a cached statement.
GROUP_MEMBER
  The member name of the DB2 that executed EXPLAIN, or blank. See the description of the GROUP_MEMBER column in Table 101 on page 794 for more information.
EXPLAIN_TIME
  The time at which the statement is processed. This time is the same as the BIND_TIME column in PLAN_TABLE.
STMT_TYPE
  The type of statement being explained. Possible values are:
  SELECT   SELECT
  INSERT   INSERT
  UPDATE   UPDATE
  DELETE   DELETE
  SELUPD   SELECT with FOR UPDATE OF
  DELCUR   DELETE WHERE CURRENT OF CURSOR
  UPDCUR   UPDATE WHERE CURRENT OF CURSOR
COST_CATEGORY
  Indicates if DB2 was forced to use default values when making its estimates. Possible values:
  A   Indicates that DB2 had enough information to make a cost estimate without using default values.
  B   Indicates that some condition exists for which DB2 was forced to use default values. See the values in REASON to determine why DB2 was unable to put this estimate in cost category A.
PROCMS
  The estimated processor cost, in milliseconds, for the SQL statement. The estimate is rounded up to the next integer value. The maximum value for this cost is 2147483647 milliseconds, which is equivalent to approximately 24.8 days. If the estimated value exceeds this maximum, the maximum value is reported.
PROCSU
  The estimated processor cost, in service units, for the SQL statement. The estimate is rounded up to the next integer value. The maximum value for this cost is 2147483647 service units. If the estimated value exceeds this maximum, the maximum value is reported.
REASON
  A string that indicates the reasons for putting an estimate into cost category B:
  HAVING CLAUSE   A subselect in the SQL statement contains a HAVING clause.
  HOST VARIABLES   The statement uses host variables, parameter markers, or special registers.
  REFERENTIAL CONSTRAINTS   Referential constraints of the type CASCADE or SET NULL exist on the target table of a DELETE statement.
  TABLE CARDINALITY   The cardinality statistics are missing for one or more of the tables that are used in the statement. Or, the statement required the materialization of views or nested table expressions.
  TRIGGERS   Triggers are defined on the target table of an INSERT, UPDATE, or DELETE statement.
  UDF   The statement uses user-defined functions.
STMT_ENCODE
  Encoding scheme of the statement. If the statement represents a single CCSID set, the possible values are:
  A   ASCII
  E   EBCDIC
  U   Unicode
  If the statement has multiple CCSID sets, the value is M.
The QUERYNO, APPLNAME, PROGNAME, COLLID, and EXPLAIN_TIME columns contain the same values as corresponding columns of PLAN_TABLE for a given plan. You can use these columns to join the plan table and statement table:
  SELECT A.*, PROCMS, COST_CATEGORY
    FROM JOE.PLAN_TABLE A, JOE.DSN_STATEMNT_TABLE B
    WHERE A.APPLNAME = 'APPL1' AND
          A.APPLNAME = B.APPLNAME AND
          A.QUERYNO = B.QUERYNO AND
          A.PROGNAME = B.PROGNAME AND
          A.COLLID = B.COLLID AND
          A.BIND_TIME = B.EXPLAIN_TIME
    ORDER BY A.QUERYNO, A.QBLOCKNO, A.PLANNO, A.MIXOPSEQ;
Figure 239 shows parallel I/O operations. With parallel I/O, DB2 prefetches data from the 3 partitions at one time. The processor processes the first request from each partition, then the second request from each partition, and so on. The processor is not waiting for I/O, but there is still only one processing task.
[Figure 239: Query processing using parallel I/O operations. I/O for partitions P1, P2, and P3 proceeds concurrently on separate I/O streams while a single CP task processes the requests (R1, R2, R3) from each partition in turn.]
Figure 240 on page 849 shows parallel CP processing. With CP parallelism, DB2 can use multiple parallel tasks to process the query. Three tasks working concurrently can greatly reduce the overall elapsed time for data-intensive and processor-intensive queries. The same principle applies for Sysplex query parallelism, except that the work can cross the boundaries of a single CPC.
Figure 240. CP and I/O processing techniques. Query processing using CP parallelism. The tasks can be contained within a single CPC or can be spread out among the members of a data sharing group.
Queries that are most likely to take advantage of parallel operations: Queries that can take advantage of parallel processing are:
v Those in which DB2 spends most of the time fetching pages (an I/O-intensive query). A typical I/O-intensive query is something like the following query, assuming that a table space scan is used on many pages:
SELECT COUNT(*) FROM ACCOUNTS WHERE BALANCE > 0 AND DAYS_OVERDUE > 30;
v Those in which DB2 spends a lot of processor time and also, perhaps, I/O time, to process rows. Those include:
  - Queries with intensive data scans and high selectivity. Those queries involve large volumes of data to be scanned but relatively few rows that meet the search criteria.
  - Queries containing aggregate functions. Column functions (such as MIN, MAX, SUM, AVG, and COUNT) usually involve large amounts of data to be scanned but return only a single aggregate result.
  - Queries accessing long data rows. Those queries access tables with long data rows, and the ratio of rows per page is very low (one row per page, for example).
  - Queries requiring large amounts of central processor time. Those queries might be read-only queries that are complex, data-intensive, or that involve a sort.
  A typical processor-intensive query is something like:
  SELECT MAX(QTY_ON_HAND) AS MAX_ON_HAND,
         AVG(PRICE) AS AVG_PRICE,
         AVG(DISCOUNTED_PRICE) AS DISC_PRICE,
         SUM(TAX) AS SUM_TAX,
         SUM(QTY_SOLD) AS SUM_QTY_SOLD,
         SUM(QTY_ON_HAND - QTY_BROKEN) AS QTY_GOOD,
         AVG(DISCOUNT) AS AVG_DISCOUNT,
         ORDERSTATUS,
         COUNT(*) AS COUNT_ORDERS
    FROM ORDER_TABLE
    WHERE SHIPPER = 'OVERNIGHT' AND
          SHIP_DATE < DATE('1996-01-01')
    GROUP BY ORDERSTATUS
    ORDER BY ORDERSTATUS;
Terminology: When the term task is used with information about parallel processing, the context should be considered. For parallel query CP processing or Sysplex query parallelism, a task is an actual z/OS execution unit used to process a query. For parallel I/O processing, a task simply refers to the processing of one of the concurrent I/O streams.
A parallel group is the term used to name a particular set of parallel operations (parallel tasks or parallel I/O operations). A query can have more than one parallel group, but each parallel group within the query is identified by its own unique ID number.
The degree of parallelism is the number of parallel tasks or I/O operations that DB2 determines can be used for the operations on the parallel group. The maximum number of parallel operations that DB2 can generate is 254. However, for most queries and DB2 environments, DB2 chooses a lower number. You might need to limit the maximum number further because more parallel operations consume processor, real storage, and I/O resources. If resource consumption is high in your parallelism environment, use the MAX DEGREE field on installation panel DSNTIP4 to explicitly limit the maximum number of parallel operations that DB2 generates, as explained in Enabling parallel processing.
You can also change the special register default from 1 to ANY for the entire DB2 subsystem by modifying the CURRENT DEGREE field on installation panel DSNTIP4.
v If you bind with isolation CS, choose also the option CURRENTDATA(NO), if possible. This option can improve performance in general, but it also ensures that DB2 will consider parallelism for ambiguous cursors. If you bind with CURRENTDATA(YES) and DB2 cannot tell if the cursor is read-only, DB2 does not consider parallelism. When a cursor is read-only, it is recommended that you specify the FOR FETCH ONLY or FOR READ ONLY clause on the DECLARE CURSOR statement to explicitly indicate that the cursor is read-only, as shown in the example after this list.
v The virtual buffer pool parallel sequential threshold (VPPSEQT) value must be large enough to provide adequate buffer pool space for parallel processing. For a description of buffer pools and thresholds, see Part 5 (Volume 2) of DB2 Administration Guide.
If you enable parallel processing when DB2 estimates a given query's I/O and central processor cost is high, multiple parallel tasks can be activated if DB2 estimates that elapsed time can be reduced by doing so.
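For example, an application can enable parallelism for its dynamic SQL and declare a cursor that is explicitly read-only (the cursor, table, and column names here are illustrative only):

  SET CURRENT DEGREE = 'ANY';

  DECLARE C1 CURSOR FOR
    SELECT ACCT_ID, BALANCE
      FROM ACCOUNTS
    FOR READ ONLY;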
Recommendation: For parallel sorts, allocate sufficient work files to maintain performance. Special requirements for CP parallelism: DB2 must be running on a central processor complex that contains two or more tightly coupled processors (sometimes called central processors, or CPs). If only one CP is online when the query is bound, DB2 considers only parallel I/O operations. DB2 also considers only parallel I/O operations if you declare a cursor WITH HOLD and bind with isolation RR or RS. For more restrictions on parallelism, see Table 128. For complex queries, run the query in parallel within a member of a data sharing group. With Sysplex query parallelism, use the power of the data sharing group to process individual complex queries on many members of the data sharing group. For more information about how you can use the power of the data sharing group to run complex queries, see Chapter 6 of DB2 Data Sharing: Planning and Administration. Limiting the degree of parallelism: If you want to limit the maximum number of parallel tasks that DB2 generates, you can use the MAX DEGREE field on installation panel DSNTIP4. Changing MAX DEGREE, however, is not the way to turn parallelism off. You use the DEGREE bind parameter or CURRENT DEGREE special register to turn parallelism off.
Yes Yes No No
Yes Yes No No
Merge scan join on more than one column Queries that qualify for direct row access Materialized views or materialized nested table expressions at reference time EXISTS within WHERE predicate
No Yes
No Yes
No No
DB2 avoids certain hybrid joins when parallelism is enabled: To ensure that you can take advantage of parallelism, DB2 does not pick one type of hybrid join (SORTN_JOIN=Y) when the plan or package is bound with CURRENT DEGREE=ANY or if the CURRENT DEGREE special register is set to ANY.
Table 129. Part of PLAN_TABLE for single table access

  TNAME  METHOD  ACCESS_DEGREE  ACCESS_PGROUP_ID  JOIN_DEGREE  JOIN_PGROUP_ID  SORTC_PGROUP_ID  SORTN_PGROUP_ID
  T1     0       3              1                 (null)       (null)          (null)           (null)
v Example 2: nested loop join
  Consider a query that results in a series of nested loop joins for three tables, T1, T2 and T3. T1 is the outermost table, and T3 is the innermost table. DB2 decides at bind time to initiate three concurrent requests to retrieve data from each of the three tables. Each request accesses part of T1 and all of T2 and T3. For the nested loop join method with sort, all the retrievals are in the same parallel group except for star join with ACCESSTYPE=T (sparse index). Part of PLAN_TABLE appears as shown in Table 130:
Table 130. Part of PLAN_TABLE for a nested loop join

  TNAME  METHOD  ACCESS_DEGREE  ACCESS_PGROUP_ID  JOIN_DEGREE  JOIN_PGROUP_ID  SORTC_PGROUP_ID  SORTN_PGROUP_ID
  T1     0       3              1                 (null)       (null)          (null)           (null)
  T2     1       3              1                 3            1               (null)           (null)
  T3     1       3              1                 3            1               (null)           (null)
v Example 3: merge scan join
  Consider a query that causes a merge scan join between two tables, T1 and T2. DB2 decides at bind time to initiate three concurrent requests for T1 and six concurrent requests for T2. The scan and sort of T1 occurs in one parallel group. The scan and sort of T2 occurs in another parallel group. Furthermore, the merging phase can potentially be done in parallel. Here, a third parallel group is used to initiate three concurrent requests on each intermediate sorted table. Part of PLAN_TABLE appears as shown in Table 131:
Table 131. Part of PLAN_TABLE for a merge scan join

  TNAME  METHOD  ACCESS_DEGREE  ACCESS_PGROUP_ID  JOIN_DEGREE  JOIN_PGROUP_ID  SORTC_PGROUP_ID  SORTN_PGROUP_ID
  T1     0       3              1                 (null)       (null)          (null)           (null)
  T2     2       6              2                 3            3               1                2
In a multi-table join, DB2 might also execute the sort for a composite that involves more than one table in a parallel task. DB2 uses a cost basis model to determine whether to use parallel sort in all cases. When DB2 decides to use parallel sort, SORTC_PGROUP_ID and SORTN_PGROUP_ID indicate the parallel group identifier. Consider a query that joins three tables, T1, T2, and T3, and uses a merge scan join between T1 and T2, and then between the composite and T3. If DB2 decides, based on the cost model, that all sorts in this query are to be performed in parallel, part of PLAN_TABLE appears as shown in Table 132 on page 854:
Table 132. Part of PLAN_TABLE for a multi-table, merge scan join

  TNAME  METHOD  ACCESS_DEGREE  ACCESS_PGROUP_ID  JOIN_DEGREE  JOIN_PGROUP_ID  SORTC_PGROUP_ID  SORTN_PGROUP_ID
  T1     0       3              1                 (null)       (null)          (null)           (null)
  T2     2       6              2                 6            3               1                2
  T3     2       6              4                 6            5               3                4
v Example 4: hybrid join
  Consider a query that results in a hybrid join between two tables, T1 and T2. Furthermore, T1 needs to be sorted; as a result, in PLAN_TABLE the T2 row has SORTC_JOIN=Y. DB2 decides at bind time to initiate three concurrent requests for T1 and six concurrent requests for T2. Parallel operations are used for a join through a clustered index of T2. Because T2's RIDs can be retrieved by initiating concurrent requests on the clustered index, the joining phase is a parallel step. The retrieval of T2's RIDs and T2's rows are in the same parallel group. Part of PLAN_TABLE appears as shown in Table 133:
Table 133. Part of PLAN_TABLE for a hybrid join

  TNAME  METHOD  ACCESS_DEGREE  ACCESS_PGROUP_ID  JOIN_DEGREE  JOIN_PGROUP_ID  SORTC_PGROUP_ID  SORTN_PGROUP_ID
  T1     0       3              1                 (null)       (null)          (null)           (null)
  T2     4       6              2                 6            2               1                (null)
Locking considerations for repeatable read applications: For CP parallelism, locks are obtained independently by each task. Be aware that this situation can possibly increase the total number of locks taken for applications that: v Use an isolation level of repeatable read v Use CP parallelism v Repeatedly access the table space using a lock mode of IS without issuing COMMITs Recommendation: As is recommended for all repeatable-read applications, issue frequent COMMITs to release the lock resources that are held. Repeatable read or read stability isolation cannot be used with Sysplex query parallelism.
The default value for CURRENT DEGREE is 1 unless your installation has changed the default for the CURRENT DEGREE special register. You can use system controls to disable parallelism, as well. These are described in Part 5 (Volume 2) of DB2 Administration Guide.
[Figure: TSO or ISPF attaches the DSN initialization load module (alias DSN) and the DSN main load module, which in turn attaches (see note 1) or links (see note 2) to the application command processor.]
Figure 241. DSN task structure
Notes to Figure 241:
1. The RUN command with the CP option causes DSN to attach your program and create a new TCB.
2. The RUN command without the CP option causes DSN to link to your program.
Advantages: The application has one large load module and one plan.
Disadvantages: For large programs of this type, you want a more modular design, making the plan more flexible and easier to maintain. If you have one large plan, you must rebind the entire plan whenever you change a module that includes SQL statements (see note 1). You cannot pass control to another load module that makes SQL calls by using ISPLINK; rather, you must use LINK, XCTL, or LOAD and BALR.
1. To achieve a more modular construction when all parts of the program use SQL, consider using packages. See Chapter 17, Planning for DB2 program preparation, on page 381.
If you want to use ISPLINK, then call ISPF to run under DSN:
DSN RUN PROGRAM(ISPF) PLAN(MYPLAN) END
You then need to leave ISPF before you can start your application. Furthermore, the entire program is dependent on DB2; if DB2 is not running, no part of the program can begin or continue to run.
For a part that accesses DB2, the command can name a CLIST that starts DSN:
DSN RUN PROGRAM(PART1) PLAN(PLAN1) PARM(input from panel) END
Breaking the application into separate modules makes it more flexible and easier to maintain. Furthermore, some of the application might be independent of DB2; portions of the application that do not call DB2 can run, even if DB2 is not running. A stopped DB2 database does not interfere with parts of the program that refer only to other databases. Disadvantages: The modular application, on the whole, has to do more work. It calls several CLISTs, and each one must be located, loaded, parsed, interpreted, and executed. It also makes and breaks connections to DB2 more often than the single load module. As a result, you might lose some efficiency.
With the same modular structure as in the previous example, using CAF is likely to provide greater efficiency by reducing the number of CLISTs. This does not mean, however, that any DB2 function executes more quickly. Disadvantages: Compared to the modular structure using DSN, the structure using CAF is likely to require a more complex program, which in turn might require assembler language subroutines. For more information, see Chapter 30, Programming for the call attachment facility, on page 861.
CAF capabilities
A program using CAF can:
v Access DB2 from z/OS address spaces where TSO, IMS, or CICS do not exist.
v Access DB2 from multiple z/OS tasks in an address space.
v Access the DB2 IFI.
v Run when DB2 is down (though it cannot run SQL when DB2 is down).
v Run with or without the TSO terminal monitor program (TMP).
v Run without being a subtask of the DSN command processor (or of any DB2 code).
v Run above or below the 16-MB line. (The CAF code resides below the line.)
v Establish an explicit connection to DB2, through a CALL interface, with control over the exact state of the connection.
v Establish an implicit connection to DB2, by using SQL statements or IFI calls without first calling CAF, with a default plan name and subsystem identifier.
v Verify that your application is using the correct release of DB2.
v Supply event control blocks (ECBs), for DB2 to post, that signal startup or termination.
v Intercept return codes, reason codes, and abend codes from DB2 and translate them into messages as desired.
Task capabilities
Any task in an address space can establish a connection to DB2 through CAF. There can be only one connection for each task control block (TCB). A DB2 service request issued by a program running under a given task is associated with that task's connection to DB2. The service request operates independently of any DB2 activity under any other task.
Each connected task can run a plan. Multiple tasks in a single address space can specify the same plan, but each instance of a plan runs independently from the others. A task can terminate its plan and run a different plan without fully breaking its connection to DB2.
CAF does not generate task structures, nor does it provide attention processing exits or functional recovery routines. You can provide whatever attention handling and functional recovery your application needs, but you must use ESTAE/ESTAI type recovery routines and not Enabled Unlocked Task (EUT) FRR routines.
Using multiple simultaneous connections can increase the possibility of deadlocks and DB2 resource contention. Your application design must consider that possibility.
Programming language
You can write CAF applications in assembler language, C, COBOL, Fortran, and PL/I. When choosing a language to code your application in, consider these restrictions: v If you need to use z/OS macros (ATTACH, WAIT, POST, and so on), you must choose a programming language that supports them or else embed them in modules written in assembler language. v The CAF TRANSLATE function is not available from Fortran. To use the function, code it in a routine written in another language, and then call that routine from Fortran. You can find a sample assembler program (DSN8CA) and a sample COBOL program (DSN8CC) that use the call attachment facility in library prefix.SDSNSAMP. A PL/I application (DSN8SPM) calls DSN8CA, and a COBOL application (DSN8SCM) calls DSN8CC. For more information about the sample applications and on accessing the source code, see Appendix B, Sample applications, on page 1013.
Tracing facility
A tracing facility provides diagnostic messages that aid in debugging programs and diagnosing errors in the CAF code. In particular, attempts to use CAF incorrectly cause error messages in the trace stream.
Program preparation
Preparing your application program to run in CAF is similar to preparing it to run in other environments, such as CICS, IMS, and TSO. You can prepare a CAF application either in the batch environment or by using the DB2 program preparation process. You can use the program preparation system either through
DB2I or through the DSNH CLIST. For examples and guidance in program preparation, see Chapter 21, Preparing an application program to run, on page 471.
CAF requirements
When you write programs that use CAF, be aware of the following characteristics.
Program size
The CAF code requires about 16 KB of virtual storage per address space and an additional 10 KB for each TCB using CAF.
Use of LOAD
CAF uses z/OS SVC LOAD to load two modules as part of the initialization following your first service request. Both modules are loaded into fetch-protected storage that has the job-step protection key. If your local environment intercepts and replaces the LOAD SVC, you must ensure that your version of LOAD manages the load list element (LLE) and contents directory entry (CDE) chains like the standard z/OS LOAD macro.
Run environment
Applications requesting DB2 services must adhere to several run environment characteristics. Those characteristics must be in effect regardless of the attachment facility you use. They are not unique to CAF.
v The application must be running in TCB mode. SRB mode is not supported.
v An application task cannot have any EUT FRRs active when requesting DB2 services. If an EUT FRR is active, the DB2 functional recovery can fail, and your application can receive some unpredictable abends.
v Different attachment facilities cannot be active concurrently within the same address space. Therefore:
  - An application must not use CAF in a CICS or IMS address space.
  - An application that runs in an address space that has a CAF connection to DB2 cannot connect to DB2 using RRSAF.
  - An application that runs in an address space that has an RRSAF connection to DB2 cannot connect to DB2 using CAF.
  - An application cannot invoke the z/OS AXSET macro after executing the CAF CONNECT call and before executing the CAF DISCONNECT call.
v One attachment facility cannot start another. This means that your CAF application cannot use DSN, and a DSN RUN subcommand cannot call your CAF application.
v The language interface module for CAF, DSNALI, is shipped with the linkage attributes AMODE(31) and RMODE(ANY). If your applications load CAF below the 16-MB line, you must link-edit DSNALI again.
be the same as the member name of the database request module (DBRM) that DB2 produced when you precompiled the source program that contains the first SQL call. You must also substitute the DSNALI language interface module for the TSO language interface module, DSNELI. Running DSN applications with CAF is not advantageous, and the loss of DSN services can affect how well your program runs. In general, running DSN applications with CAF is not recommended unless you provide an application controller to manage the DSN application and replace any needed DSN functions. Even then, you could have to change the application to communicate connection failures to the controller correctly.
[Figure: Sample CAF program structure. The application LOADs DSNALI, DSNHLI2, and DSNWLI2, and issues CALL DSNALI for the CONNECT, OPEN, CLOSE, and DISCONNECT functions, CALL DSNHLI for SQL calls, and CALL DSNWLI for IFI calls. A dummy DSNHLI entry point in the application transfers SQL calls to the real CAF SQL entry point DSNHLI2, and a dummy DSNWLI entry point transfers IFI calls to DSNWLI2; DSNALI communicates with DB2.]
The remainder of this chapter discusses:
v Summary of connection functions on page 866
v Sample scenarios on page 882
v Exit routines from your application on page 883
v Error messages and DSNTRACE on page 884
v Program examples for CAF on page 885.
Implicit connections
If you do not explicitly specify executable SQL statements in a CALL DSNALI statement of your CAF application, CAF initiates implicit CONNECT and OPEN requests to DB2. Although CAF performs these connection requests using the following default values, the requests are subject to the same DB2 return codes and reason codes as explicitly specified requests.
Implicit connections use the following defaults:
Subsystem name
  The default name specified in the module DSNHDECP. CAF uses the installation default DSNHDECP, unless your own DSNHDECP is in a library in a STEPLIB or JOBLIB concatenation, or in the link list. In a data sharing group, the default subsystem name is the group attachment name.
Plan name
  The member name of the database request module (DBRM) that DB2 produced when you precompiled the source program that contains the first SQL call. If your program can make its first SQL call from different modules with different DBRMs, you cannot use a default plan name; you must use an explicit call using the OPEN function.
  If your application includes both SQL and IFI calls, you must issue at least one SQL call before you issue any IFI calls. This ensures that your application uses the correct plan.
Different types of implicit connections exist. The simplest is for your application to issue neither CONNECT nor OPEN. You can also use CONNECT only or OPEN only. Each of these implicitly connects your application to DB2. To terminate an implicit connection, you must use the proper calls. See Table 141 on page 882 for details.
Your application program must successfully connect, either implicitly or explicitly, to DB2 before it can execute any SQL calls to the CAF DSNHLI entry point. Therefore, the application program must first determine the success or failure of all implicit connection requests.
For implicit connection requests, register 15 contains the return code, and register 0 contains the reason code. The return code and reason code are also in the message text for SQLCODE -991. The application program should examine the return and reason codes immediately after the first executable SQL statement within the application program. Two ways to do this are to:
v Examine registers 0 and 15 directly.
v Examine the SQLCA, and if the SQLCODE is -991, obtain the return and reason code from the message text. The return code is the first token, and the reason code is the second token.
If the implicit connection was successful, the application can examine the SQLCODE for the first, and subsequent, SQL statements.
If you do not specify the precompiler option ATTACH, the DB2 precompiler generates calls to entry point DSNHLI for each SQL request. The precompiler does not know and is independent of the different DB2 attachment facilities. When the calls generated by the DB2 precompiler pass control to DSNHLI, your code corresponding to the dummy entry point must preserve the option list passed in R1 and call DSNHLI2 specifying the same option list. For a coding example of a dummy DSNHLI entry point, see Using dummy entry point DSNHLI for CAF on page 890.
Link-editing DSNALI
You can include the CAF language interface module DSNALI in your load module during a link-edit step. The module must be in a load module library, which is included either in the SYSLIB concatenation or another INCLUDE library defined in the linkage editor JCL. Because all language interface modules contain an entry point declaration for DSNHLI, the linkage editor JCL must contain an INCLUDE linkage editor control statement for DSNALI; for example, INCLUDE DB2LIB(DSNALI). By coding these options, you avoid inadvertently picking up the wrong language interface module. If you do not need explicit calls to DSNALI for CAF functions, including DSNALI in your load module has some advantages. When you include DSNALI during the link-edit, you need not code the previously described dummy DSNHLI entry point in your program or specify the precompiler option ATTACH. Module DSNALI contains an entry point for DSNHLI, which is identical to DSNHLI2, and an entry point DSNWLI, which is identical to DSNWLI2. A disadvantage to link-editing DSNALI into your load module is that any IBM maintenance to DSNALI requires a new link-edit of your load module.
Task termination
If a connected task terminates normally before the CLOSE function deallocates the plan, DB2 commits any database changes that the thread made since the last
commit point. If a connected task abends before the CLOSE function deallocates the plan, DB2 rolls back any database changes since the last commit point. In either case, DB2 deallocates the plan, if necessary, and terminates the task's connection before it allows the task to terminate.
DB2 abend
If DB2 abends while an application is running, the application is rolled back to the last commit point. If DB2 terminates while processing a commit request, DB2 either commits or rolls back any changes at the next restart. The action taken depends on the state of the commit request when DB2 terminates.
Register conventions
If you do not specify the return code and reason code parameters in your CAF calls, CAF puts a return code in register 15 and a reason code in register 0. CAF also supports high-level languages that cannot interrogate individual registers. See Figure 243 on page 870 and the discussion following it for more information. The contents of registers 2 through 14 are preserved across calls. You must conform to the standard calling conventions listed in Table 134:
Table 134. Standard usage of registers R1 and R13-R15

  Register  Usage
  R1        Parameter list pointer (for details, see Call DSNALI parameter list)
  R13       Address of caller's save area
  R14       Caller's return address
  R15       CAF entry point address
coding a CONNECT call in a COBOL program. You want to specify all parameters except the return code parameter. Write the call in this way:
CALL 'DSNALI' USING FUNCTN SSID TECB SECB RIBPTR
  BY CONTENT ZERO BY REFERENCE REASCODE SRDURA EIBPTR.
For an assembler language call, code a comma for a parameter in the CALL DSNALI statement when you want to use the default value for that parameter but specify subsequent parameters. For example, code a CONNECT call like this to specify all optional parameters except the return code parameter:
CALL DSNALI,(FUNCTN,SSID,TERMECB,STARTECB,RIBPTR,,REASCODE,SRDURA,EIBPTR,GROUPOVERRIDE)
Figure 243 illustrates how you can use the indicator end of parameter list to control the return codes and reason code fields following a CAF CONNECT call. Each of the six illustrated termination points applies to all CAF parameter lists:
1. Terminates the parameter list without specifying the parameters retcode, reascode, and srdura, and places the return code in register 15 and the reason code in register 0.
   Terminating at this point ensures compatibility with CAF programs that require a return code in register 15 and a reason code in register 0.
2. Terminates the parameter list after the parameter retcode, and places the return code in the parameter list and the reason code in register 0.
   Terminating at this point permits the application program to take action, based on the return code, without further examination of the associated reason code.
3. Terminates the parameter list after the parameter reascode, and places the return code and the reason code in the parameter list.
   Terminating at this point provides support to high-level languages that are unable to examine the contents of individual registers. If you code your CAF application in assembler language, you can specify the reason code parameter and omit the return code parameter. To do this, specify a comma as a place-holder for the omitted return code parameter.
4. Terminates the parameter list after the parameter srdura. If you code your CAF application in assembler language, you can specify this parameter and omit the retcode and reascode parameters. To do this, specify commas as place-holders for the omitted parameters.
5. Terminates the parameter list after the parameter eibptr. If you code your CAF application in assembler language, you can specify this parameter and omit the retcode, reascode, or srdura parameters. To do this, specify commas as place-holders for the omitted parameters.
6. Terminates the parameter list after the parameter groupoverride. If you code your CAF application in assembler language, you can specify this parameter and omit the retcode, reascode, srdura, or eibptr parameters. To do this, specify commas as place-holders for the omitted parameters.
Even if you specify that the return code be placed in the parameter list, it is also placed in register 15 to accommodate high-level languages that support special return code processing.
Parameters point to the following areas: function A 12-byte area containing CONNECT followed by five blanks. ssnm A 4-byte DB2 subsystem name or group attachment name (if used in a data sharing group) to which the connection is made. If you specify the group attachment name, the program connects to the DB2 on the z/OS system on which the program is running. When you specify a group
attachment name and a startup ECB, DB2 ignores the startup ECB. If you need to use a startup ECB, specify a subsystem name, rather than a group attachment name. That subsystem name must be different from the group attachment name. If your ssnm is less than four characters long, pad it on the right with blanks to a length of four characters.
termecb
  The application's event control block (ECB) for DB2 termination. DB2 posts this ECB when the operator enters the STOP DB2 command or when DB2 is abnormally terminating. It indicates the type of termination by a POST code, as shown in Table 135:
Table 135. POST codes and related termination types

  POST code  Termination type
  8          QUIESCE
  12         FORCE
  16         ABTERM
Before you check termecb in your CAF application program, first check the return code and reason code from the CONNECT call to ensure that the call completed successfully. See Checking return codes and reason codes for CAF on page 888 for more information.
startecb
  The application's startup ECB. If DB2 has not yet started when the application issues the call, DB2 posts the ECB when it successfully completes its startup processing. DB2 posts at most one startup ECB per address space. The ECB is the one associated with the most recent CONNECT call from that address space. Your application program must examine any nonzero CAF/DB2 reason codes before issuing a WAIT on this ECB.
  If ssnm is a group attachment name, the first DB2 subsystem that starts on the local z/OS system and matches the specified group attachment name posts the ECB.
ribptr
  A 4-byte area in which CAF places the address of the release information block (RIB) after the call. You can determine what release level of DB2 you are currently running by examining field RIBREL. You can determine the modification level within the release level by examining fields RIBCNUMB and RIBCINFO. If the value in RIBCNUMB is greater than zero, check RIBCINFO for modification levels.
  If the RIB is not available (for example, if you name a subsystem that does not exist), DB2 sets the 4-byte area to zeros.
  The area to which ribptr points is below the 16-MB line. Your program does not have to use the release information block, but it cannot omit the ribptr parameter.
  Macro DSNDRIB maps the release information block (RIB). It can be found in prefix.SDSNMACS(DSNDRIB).
retcode
  A 4-byte area in which CAF places the return code.
   This field is optional. If not specified, CAF places the return code in register 15 and the reason code in register 0.

reascode
   A 4-byte area in which CAF places a reason code. If not specified, CAF places the reason code in register 0. This field is optional. If specified, you must also specify retcode.

srdura
   A 10-byte area containing the string SRDURA(CD). This field is optional. If it is provided, the value in the CURRENT DEGREE special register stays in effect from CONNECT until DISCONNECT. If it is not provided, the value in the CURRENT DEGREE special register stays in effect from OPEN until CLOSE.
   If you specify this parameter in any language except assembler, you must also specify the return code and reason code parameters. In assembler language, you can omit the return code and reason code parameters by specifying commas as place-holders.

eibptr
   A 4-byte area in which CAF puts the address of the environment information block (EIB). The EIB contains information that you can use if you are connecting to a DB2 subsystem that is part of a data sharing group. For example, you can determine the name of the data sharing group, the member to which you are connecting, and whether the subsystem is in new-function mode. If the DB2 subsystem that you connect to is not part of a data sharing group, then the fields in the EIB that are related to data sharing are blank. If the EIB is not available (for example, if you name a subsystem that does not exist), DB2 sets the 4-byte area to zeros.
   The area to which eibptr points is below the 16-MB line.
   You can omit this parameter when you make a CONNECT call.
   If you specify this parameter in any language except assembler, you must also specify the return code, reason code, and srdura parameters. In assembler language, you can omit the return code, reason code, and srdura parameters by specifying commas as place-holders.
   Macro DSNDEIB maps the EIB. It can be found in prefix.SDSNMACS(DSNDEIB).

groupoverride
   An 8-byte area that the application provides. This field is optional. If this field is provided, it contains the string NOGROUP. This string indicates that the subsystem name that is specified by ssnm is to be used as a DB2 subsystem name, even if ssnm matches a group attachment name. If groupoverride is not provided, ssnm is used as the group attachment name if it matches a group attachment name.
   If you specify this parameter in any language except assembler, you must also specify the return code, reason code, srdura, and eibptr parameters. In assembler language, you can omit the return code, reason code, srdura, and eibptr parameters by specifying commas as place-holders.

Usage: CONNECT establishes the caller's task as a user of DB2 services. If no other task in the address space currently holds a connection with the subsystem named by ssnm, CONNECT also initializes the address space for communication to the DB2 address spaces. CONNECT establishes the address space's cross-memory authorization to DB2 and builds address space control blocks.
In a data sharing environment, use the groupoverride parameter on a CONNECT call when you want to connect to a specific member of a data sharing group, and the subsystem name of that member is the same as the group attachment name. In general, using the groupoverride parameter is not desirable because it limits the ability to do dynamic workload routing in a Parallel Sysplex.

Using a CONNECT call is optional. The first request from a task, either OPEN, or an SQL or IFI call, causes CAF to issue an implicit CONNECT request. If a task is connected implicitly, the connection to DB2 is terminated either when you execute CLOSE or when the task terminates.

Establishing task and address space level connections is essentially an initialization function and involves significant overhead. If you use CONNECT to establish a task connection explicitly, it terminates when you use DISCONNECT or when the task terminates. The explicit connection minimizes the overhead by ensuring that the connection to DB2 remains after CLOSE deallocates a plan.

You can run CONNECT from any or all tasks in the address space, but the address space level is initialized only once when the first task connects.

If a task does not issue an explicit CONNECT or OPEN, the implicit connection from the first SQL or IFI call specifies a default DB2 subsystem name. A systems programmer or administrator determines the default subsystem name when installing DB2. Be certain that you know what the default name is and that it names the specific DB2 subsystem you want to use.

Practically speaking, you must not mix explicit CONNECT and OPEN requests with implicitly established connections in the same address space. Either explicitly specify which DB2 subsystem you want to use or allow all requests to use the default subsystem.

Use CONNECT when:
v You need to specify a particular (non-default) subsystem name (ssnm).
v You need the value of the CURRENT DEGREE special register to last as long as the connection (srdura).
v You need to monitor the DB2 startup ECB (startecb), the DB2 termination ECB (termecb), or the DB2 release level.
v Multiple tasks in the address space will be opening and closing plans.
v A single task in the address space will be opening and closing plans more than once.

The other parameters of CONNECT enable the caller to learn:
v That the operator has issued a STOP DB2 command. When this happens, DB2 posts the termination ECB, termecb. Your application can either wait on or just look at the ECB.
v That DB2 is abnormally terminating. When this happens, DB2 posts the termination ECB, termecb.
v That DB2 is available again (after a connection attempt that failed because DB2 was down). Wait on or look at the startup ECB, startecb. DB2 ignores this ECB if it was active at the time of the CONNECT request, or if the CONNECT request was to a group attachment name.
v The current release level of DB2. Access the RIBREL field in the release information block (RIB).
Do not issue CONNECT requests from a TCB that already has an active DB2 connection. (See Summary of CAF behavior on page 881 and Error messages and dsntrace on page 884 for more information about CAF errors.) Table 136 shows a CONNECT call in each language.
Table 136. Examples of CAF CONNECT calls

Language    Call example
Assembler   CALL DSNALI,(FUNCTN,SSID,TERMECB,STARTECB,RIBPTR,RETCODE,REASCODE,SRDURA,EIBPTR,GRPOVER)
C           fnret=dsnali(&functn[0],&ssid[0],&tecb,&secb,&ribptr,&retcode,&reascode,&srdura[0],&eibptr,&grpover[0]);
COBOL       CALL 'DSNALI' USING FUNCTN SSID TERMECB STARTECB RIBPTR RETCODE REASCODE SRDURA EIBPTR GRPOVER.
Fortran     CALL DSNALI(FUNCTN,SSID,TERMECB,STARTECB,RIBPTR,RETCODE,REASCODE,SRDURA,EIBPTR,GRPOVER)
PL/I        CALL DSNALI(FUNCTN,SSID,TERMECB,STARTECB,RIBPTR,RETCODE,REASCODE,SRDURA,EIBPTR,GRPOVER);
Note: DSNALI is an assembler language program; therefore, the following compiler directives must be included in your C and PL/I applications:

C       #pragma linkage(dsnali, OS)
C++     extern "OS" { int DSNALI( char * functn, ...); }
PL/I    DCL DSNALI ENTRY OPTIONS(ASM,INTER,RETCODE);
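For example, a C application might declare the CONNECT parameter areas and issue the call as shown in the following sketch. The declarations and the subsystem name DSN1 are only illustrations (use your own names and your installation's subsystem name); the call form and the #pragma linkage directive are the ones shown above.

    #pragma linkage(dsnali, OS)        /* DSNALI uses OS linkage           */
    #include <string.h>

    char  functn[12];                  /* CAF function name, blank-padded  */
    char  ssid[4];                     /* DB2 subsystem name               */
    int   tecb = 0;                    /* DB2 termination ECB              */
    int   secb = 0;                    /* DB2 startup ECB                  */
    char *ribptr;                      /* CAF returns the RIB address here */
    int   retcode;                     /* CAF return code                  */
    int   reascode;                    /* CAF reason code                  */
    char  srdura[10];                  /* optional SRDURA(CD) string       */
    char *eibptr;                      /* CAF returns the EIB address here */
    char  grpover[8];                  /* optional NOGROUP string          */
    int   fnret;

    memcpy(functn,  "CONNECT     ", 12);  /* CONNECT followed by 5 blanks  */
    memcpy(ssid,    "DSN1",          4);  /* pad short names with blanks   */
    memcpy(srdura,  "SRDURA(CD)",   10);  /* keep CURRENT DEGREE setting   */
    memcpy(grpover, "NOGROUP ",      8);  /* treat ssid as a subsystem     */
    fnret = dsnali(&functn[0], &ssid[0], &tecb, &secb, &ribptr,
                   &retcode, &reascode, &srdura[0], &eibptr, &grpover[0]);
    /* Check retcode and reascode before issuing OPEN or SQL calls.        */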
Parameters point to the following areas:

function
   A 12-byte area containing the word OPEN followed by eight blanks.
ssnm
   A 4-byte DB2 subsystem name or group attachment name (if used in a data sharing group). If the task does not already have a connection, OPEN establishes a connection from the task to the DB2 subsystem named in ssnm. If your ssnm is less than four characters long, pad it on the right with blanks to a length of four characters.

plan
   An 8-byte DB2 plan name.

retcode
   A 4-byte area in which CAF places the return code. This field is optional. If not specified, CAF places the return code in register 15 and the reason code in register 0.

reascode
   A 4-byte area in which CAF places a reason code. If not specified, CAF places the reason code in register 0. This field is optional. If specified, you must also specify retcode.

groupoverride
   An 8-byte area that the application provides. This field is optional. If this field is provided, it contains the string NOGROUP. This string indicates that the subsystem name that is specified by ssnm is to be used as a DB2 subsystem name, even if ssnm matches a group attachment name. If groupoverride is not provided, ssnm is used as the group attachment name if it matches a group attachment name. If you specify this parameter in any language except assembler, you must also specify the return code and reason code parameters. In assembler language, you can omit the return code and reason code parameters by specifying commas as place-holders.

Usage: OPEN allocates DB2 resources needed to run the plan or issue IFI requests. If the requesting task does not already have a connection to the named DB2 subsystem, then OPEN establishes it.

OPEN allocates the plan to the DB2 subsystem named in ssnm. The ssnm parameter, like the others, is required, even if the task issues a CONNECT call. If a task issues CONNECT followed by OPEN, then the subsystem names for both calls must be the same.

In a data sharing environment, use the groupoverride parameter on an OPEN call when you want to connect to a specific member of a data sharing group, and the subsystem name of that member is the same as the group attachment name. In general, using the groupoverride parameter is not desirable because it limits the ability to do dynamic workload routing in a Parallel Sysplex.

The use of OPEN is optional. If you do not use OPEN, the action of OPEN occurs on the first SQL or IFI call from the task, using the defaults listed under Implicit connections on page 866.

Do not use OPEN if the task already has a plan allocated.

Table 137 shows an OPEN call in each language.
Table 137. Examples of CAF OPEN calls

Language    Call example
Assembler   CALL DSNALI,(FUNCTN,SSID,PLANNAME,RETCODE,REASCODE,GRPOVER)
C           fnret=dsnali(&functn[0],&ssid[0],&planname[0],&retcode,&reascode,&grpover[0]);
COBOL       CALL 'DSNALI' USING FUNCTN SSID PLANNAME RETCODE REASCODE GRPOVER.
Fortran     CALL DSNALI(FUNCTN,SSID,PLANNAME,RETCODE,REASCODE,GRPOVER)
PL/I        CALL DSNALI(FUNCTN,SSID,PLANNAME,RETCODE,REASCODE,GRPOVER);
Note: DSNALI is an assembler language program; therefore, the following compiler directives must be included in your C and PL/I applications:

C       #pragma linkage(dsnali, OS)
C++     extern "OS" { int DSNALI( char * functn, ...); }
PL/I    DCL DSNALI ENTRY OPTIONS(ASM,INTER,RETCODE);
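Continuing the C sketch that is shown for CONNECT, an OPEN call might look like the following example. The plan name SAMPLPLN is a placeholder; supply the name of your own plan.

    char  planname[8];

    memcpy(functn,   "OPEN        ", 12);   /* OPEN followed by 8 blanks   */
    memcpy(planname, "SAMPLPLN",      8);   /* 8-byte plan name            */
    fnret = dsnali(&functn[0], &ssid[0], &planname[0],
                   &retcode, &reascode, &grpover[0]);
    if (retcode != 0) {
      /* Plan allocation failed; examine reascode (TRANSLATE can map       */
      /* X'00F3' reason codes to an SQLCODE in the SQLCA).                 */
    }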
Parameters point to the following areas:

function
   A 12-byte area containing the word CLOSE followed by seven blanks.

termop
   A 4-byte terminate option, with one of these values:
   SYNC   Commit any modified data.
   ABRT   Roll back data to the previous commit point.
retcode
   A 4-byte area in which CAF should place the return code. This field is optional. If not specified, CAF places the return code in register 15 and the reason code in register 0.

reascode
   A 4-byte area in which CAF places a reason code. If not specified, CAF places the reason code in register 0. This field is optional. If specified, you must also specify retcode.

Usage: CLOSE deallocates the plan that was created either explicitly by OPEN or implicitly at the first SQL call.
If you did not issue a CONNECT for the task, CLOSE also deletes the task's connection to DB2. If no other task in the address space has an active connection to DB2, DB2 also deletes the control block structures created for the address space and removes the cross-memory authorization.

Do not use CLOSE when your current task does not have a plan allocated.

Using CLOSE is optional. If you omit it, DB2 performs the same actions when your task terminates, using the SYNC parameter if termination is normal and the ABRT parameter if termination is abnormal. (The function is an implicit CLOSE.) If the objective is to shut down your application, you can improve shutdown performance by using CLOSE explicitly before the task terminates.

If you want to use a new plan, you must issue an explicit CLOSE, followed by an OPEN, specifying the new plan name.

If DB2 terminates, a task that did not issue CONNECT should explicitly issue CLOSE, so that CAF can reset its control blocks to allow for future connections. This CLOSE returns the reset accomplished return code (+004) and reason code X'00C10824'. If you omit CLOSE, then when DB2 is back on line, the task's next connection request fails. You get either the message YOUR TCB DOES NOT HAVE A CONNECTION, with X'00F30018' in register 0, or CAF error message DSNA201I or DSNA202I, depending on what your application tried to do. The task must then issue CLOSE before it can reconnect to DB2.

A task that issued CONNECT explicitly should issue DISCONNECT to cause CAF to reset its control blocks when DB2 terminates. In this case, CLOSE is not necessary.

Table 138 shows a CLOSE call in each language.
Table 138. Examples of CAF CLOSE calls

Language    Call example
Assembler   CALL DSNALI,(FUNCTN,TERMOP,RETCODE,REASCODE)
C           fnret=dsnali(&functn[0],&termop[0],&retcode,&reascode);
COBOL       CALL 'DSNALI' USING FUNCTN TERMOP RETCODE REASCODE.
Fortran     CALL DSNALI(FUNCTN,TERMOP,RETCODE,REASCODE)
PL/I        CALL DSNALI(FUNCTN,TERMOP,RETCODE,REASCODE);
Note: DSNALI is an assembler language program; therefore, the following compiler directives must be included in your C and PL/I applications:

C       #pragma linkage(dsnali, OS)
C++     extern "OS" { int DSNALI( char * functn, ...); }
PL/I    DCL DSNALI ENTRY OPTIONS(ASM,INTER,RETCODE);
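The choice between SYNC and ABRT usually depends on how the application's SQL work ended. The following C sketch mirrors the logic of the assembler sample later in this chapter: it commits when the final SQLCODE is 0 or +100 and rolls back otherwise. It reuses the declarations from the earlier CONNECT sketch and assumes an embedded-SQL C program in which the SQLCA has been included.

    EXEC SQL INCLUDE SQLCA;                 /* declares the sqlca structure */

    char termop[4];

    if (sqlca.sqlcode == 0 || sqlca.sqlcode == 100)
      memcpy(termop, "SYNC", 4);            /* commit modified data         */
    else
      memcpy(termop, "ABRT", 4);            /* roll back to last commit     */
    memcpy(functn, "CLOSE       ", 12);     /* CLOSE followed by 7 blanks   */
    fnret = dsnali(&functn[0], &termop[0], &retcode, &reascode);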
DSNALI DISCONNECT function shows the syntax for the DISCONNECT function.
Parameters point to the following areas:

function
   A 12-byte area containing the word DISCONNECT followed by two blanks.

retcode
   A 4-byte area in which CAF places the return code. This field is optional. If not specified, CAF places the return code in register 15 and the reason code in register 0.

reascode
   A 4-byte area in which CAF places a reason code. If not specified, CAF places the reason code in register 0. This field is optional. If specified, you must also specify retcode.

Usage: DISCONNECT removes the calling task's connection to DB2. If no other task in the address space has an active connection to DB2, DB2 also deletes the control block structures created for the address space and removes the cross-memory authorization.

Only those tasks that issued CONNECT explicitly can issue DISCONNECT. If CONNECT was not used, then DISCONNECT causes an error.

If an OPEN is in effect when the DISCONNECT is issued (that is, a plan is allocated), CAF issues an implicit CLOSE with the SYNC parameter.

Using DISCONNECT is optional. Without it, DB2 performs the same functions when the task terminates. (The function is an implicit DISCONNECT.) If the objective is to shut down your application, you can improve shutdown performance if you request DISCONNECT explicitly before the task terminates.

If DB2 terminates, a task that issued CONNECT must issue DISCONNECT to reset the CAF control blocks. The function returns the reset accomplished return codes and reason codes (+004 and X'00C10824'), and ensures that future connection requests from the task work when DB2 is back on line.

A task that did not issue CONNECT explicitly must issue CLOSE to reset the CAF control blocks when DB2 terminates.

Table 139 shows a DISCONNECT call in each language.
Table 139. Examples of CAF DISCONNECT calls

Language    Call example
Assembler   CALL DSNALI,(FUNCTN,RETCODE,REASCODE)
C           fnret=dsnali(&functn[0],&retcode,&reascode);
COBOL       CALL 'DSNALI' USING FUNCTN RETCODE REASCODE.
Fortran     CALL DSNALI(FUNCTN,RETCODE,REASCODE)
PL/I        CALL DSNALI(FUNCTN,RETCODE,REASCODE);
Note: DSNALI is an assembler language program; therefore, the following compiler directives must be included in your C and PL/I applications:

C       #pragma linkage(dsnali, OS)
C++     extern "OS" { int DSNALI( char * functn, ...); }
PL/I    DCL DSNALI ENTRY OPTIONS(ASM,INTER,RETCODE);
Parameters point to the following areas:

function
   A 12-byte area containing the word TRANSLATE followed by three blanks.

sqlca
   The program's SQL communication area (SQLCA).

retcode
   A 4-byte area in which CAF places the return code. This field is optional. If not specified, CAF places the return code in register 15 and the reason code in register 0.

reascode
   A 4-byte area in which CAF places a reason code. If not specified, CAF places the reason code in register 0.
   This field is optional. If specified, you must also specify retcode.

Usage: Use TRANSLATE to get a corresponding SQL error code and message text for the DB2 error reason codes that CAF returns in register 0 following an OPEN service request. DB2 places the information into the SQLCODE and SQLSTATE host variables or related fields of the SQLCA.

The TRANSLATE function can translate those codes beginning with X'00F3', but it does not translate CAF reason codes beginning with X'00C1'. If you receive error reason code X'00F30040' (resource unavailable) after an OPEN request, TRANSLATE returns the name of the unavailable database object in the last 44 characters of field SQLERRM. If the DB2 TRANSLATE function does not recognize the error reason code, it returns SQLCODE -924 (SQLSTATE 58006) and places a printable copy of the original DB2 function code and the return and error reason codes in the SQLERRM field. The contents of registers 0 and 15 do not change unless TRANSLATE fails, in which case register 0 is set to X'00C10205' and register 15 to 200.

Table 140 shows a TRANSLATE call in each language.
Table 140. Examples of CAF TRANSLATE calls

Language    Call example
Assembler   CALL DSNALI,(FUNCTN,SQLCA,RETCODE,REASCODE)
C           fnret=dsnali(&functn[0],&sqlca,&retcode,&reascode);
COBOL       CALL 'DSNALI' USING FUNCTN SQLCA RETCODE REASCODE.
PL/I        CALL DSNALI(FUNCTN,SQLCA,RETCODE,REASCODE);
Note: DSNALI is an assembler language program; therefore, the following compiler directives must be included in your C and PL/I applications:

C       #pragma linkage(dsnali, OS)
C++     extern "OS" { int DSNALI( char * functn, ...); }
PL/I    DCL DSNALI ENTRY OPTIONS(ASM,INTER,RETCODE);
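For example, after a failed OPEN a C application might translate the reason code and display the message text, as in the following sketch. It continues the earlier C sketches and assumes an embedded-SQL program in which the SQLCA has been included; remember that only reason codes that begin with X'00F3' can be translated.

    #include <stdio.h>

    if (retcode != 0 && (reascode & 0xFFFF0000) == 0x00F30000) {
      memcpy(functn, "TRANSLATE   ", 12);   /* TRANSLATE followed by 3 blanks */
      fnret = dsnali(&functn[0], &sqlca, &retcode, &reascode);
      /* SQLCODE and SQLSTATE now describe the original failure; for          */
      /* X'00F30040', the last 44 bytes of SQLERRM name the unavailable       */
      /* object.                                                              */
      printf("SQLCODE %ld: %.*s\n", (long) sqlca.sqlcode,
             sqlca.sqlerrml, sqlca.sqlerrmc);
    }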
Table 141. Effects of CAF calls, as dependent on connection history

Previous function                      Next function
                                       CONNECT     OPEN        SQL                                              CLOSE      DISCONNECT   TRANSLATE
Empty: first call                      CONNECT     OPEN        CONNECT, OPEN, followed by the SQL or IFI call   Error 203  Error 204    Error 205
CONNECT                                Error 201   OPEN        OPEN, followed by the SQL or IFI call            Error 203  DISCONNECT   TRANSLATE
CONNECT followed by OPEN               Error 201   Error 202   The SQL or IFI call                              CLOSE(1)   DISCONNECT   TRANSLATE
CONNECT followed by SQL or IFI call    Error 201   Error 202   The SQL or IFI call                              CLOSE(1)   DISCONNECT   TRANSLATE
OPEN                                   Error 201   Error 202   The SQL or IFI call                              CLOSE(1)   Error 204    TRANSLATE
SQL or IFI call                        Error 201   Error 202   The SQL or IFI call                              CLOSE(1)   Error 204    TRANSLATE(2)

Notes:
1. The task and address space connections remain active. If CLOSE fails because DB2 was down, then the CAF control blocks are reset, the function produces return code 4 and reason code X'00C10824', and CAF is ready for more connection requests when DB2 is again on line.
2. A TRANSLATE request is accepted, but in this case it is redundant. CAF automatically issues a TRANSLATE request when an SQL or IFI request fails.
Table 141 uses the following conventions:
v The top row lists the possible CAF functions that programs can use as their call.
v The first column lists the task's most recent history of connection requests. For example, CONNECT followed by OPEN means that the task issued CONNECT and then OPEN with no other CAF calls in between.
v The intersection of a row and column shows the effect of the next call if it follows the corresponding connection history. For example, if the call is OPEN and the connection history is CONNECT, the effect is OPEN: the OPEN function is performed. If the call is SQL and the connection history is empty (meaning that the SQL call is the first CAF function the program issues), the effect is that an implicit CONNECT and OPEN function is performed, followed by the SQL function.
Sample scenarios
This section shows sample scenarios for connecting tasks to DB2.
When the task terminates:
v Any database changes are committed (if termination was normal) or rolled back (if termination was abnormal).
v The active plan and all database resources are deallocated.
v The task and address space connections to DB2 are terminated.
A task can have a connection to one and only one DB2 subsystem at any point in time. A CAF error occurs if the subsystem name on OPEN does not match the one on CONNECT. To switch to a different subsystem, the application must disconnect from the current subsystem, then issue a connect request specifying a new subsystem name.
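For a task that connected explicitly, one possible C sequence for switching subsystems is sketched below. It reuses the declarations from the earlier CONNECT sketch; DSN2 is a placeholder for the name of the other subsystem, and the CONNECT call here ends its parameter list after reascode, which the CONNECT function allows.

    char termop[4];

    memcpy(functn, "CLOSE       ", 12);     /* deallocate the current plan  */
    memcpy(termop, "SYNC", 4);              /* commit outstanding work      */
    fnret = dsnali(&functn[0], &termop[0], &retcode, &reascode);

    memcpy(functn, "DISCONNECT  ", 12);     /* break the task's connection  */
    fnret = dsnali(&functn[0], &retcode, &reascode);

    memcpy(functn, "CONNECT     ", 12);     /* connect to the new subsystem */
    memcpy(ssid,   "DSN2", 4);
    fnret = dsnali(&functn[0], &ssid[0], &tecb, &secb, &ribptr,
                   &retcode, &reascode);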
Several tasks
In this scenario, multiple tasks within the address space are using DB2 services. Each task must explicitly specify the same subsystem name on either the CONNECT or OPEN function request. Task 1 makes no SQL or IFI calls. Its purpose is to monitor the DB2 termination and startup ECBs, and to check the DB2 release level.
TASK 1        TASK 2        TASK 3        TASK n
CONNECT
              OPEN          OPEN          OPEN
              SQL           SQL           SQL
              ...           ...           ...
              CLOSE         CLOSE         CLOSE
              OPEN          OPEN          OPEN
              SQL           SQL           SQL
              ...           ...           ...
              CLOSE         CLOSE         CLOSE
DISCONNECT
The call attachment facility has no attention exit routines. You can provide your own if necessary. However, DB2 uses enabled unlocked task (EUT) functional recovery routines (FRRs), so if you request attention while DB2 code is running, your routine may not get control.
Recovery routines
The call attachment facility has no abend recovery routines. Your program can provide an abend exit routine. It must use tracking indicators to determine if an abend occurred during DB2 processing. If an abend occurs while DB2 has control, you have these choices:
v Allow task termination to complete. Do not retry the program. DB2 detects task termination and terminates the thread with the ABRT parameter. You lose all database changes back to the last SYNC or COMMIT point. This is the only action that you can take for abends that CANCEL or DETACH cause. You cannot use additional SQL statements at this point. If you attempt to execute another SQL statement from the application program or its recovery routine, a return code of +256 and a reason code of X'00F30083' occurs.
v In an ESTAE routine, issue CLOSE with the ABRT parameter followed by DISCONNECT. The ESTAE exit routine can retry so that you do not need to reinstate the application task.

Standard z/OS functional recovery routines (FRRs) can cover only code running in service request block (SRB) mode. Because DB2 does not support calls from SRB mode routines, you can use only enabled unlocked task (EUT) FRRs in your routines that call DB2.

Do not have an EUT FRR active when using CAF, processing SQL requests, or calling IFI. An EUT FRR can be active, but it cannot retry failing DB2 requests. An EUT FRR retry bypasses DB2's ESTAE routines. The next DB2 request of any type, including DISCONNECT, fails with a return code of +256 and a reason code of X'00F30050'.

With z/OS, if you have an active EUT FRR, all DB2 requests fail, including the initial CONNECT or OPEN. The requests fail because DB2 always creates an ARR-type ESTAE, and z/OS does not allow the creation of ARR-type ESTAEs when an FRR is active.
When the reason code begins with X'00F3' (except for X'00F30006'), you can use the CAF TRANSLATE function to obtain error message text that can be printed and displayed. These reason codes are issued by the subsystem support for allied memories, a part of the DB2 subsystem support subcomponent that services all DB2 connection and work requests. For more information about the codes, along with abend and subsystem termination reason codes issued by other parts of subsystem support, see Part 3 of DB2 Codes. For SQL calls, CAF returns standard SQLCODEs in the SQLCA. See Part 2 of DB2 Codes for a list of those return codes and their meanings. CAF returns IFI return codes and reason codes in the instrumentation facility communication area (IFCA). Table 142 shows the CAF return codes and reason codes.
Table 142. CAF return codes and reason codes

Return code   Reason code    Explanation
0             X'00000000'    Successful completion.
4             X'00C10824'    CAF reset complete. Ready to make a new connection.
8             X'00C10831'    Release level mismatch between DB2 and the call attachment facility code.
200(1)        X'00C10201'    Received a second CONNECT from the same TCB. The first CONNECT could have been implicit or explicit.
200(1)        X'00C10202'    Received a second OPEN from the same TCB. The first OPEN could have been implicit or explicit.
200(1)        X'00C10203'    CLOSE issued when there was no active OPEN.
200(1)        X'00C10204'    DISCONNECT issued when there was no active CONNECT, or the AXSET macro was issued between CONNECT and DISCONNECT.
200(1)        X'00C10205'    TRANSLATE issued when there was no connection to DB2.
200(1)        X'00C10206'    Wrong number of parameters or the end-of-list bit was off.
200(1)        X'00C10207'    Unrecognized function parameter.
200(1)        X'00C10208'    Received requests to access two different DB2 subsystems from the same TCB.
204(2)                       CAF system error. Probable error in the attach or DB2.
Notes:
1. A CAF error probably caused by errors in the parameter lists coming from application programs. CAF errors do not change the current state of your connection to DB2; you can continue processing with a corrected request.
2. System errors cause abends. For an explanation of the abend reason codes, see Part 3 of DB2 Codes. If tracing is on, a descriptive message is written to the DSNTRACE data set just before the abend.
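In a high-level language, the same checks that the assembler CHEKCODE subroutine (shown later in this chapter) performs can be written directly against the returned values. The following C sketch is one way to do it, using the return code and reason code areas from the earlier sketches.

    if (retcode == 0) {
      /* Successful completion; continue normally.                        */
    } else if (retcode == 4 && reascode == 0x00C10824) {
      /* CAF was reset; re-issue CONNECT (or OPEN) when DB2 is available.  */
    } else if ((retcode == 8 || retcode == 12) &&
               (reascode == 0x00F30002 || reascode == 0x00F30012)) {
      /* DB2 is not up; wait on the startup ECB before retrying.           */
    } else if (retcode == 200) {
      /* User error in the call; see the DSNTRACE data set.                */
    } else if (retcode == 204) {
      /* CAF system error; see the DSNTRACE data set.                      */
    }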
Subsystem support subcomponent codes (X'00F3'): These reason codes are issued by the subsystem support for allied memories, a part of the DB2 subsystem support subcomponent that services all DB2 connection and work requests. For more information about the codes, along with abend and subsystem termination reason codes that are issued by other parts of subsystem support, see Part 3 of DB2 Codes.
****************************** CONNECT ********************************
         L     R15,LIALI             Get the Language Interface address
         MVC   FUNCTN,CONNECT        Get the function to call
         CALL  (15),(FUNCTN,SSID,TECB,SECB,RIBPTR),VL,MF=(E,CAFCALL)
         BAL   R14,CHEKCODE          Check the return and reason codes
         CLC   CONTROL,CONTINUE      Is everything still OK
         BNE   EXIT                  If CONTROL not CONTINUE, stop loop
         USING RIB,R8                Prepare to access the RIB
         L     R8,RIBPTR             Access RIB to get DB2 release level
         WRITE 'The current DB2 release level is' RIBREL
****************************** OPEN ***********************************
         L     R15,LIALI             Get the Language Interface address
         MVC   FUNCTN,OPEN           Get the function to call
         CALL  (15),(FUNCTN,SSID,PLAN),VL,MF=(E,CAFCALL)
         BAL   R14,CHEKCODE          Check the return and reason codes
****************************** SQL ************************************
*        Insert your SQL calls here. The DB2 Precompiler
*        generates calls to entry point DSNHLI. You should
*        specify the precompiler option ATTACH(CAF), or code
*        a dummy entry point named DSNHLI to intercept
*        all SQL calls. A dummy DSNHLI is shown below.
****************************** CLOSE **********************************
         CLC   CONTROL,CONTINUE      Is everything still OK?
         BNE   EXIT                  If CONTROL not CONTINUE, shut down
         MVC   TRMOP,ABRT            Assume termination with ABRT parameter
         L     R4,SQLCODE            Put the SQLCODE into a register
         C     R4,CODE0              Examine the SQLCODE
         BZ    SYNCTERM              If zero, then CLOSE with SYNC parameter
         C     R4,CODE100            See if SQLCODE was 100
         BNE   DISC                  If not 100, CLOSE with ABRT parameter
SYNCTERM MVC   TRMOP,SYNC            Good code, terminate with SYNC parameter
DISC     DS    0H                    Now build the CAF parmlist
         L     R15,LIALI             Get the Language Interface address
         MVC   FUNCTN,CLOSE          Get the function to call
         CALL  (15),(FUNCTN,TRMOP),VL,MF=(E,CAFCALL)
         BAL   R14,CHEKCODE          Check the return and reason codes
****************************** DISCONNECT *****************************
         CLC   CONTROL,CONTINUE      Is everything still OK
         BNE   EXIT                  If CONTROL not CONTINUE, stop loop
         L     R15,LIALI             Get the Language Interface address
         MVC   FUNCTN,DISCON         Get the function to call
         CALL  (15),(FUNCTN),VL,MF=(E,CAFCALL)
         BAL   R14,CHEKCODE          Check the return and reason codes
The code does not show a task that waits on the DB2 termination ECB. If you like, you can code such a task and use the z/OS WAIT macro to monitor the ECB. You probably want this task to detach the sample code if the termination ECB is posted. That task can also wait on the DB2 startup ECB. This sample waits on the startup ECB at its own task level.

On entry, the code assumes that certain variables are already set:

Variable   Usage
LIALI      The entry point that handles DB2 connection service requests.
LISQL      The entry point that handles SQL calls.
SSID       The DB2 subsystem identifier.
TECB       The address of the DB2 termination ECB.
SECB       The address of the DB2 startup ECB.
RIBPTR     A fullword that CAF sets to contain the RIB address.
PLAN       The plan name to use on the OPEN call.
CONTROL    Used to shut down processing because of unsatisfactory return or reason codes. Subroutine CHEKCODE sets CONTROL.
CAFCALL    List-form parameter area for the CALL macro.
Figure 245. Subroutine to check return codes from CAF and DB2, in assembler (Part 1 of 3)
***********************************************************************
*        Subroutine CHEKCODE checks return codes from DB2 and Call Attach.
*        When CHEKCODE receives control, R13 should point to the caller's
*        save area.
***********************************************************************
CHEKCODE DS    0H
         STM   R14,R12,12(R13)       Prolog
         ST    R15,RETCODE           Save the return code
         ST    R0,REASCODE           Save the reason code
         LA    R15,SAVEAREA          Get save area address
         ST    R13,4(,R15)           Chain the save areas
         ST    R15,8(,R13)           Chain the save areas
         LR    R13,R15               Put save area address in R13
*        ********************* HUNT FOR FORCE OR ABTERM ***************
         TM    TECB,POSTBIT          See if TECB was POSTed
         BZ    DOCHECKS              Branch if TECB was not POSTed
         CLC   TECBCODE(3),QUIESCE   Is the POST code QUIESCE?
         BE    DOCHECKS              If QUIESCE, continue checking codes
         MVC   CONTROL,SHUTDOWN      Shutdown
         WRITE 'Found FORCE or ABTERM, shutting down'
         B     ENDCCODE              Go to the end of CHEKCODE
DOCHECKS DS    0H                    Examine RETCODE and REASCODE
*        ********************* HUNT FOR 0 *****************************
         CLC   RETCODE,ZERO          Was it a zero?
         BE    ENDCCODE              Nothing to do in CHEKCODE for zero
*        ********************* HUNT FOR 4 *****************************
         CLC   RETCODE,FOUR          Was it a 4?
         BNE   HUNT8                 If not a 4, hunt eights
         CLC   REASCODE,C10831       Was it a release level mismatch?
         BNE   HUNT824               Branch if not an 831
         WRITE 'Found a mismatch between DB2 and CAF release levels'
         B     ENDCCODE              We are done. Go to end of CHEKCODE
HUNT824  DS    0H                    Now look for the CAF reset reason code
         CLC   REASCODE,C10824       Was it 824? Are we ready to restart?
         BNE   UNRECOG               If not 824, got unknown code
         WRITE 'CAF is now ready for more input'
         MVC   CONTROL,RESTART       Indicate that we should re-CONNECT
         B     ENDCCODE              We are done. Go to end of CHEKCODE
UNRECOG  DS    0H
         WRITE 'Got RETCODE = 4 and an unrecognized reason code'
         MVC   CONTROL,SHUTDOWN      Shutdown, serious problem
         B     ENDCCODE              We are done. Go to end of CHEKCODE
*        ********************* HUNT FOR 8 *****************************
HUNT8    DS    0H
         CLC   RETCODE,EIGHT         Hunt return code of 8
         BE    GOT8OR12
         CLC   RETCODE,TWELVE        Hunt return code of 12
         BNE   HUNT200
GOT8OR12 DS    0H                    Found return code of 8 or 12
         WRITE 'Found RETCODE of 8 or 12'
         CLC   REASCODE,F30002       Hunt for X'00F30002'
         BE    DB2DOWN
Figure 245. Subroutine to check return codes from CAF and DB2, in assembler (Part 2 of 3)
         CLC   REASCODE,F30012       Hunt for X'00F30012'
         BE    DB2DOWN
         WRITE 'DB2 connection failure with an unrecognized REASCODE'
         CLC   SQLCODE,ZERO          See if we need TRANSLATE
         BNE   A4TRANS               If not blank, skip TRANSLATE
*        ********************* TRANSLATE unrecognized RETCODEs ********
         WRITE 'SQLCODE 0 but R15 not, so TRANSLATE to get SQLCODE'
         L     R15,LIALI             Get the Language Interface address
         CALL  (15),(TRANSLAT,SQLCA),VL,MF=(E,CAFCALL)
         C     R0,C10205             Did the TRANSLATE work?
         BNE   A4TRANS               If not C10205, SQLERRM now filled in
         WRITE 'Not able to TRANSLATE the connection failure'
         B     ENDCCODE              Go to end of CHEKCODE
A4TRANS  DS    0H                    SQLERRM must be filled in to get here
*        Note: your code should probably remove the X'FF'
*        separators and format the SQLERRM feedback area.
*        Alternatively, use DB2 Sample Application DSNTIAR
*        to format a message.
         WRITE 'SQLERRM is:' SQLERRM
         B     ENDCCODE              We are done. Go to end of CHEKCODE
DB2DOWN  DS    0H                    DB2 is down; wait for it to come up
         WRITE 'DB2 is down and I will tell you when it comes up'
         WAIT  ECB=SECB              Wait for DB2 to come up
         WRITE 'DB2 is now available'
         MVC   CONTROL,RESTART       Indicate that we should re-CONNECT
         B     ENDCCODE
*        ********************* HUNT FOR 200 ***************************
HUNT200  DS    0H                    Hunt return code of 200
         CLC   RETCODE,NUM200        Hunt 200
         BNE   HUNT204
         WRITE 'CAF found user error, see DSNTRACE data set'
         B     ENDCCODE              We are done. Go to end of CHEKCODE
*        ********************* HUNT FOR 204 ***************************
HUNT204  DS    0H                    Hunt return code of 204
         CLC   RETCODE,NUM204        Hunt 204
         BNE   WASSAT                If not 204, got strange code
         WRITE 'CAF found system error, see DSNTRACE data set'
         B     ENDCCODE              We are done. Go to end of CHEKCODE
*        ********************* UNRECOGNIZED RETCODE *******************
WASSAT   DS    0H
         WRITE 'Got an unrecognized RETCODE'
         MVC   CONTROL,SHUTDOWN      Shutdown
         B     ENDCCODE              We are done. Go to end of CHEKCODE
ENDCCODE DS    0H                    Should we shut down?
         L     R4,RETCODE            Get a copy of the RETCODE
         C     R4,FOUR               Have a look at the RETCODE
         BNH   BYEBYE                If RETCODE <= 4 then leave CHEKCODE
         MVC   CONTROL,SHUTDOWN      Shutdown
BYEBYE   DS    0H                    Wrap up and leave CHEKCODE
         L     R13,4(,R13)           Point to caller's save area
         RETURN (14,12)              Return to the caller
Figure 245. Subroutine to check return codes from CAF and DB2, in assembler (Part 3 of 3)
In the example that follows, LISQL is addressable because the calling CSECT used the same register 12 as CSECT DSNHLI. Your application must also establish addressability to LISQL.
***********************************************************************
*        Subroutine DSNHLI intercepts calls to LI EP=DSNHLI
***********************************************************************
         DS    0D
DSNHLI   CSECT                       Begin CSECT
         STM   R14,R12,12(R13)       Prologue
         LA    R15,SAVEHLI           Get save area address
         ST    R13,4(,R15)           Chain the save areas
         ST    R15,8(,R13)           Chain the save areas
         LR    R13,R15               Put save area address in R13
         L     R15,LISQL             Get the address of real DSNHLI
         BASSM R14,R15               Branch to DSNALI to do an SQL call
*                                    DSNALI is in 31-bit mode, so use
*                                    BASSM to assure that the addressing
*                                    mode is preserved.
         L     R13,4(,R13)           Restore R13 (caller's save area addr)
         L     R14,12(,R13)          Restore R14 (return address)
         RETURN (1,12)               Restore R1-12, NOT R0 and R15 (codes)
****************************** VARIABLES ******************************
SECB     DS    F                     DB2 Startup ECB
TECB     DS    F                     DB2 Termination ECB
LIALI    DS    F                     DSNALI Entry Point address
LISQL    DS    F                     DSNHLI2 Entry Point address
SSID     DS    CL4                   DB2 Subsystem ID. CONNECT parameter
PLAN     DS    CL8                   DB2 Plan name. OPEN parameter
TRMOP    DS    CL4                   CLOSE termination option (SYNC|ABRT)
FUNCTN   DS    CL12                  CAF function to be called
RIBPTR   DS    F                     DB2 puts Release Info Block addr here
RETCODE  DS    F                     Chekcode saves R15 here
REASCODE DS    F                     Chekcode saves R0 here
CONTROL  DS    CL8                   GO, SHUTDOWN, or RESTART
SAVEAREA DS    18F                   Save area for CHEKCODE
****************************** CONSTANTS ******************************
SHUTDOWN DC    CL8'SHUTDOWN'         CONTROL value: Shutdown execution
RESTART  DC    CL8'RESTART'          CONTROL value: Restart execution
CONTINUE DC    CL8'CONTINUE'         CONTROL value: Everything OK, cont
CODE0    DC    F'0'                  SQLCODE of 0
CODE100  DC    F'100'                SQLCODE of 100
QUIESCE  DC    XL3'000008'           TECB postcode: STOP DB2 MODE=QUIESCE
CONNECT  DC    CL12'CONNECT'         Name of a CAF service. Must be CL12!
OPEN     DC    CL12'OPEN'            Name of a CAF service. Must be CL12!
CLOSE    DC    CL12'CLOSE'           Name of a CAF service. Must be CL12!
DISCON   DC    CL12'DISCONNECT'      Name of a CAF service. Must be CL12!
TRANSLAT DC    CL12'TRANSLATE'       Name of a CAF service. Must be CL12!
SYNC     DC    CL4'SYNC'             Termination option (COMMIT)
ABRT     DC    CL4'ABRT'             Termination option (ROLLBACK)
****************************** RETURN CODES (R15) FROM CALL ATTACH ****
ZERO     DC    F'0'                  0
FOUR     DC    F'4'                  4
EIGHT    DC    F'8'                  8
TWELVE   DC    F'12'                 12 (Call Attach return code in R15)
NUM200   DC    F'200'                200 (User error)
NUM204   DC    F'204'                204 (Call Attach system error)
****************************** REASON CODES (R00) FROM CALL ATTACH ****
C10205   DC    XL4'00C10205'         Call attach could not TRANSLATE
C10831   DC    XL4'00C10831'         Call attach found a release mismatch
C10824   DC    XL4'00C10824'         Call attach ready for more input
F30002   DC    XL4'00F30002'         DB2 subsystem not up
F30011   DC    XL4'00F30011'         DB2 subsystem not up
F30012   DC    XL4'00F30012'         DB2 subsystem not up
F30025   DC    XL4'00F30025'         DB2 is stopping (REASCODE)
*
*        Insert more codes here as necessary for your application
*
****************************** SQLCA and RIB **************************
         EXEC SQL INCLUDE SQLCA
         DSNDRIB                     Get the DB2 Release Information Block
****************************** CALL macro parm list *******************
CAFCALL  CALL  ,(*,*,*,*,*,*,*,*,*),VL,MF=L
Chapter 31. Programming for the Resource Recovery Services attachment facility
An application program can use the Resource Recovery Services attachment facility (RRSAF) to connect to and use DB2 to process SQL statements, commands, or instrumentation facility interface (IFI) calls. Programs that run in z/OS batch, TSO foreground, and TSO background can use RRSAF.

RRSAF uses z/OS Transaction Management and Recoverable Resource Manager Services (z/OS RRS). With RRSAF, you can coordinate DB2 updates with updates made by all other resource managers that also use z/OS RRS in a z/OS system.

Prerequisite knowledge: Before you consider using RRSAF, you must be familiar with the following z/OS topics:
v The CALL macro and standard module linkage conventions
v Program addressing and residency options (AMODE and RMODE)
v Creating and controlling tasks; multitasking
v Functional recovery facilities such as ESTAE, ESTAI, and FRRs
v Synchronization techniques such as WAIT/POST
v z/OS RRS functions, such as SRRCMIT and SRRBACK
RRSAF capabilities
An application program using RRSAF can:
v Use the z/OS System Authorization Facility and an external security product, such as RACF, to sign on to DB2 with the authorization ID of an end user.
v Sign on to DB2 using a new authorization ID and an existing connection and plan.
v Access DB2 from multiple z/OS tasks in an address space.
v Switch a DB2 thread among z/OS tasks within a single address space.
v Access the DB2 IFI.
v Run with or without the TSO terminal monitor program (TMP).
v Run without being a subtask of the DSN command processor (or of any DB2 code).
v Run above or below the 16-MB line.
v Establish an explicit connection to DB2, through a call interface, with control over the exact state of the connection.
v Establish an implicit connection to DB2 (with a default subsystem identifier and a default plan name) by using SQL statements or IFI calls without first calling RRSAF.
v Supply event control blocks (ECBs), for DB2 to post, that signal start-up or termination.
v Intercept return codes, reason codes, and abend codes from DB2 and translate them into messages as desired.
Task capabilities
Any task in an address space can establish a connection to DB2 through RRSAF.

Number of connections to DB2: Each task control block (TCB) can have only one connection to DB2. A DB2 service request issued by a program that runs under a given task is associated with that task's connection to DB2. The service request operates independently of any DB2 activity under any other task.

Using multiple simultaneous connections can increase the possibility of deadlocks and DB2 resource contention. Consider this when you write your application program.

Specifying a plan for a task: Each connected task can run a plan. Tasks within a single address space can specify the same plan, but each instance of a plan runs independently from the others. A task can terminate its plan and run a different plan without completely breaking its connection to DB2.

Providing attention processing exits and recovery routines: RRSAF does not generate task structures, and it does not provide attention processing exits or functional recovery routines. You can provide whatever attention handling and functional recovery your application needs, but you must use ESTAE/ESTAI type recovery routines only.
Programming language
You can write RRSAF applications in assembler language, C, COBOL, Fortran, and PL/I. When choosing a language to code your application in, consider these restrictions:
v If you use z/OS macros (ATTACH, WAIT, POST, and so on), you must choose a programming language that supports them.
v The RRSAF TRANSLATE function is not available from Fortran. To use the function, code it in a routine written in another language, and then call that routine from Fortran.
Tracing facility
A tracing facility provides diagnostic messages that help you debug programs and diagnose errors in the RRSAF code. The trace information is available only in a SYSABEND or SYSUDUMP dump.
Program preparation
Preparing your application program to run in RRSAF is similar to preparing it to run in other environments, such as CICS, IMS, and TSO. You can prepare an RRSAF application either in the batch environment or by using the DB2 program preparation process. You can use the program preparation system either through DB2I or through the DSNH CLIST. For examples and guidance in program preparation, see Chapter 21, Preparing an application program to run, on page 471.
RRSAF requirements
When you write an application to use RRSAF, be aware of the following requirements.
Program size
The RRSAF code requires about 10-KB of virtual storage per address space and an additional 10-KB for each TCB that uses RRSAF.
Use of LOAD
RRSAF uses z/OS SVC LOAD to load a module as part of the initialization following your first service request. The module is loaded into fetch-protected storage that has the job-step protection key. If your local environment intercepts and replaces the LOAD SVC, then you must ensure that your version of LOAD manages the load list element (LLE) and contents directory entry (CDE) chains like the standard z/OS LOAD macro.
Run environment
Applications that request DB2 services must adhere to several run environment requirements. Those requirements must be met regardless of the attachment facility you use. They are not unique to RRSAF.
v The application must be running in TCB mode.
v No EUT FRRs can be active when the application requests DB2 services. If an EUT FRR is active, DB2's functional recovery can fail, and your application can receive unpredictable abends.
v Different attachment facilities cannot be active concurrently within the same address space. For example:
  - An application should not use RRSAF in CICS or IMS address spaces.
  - An application running in an address space that has a CAF connection to DB2 cannot connect to DB2 using RRSAF.
  - An application running in an address space that has an RRSAF connection to DB2 cannot connect to DB2 using CAF.
v One attachment facility cannot start another. This means your RRSAF application cannot use DSN, and a DSN RUN subcommand cannot call your RRSAF application.
v The language interface module for RRSAF, DSNRLI, is shipped with the linkage attributes AMODE(31) and RMODE(ANY). If your applications load RRSAF below the 16-MB line, you must link-edit DSNRLI again.
   processed. SET_ID establishes a new value for the client program ID that can be used to identify the end user. See SET_ID: Syntax and usage on page 922.

SET_CLIENT_ID
   Sets end-user information that is passed to DB2 when the next SQL request is processed. SET_CLIENT_ID establishes new values for the client user ID, the application program name, the workstation name, and the accounting token. See SET_CLIENT_ID: Syntax and usage on page 923.

CREATE THREAD
   Allocates a DB2 plan or package. CREATE THREAD must complete before the application can execute SQL statements. See CREATE THREAD: Syntax and usage on page 926.

TERMINATE THREAD
   Deallocates the plan. See TERMINATE THREAD: Syntax and usage on page 928.

TERMINATE IDENTIFY
   Removes the task as a user of DB2 and, if this is the last or only task in the address space that has a DB2 connection, terminates the address space connection to DB2. See TERMINATE IDENTIFY: Syntax and usage on page 930.

TRANSLATE
   Returns an SQL code and printable text, in the SQLCA, that describes a DB2 error reason code. You cannot call the TRANSLATE function from the Fortran language. See TRANSLATE: Syntax and usage on page 931.
Implicit connections
If you do not explicitly specify the IDENTIFY function in a CALL DSNRLI statement, RRSAF initiates an implicit connection to DB2 if the application includes SQL statements or IFI calls. An implicit connection causes RRSAF to initiate implicit IDENTIFY and CREATE THREAD requests to DB2. Although RRSAF performs the connection request by using the following default values, the request is subject to the same DB2 return codes and reason codes as are explicitly specified requests.

Implicit connections use the following defaults:

Subsystem name
   The default name specified in the module DSNHDECP. RRSAF uses the installation default DSNHDECP, unless your own DSNHDECP is in a library in a STEPLIB or JOBLIB concatenation, or in the link list. In a data sharing group, the default subsystem name is the group attachment name.

Plan name
   The member name of the database request module (DBRM) that DB2 produced when you precompiled the source program that contains the first SQL call. If your program can make its first SQL call from different modules with different DBRMs, you cannot use a default plan name; you must use an explicit call using the CREATE THREAD function.
   If your application includes both SQL and IFI calls, you must issue at least one SQL call before you issue any IFI calls. This ensures that your application uses the correct plan.

Authorization ID
   The 7-byte user ID that is associated with the address space, unless an
   authorized function has built an Accessor Environment Element (ACEE) for the address space. If an authorized function has built an ACEE, DB2 passes the 8-byte user ID from the ACEE.

For an implicit connection request, your application should not explicitly specify either IDENTIFY or CREATE THREAD. It can execute other explicit RRSAF calls after the implicit connection. An implicit connection does not perform any SIGNON processing. Your application can execute SIGNON at any point of consistency. To terminate an implicit connection, you must use the proper calls. See Summary of RRSAF behavior on page 902 for details.

Your application program must successfully connect, either implicitly or explicitly, to DB2 before it can execute any SQL calls to the RRSAF DSNHLI entry point. Therefore, the application program must first determine the success or failure of all implicit connection requests. For implicit connection requests, register 15 contains the return code, and register 0 contains the reason code. The return code and reason code are also in the message text for SQLCODE -981. The application program should examine the return and reason codes immediately after the first executable SQL statement within the application program. Two ways to do this are to:
v Examine registers 0 and 15 directly.
v Examine the SQLCA, and if the SQLCODE is -981, obtain the return and reason code from the message text. The return code is the first token, and the reason code is the second token.

If the implicit connection is successful, the application can examine the SQLCODE for the first, and subsequent, SQL statements.
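For example, a C application that relies on an implicit connection might check its first SQL statement as in the following sketch. The statement and host variable shown are only an illustration of a first executable SQL statement; any first SQL statement can be checked the same way.

    #include <stdio.h>

    EXEC SQL INCLUDE SQLCA;
    EXEC SQL BEGIN DECLARE SECTION;
      char ts[27];                          /* host variable for the query  */
    EXEC SQL END DECLARE SECTION;

    /* The first executable SQL statement drives the implicit IDENTIFY      */
    /* and CREATE THREAD requests.                                          */
    EXEC SQL SELECT CURRENT TIMESTAMP INTO :ts FROM SYSIBM.SYSDUMMY1;
    if (sqlca.sqlcode == -981) {
      /* The implicit connection failed.  The first two tokens of the       */
      /* message text are the RRSAF return code and reason code.            */
      printf("RRSAF connection failed: %.*s\n",
             sqlca.sqlerrml, sqlca.sqlerrmc);
    }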
Figure 247. Sample RRSAF configuration. The application loads DSNRLI and issues CALL DSNRLI for the connection functions (IDENTIFY, SWITCH TO, SIGNON, AUTH SIGNON, CONTEXT SIGNON, SET_ID, SET_CLIENT_ID, CREATE THREAD, TERMINATE THREAD, and TERMINATE IDENTIFY), CALL DSNHLI for SQL calls, and CALL DSNWLI for IFI calls. The dummy application entry points DSNHLI and DSNWLI transfer those calls to the real RRSAF entry points DSNHLIR and DSNWLIR.
Part of RRSAF is a DB2 load module, DSNRLI, the RRSAF language interface module. DSNRLI has the alias names DSNHLIR and DSNWLIR. The module has five entry points: DSNRLI, DSNHLI, DSNHLIR, DSNWLI, and DSNWLIR:
v Entry point DSNRLI handles explicit DB2 connection service requests.
v DSNHLI and DSNHLIR handle SQL calls. Use DSNHLI if your application program link-edits RRSAF; use DSNHLIR if your application program loads RRSAF.
v DSNWLI and DSNWLIR handle IFI calls. Use DSNWLI if your application program link-edits RRSAF; use DSNWLIR if your application program loads RRSAF.

You can access the DSNRLI module by explicitly issuing LOAD requests when your program runs, or by including the DSNRLI module in your load module when you link-edit your program. There are advantages and disadvantages to each approach.
Link-editing DSNRLI
You can include DSNRLI when you link-edit your load module. For example, you can use a linkage editor control statement like this in your JCL:
INCLUDE DB2LIB(DSNRLI).
By coding this statement, you avoid linking the wrong language interface module. When you include DSNRLI during the link-edit, you do not include a dummy DSNHLI entry point in your program or specify the precompiler option ATTACH. Module DSNRLI contains an entry point for DSNHLI, which is identical to DSNHLIR, and an entry point DSNWLI, which is identical to DSNWLIR. A disadvantage of link-editing DSNRLI into your load module is that if IBM makes a change to DSNRLI, you must link-edit your program again.
Connection name and connection type: The connection name and connection type are RRSAF. You can use the DISPLAY THREAD command to list RRSAF applications that have the connection name RRSAF.

Authorization ID: Each DB2 connection is associated with a set of authorization IDs. A connection must have a primary ID, and can have one or more secondary IDs. Those identifiers are used for:
v Validating access to DB2
v Checking privileges on DB2 objects
v Assigning ownership of DB2 objects
v Identifying the user of a connection for audit, performance, and accounting traces

RRSAF relies on the z/OS System Authorization Facility (SAF) and a security product, such as RACF, to verify and authorize the authorization IDs. An application that connects to DB2 through RRSAF must pass those identifiers to SAF for verification and authorization checking. RRSAF retrieves the identifiers from SAF.

A location can provide an authorization exit routine for a DB2 connection to change the authorization IDs and to indicate whether the connection is allowed. The actual values assigned to the primary and secondary authorization IDs can differ from the values provided by a SIGNON or AUTH SIGNON request. A site's DB2 signon exit routine can access the primary and secondary authorization IDs and can modify the IDs to satisfy the site's security requirements. The exit can also indicate whether the signon request should be accepted. For information about authorization IDs and the connection and signon exit routines, see Appendix B (Volume 2) of DB2 Administration Guide.

Scope: The RRSAF processes connections as if each task is entirely isolated. When a task requests a function, RRSAF passes the function to DB2, regardless of the connection status of other tasks in the address space. However, the application program and the DB2 subsystem have access to the connection status of multiple tasks in an address space.

Do not mix RRSAF connections with other connection types in a single address space. The first connection to DB2 made from an address space determines the type of connection allowed.
Task termination
If an application that is connected to DB2 through RRSAF terminates normally before the TERMINATE THREAD or TERMINATE IDENTIFY functions deallocate the plan, then RRS commits any changes made after the last commit point. If the application terminates abnormally before the TERMINATE THREAD or TERMINATE IDENTIFY functions deallocate the plan, then z/OS RRS rolls back any changes made after the last commit point. In either case, DB2 deallocates the plan, if necessary, and terminates the application's connection.
DB2 abend
If DB2 abends while an application is running, DB2 rolls back changes to the last commit point. If DB2 terminates while processing a commit request, DB2 either
commits or rolls back any changes at the next restart. The action taken depends on the state of the commit request when DB2 terminates.
X'00C12204' X'00C12217' CREATE THREAD CREATE THREAD X'00C12202' CREATE THREAD X'00C12202'
| SIGNON, AUTH SIGNON, X'00F30049' | or CONTEXT SIGNON | CREATE THREAD | TERMINATE THREAD | IFI | SQL | SRRCMIT or SRRBACK
X'00F30049' X'00C12201' X'00F30049' X'00F30049' X'00F30049'
1 1 1 2
X'00F30092' Signon
1
X'00C12202' X'00C12202'
Notes:
1. Signon means the signon to DB2 through either SIGNON, AUTH SIGNON, or CONTEXT SIGNON.
2. SIGNON, AUTH SIGNON, or CONTEXT SIGNON are not allowed if any SQL operations are requested after CREATE THREAD or after the last SRRCMIT or SRRBACK request.

Table 144 summarizes RRSAF behavior when the next call is the SQL or IFI, TERMINATE THREAD, TERMINATE IDENTIFY, or TRANSLATE function.

Table 144. Effect of call order when next call is SQL or IFI, TERMINATE THREAD, TERMINATE IDENTIFY, or TRANSLATE

Previous function                         Next function
                                          SQL or IFI           TERMINATE THREAD     TERMINATE IDENTIFY   TRANSLATE
Empty: first call                         SQL or IFI call(3)   X'00C12204'          X'00C12204'          X'00C12204'
IDENTIFY                                  SQL or IFI call(3)   X'00C12203'          TERMINATE IDENTIFY   TRANSLATE
SWITCH TO                                 SQL or IFI call(3)   TERMINATE THREAD     TERMINATE IDENTIFY   TRANSLATE
SIGNON, AUTH SIGNON, or CONTEXT SIGNON    SQL or IFI call(3)   TERMINATE THREAD     TERMINATE IDENTIFY   TRANSLATE
CREATE THREAD                             SQL or IFI call(3)   TERMINATE THREAD     TERMINATE IDENTIFY   TRANSLATE
TERMINATE THREAD                          SQL or IFI call(3)   X'00C12203'          TERMINATE IDENTIFY   TRANSLATE
IFI                                       SQL or IFI call(3)   TERMINATE THREAD     TERMINATE IDENTIFY   TRANSLATE
SQL                                       SQL or IFI call(3)   X'00F30093'(1)       X'00F30093'(2)       TRANSLATE
SRRCMIT or SRRBACK                        SQL or IFI call(3)   TERMINATE THREAD     TERMINATE IDENTIFY   TRANSLATE

Notes:
1. TERMINATE THREAD is not allowed if any SQL operations are requested after CREATE THREAD or after the last SRRCMIT or SRRBACK request.
2. TERMINATE IDENTIFY is not allowed if any SQL operations are requested after CREATE THREAD or after the last SRRCMIT or SRRBACK request.
3. Using implicit connect with SQL or IFI calls causes RRSAF to issue an implicit IDENTIFY and CREATE THREAD. If you continue with explicit RRSAF statements after an implicit connect, you must follow the standard order of explicit RRSAF calls. Implicit connect does not issue a SIGNON. Therefore, you might need to issue an explicit SIGNON to satisfy the standard order requirement. For example, an SQL statement followed by an explicit TERMINATE THREAD requires an explicit SIGNON before issuing the TERMINATE THREAD.
Register conventions
Table 145 on page 904 summarizes the register conventions for RRSAF calls. If you do not specify the return code and reason code parameters in your RRSAF calls, RRSAF puts a return code in register 15 and a reason code in register 0. If you specify the return code and reason code parameters, RRSAF places the return code in register 15 and in the return code parameter to accommodate high-level languages that support special return code processing. RRSAF preserves the
For all languages: When you code CALL DSNRLI statements in any language, specify all parameters that come before the return code parameter. You cannot omit any of those parameters by coding zeros or blanks. There are no defaults for those parameters. All parameters starting with Return Code are optional. For all languages except assembler language: Code 0 for an optional parameter in the CALL DSNRLI statement when you want to use the default value for that parameter but specify subsequent parameters. For example, suppose you are coding an IDENTIFY call in a COBOL program. You want to specify all parameters except the return code parameter. Write the call in this way:
CALL DSNRLI USING IDFYFN SSNM RIBPTR EIBPTR TERMECB STARTECB BY CONTENT ZERO BY REFERENCE REASCODE.
Parameters point to the following areas: function An 18-byte area containing IDENTIFY followed by 10 blanks. ssnm A 4-byte DB2 subsystem name or group attachment name (if used in a data sharing group) to which the connection is made. If ssnm is less than four characters long, pad it on the right with blanks to a length of four characters. ribptr A 4-byte area in which RRSAF places the address of the release information block (RIB) after the call. This can be used to determine the release level of the DB2 subsystem to which the application is connected. You can determine the modification level within the release level by examining fields RIBCNUMB and RIBCINFO. If the value in RIBCNUMB is greater than zero, check RIBCINFO for modification levels. If the RIB is not available (for example, if you name a subsystem that does not exist), DB2 sets the 4-byte area to zeros. The area to which ribptr points is below the 16-MB line. This parameter is required, although the application does not need to refer to the returned information. eibptr A 4-byte area in which RRSAF places the address of the environment information block (EIB) after the call. The EIB contains environment information, such as the data sharing group, the member name for the DB2 to which the IDENTIFY request was issued, and whether the subsystem is in new-function mode. If the DB2 subsystem is not in a data sharing group, then RRSAF sets the data sharing group and member names to blanks. If the EIB is not available (for example, if ssnm names a subsystem that does not exist), RRSAF sets the 4-byte area to zeros. The area to which eibptr points is above the 16-MB line. This parameter is required, although the application does not need to refer to the returned information. termecb The address of the applications event control block (ECB) used for DB2 termination. DB2 posts this ECB when the system operator enters the command STOP DB2 or when DB2 is terminating abnormally. Specify a value of 0 if you do not want to use a termination ECB. RRSAF puts a POST code in the ECB to indicate the type of termination as shown in Table 146.
Table 146. Post codes for types of DB2 termination
POST code: Termination type
8: QUIESCE
12: FORCE
16: ABTERM
startecb The address of the application's startup ECB. If DB2 has not started when the application issues the IDENTIFY call, DB2 posts the ECB when DB2 startup has completed. Enter a value of zero if you do not want to use a startup ECB.
DB2 posts a maximum of one startup ECB per address space. The ECB posted is associated with the most recent IDENTIFY call from that address space. The application program must examine any nonzero RRSAF or DB2 reason codes before issuing a WAIT on this ECB. retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify this parameter, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places a reason code. This parameter is optional. If you do not specify this parameter, RRSAF places the reason code in register 0. If you specify this parameter, you must also specify retcode or its default (by specifying a comma or zero, depending on the language). groupoverride An 8-byte area that the application provides. This field is optional. If this field is provided, it contains the string NOGROUP. This string indicates that the subsystem name that is specified by ssnm is to be used as a DB2 subsystem name, even if ssnm matches a group attachment name. If groupoverride is not provided, ssnm is used as the group attachment name if it matches a group attachment name. If you specify this parameter in any language except assembler, you must also specify the return code and reason code parameters. In assembler language, you can omit the return code and reason code parameters by specifying commas as place-holders.
Usage
IDENTIFY establishes the callers task as a user of DB2 services. If no other task in the address space currently is connected to the subsystem named by ssnm, then IDENTIFY also initializes the address space to communicate with the DB2 address spaces. IDENTIFY establishes the cross-memory authorization of the address space to DB2 and builds address space control blocks. During IDENTIFY processing, DB2 determines whether the user address space is authorized to connect to DB2. DB2 invokes the z/OS SAF and passes a primary authorization ID to SAF. That authorization ID is the 7-byte user ID associated with the address space, unless an authorized function has built an ACEE for the address space. If an authorized function has built an ACEE, DB2 passes the 8-byte user ID from the ACEE. SAF calls an external security product, such as RACF, to determine if the task is authorized to use: v The DB2 resource class (CLASS=DSNR) v The DB2 subsystem (SUBSYS=ssnm) v Connection type RRSAF If that check is successful, DB2 calls the DB2 connection exit to perform additional verification and possibly change the authorization ID. DB2 then sets the connection name to RRSAF and the connection type to RRSAF. In a data sharing environment, use the groupoverride parameter on an IDENTIFY call when you want to connect to a specific member of a data sharing group, and the subsystem name of that member is the same as the group attachment name. In general, using the groupoverride parameter is not desirable because it limits the ability to do dynamic workload routing in a Parallel Sysplex.
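The following C fragment is a minimal sketch of an IDENTIFY call with return and reason code checking. It is not a supplied sample: the function, the variable names, and the subsystem name DSN are illustrative, and the dsnrli declaration follows the compiler directives shown in the notes in this chapter.

#pragma linkage(dsnrli, OS)
int dsnrli(char * functn, ...);            /* RRSAF language interface entry point      */
#include <string.h>

int identify_to_db2(void)
{
  char idfyfn[18];                         /* 18-byte function name                     */
  char ssnm[4];                            /* 4-byte subsystem or group attachment name */
  unsigned int ribptr = 0;                 /* fullword that receives the RIB address    */
  unsigned int eibptr = 0;                 /* fullword that receives the EIB address    */
  int termecb = 0;                         /* ECB that DB2 posts at DB2 termination     */
  int startecb = 0;                        /* ECB that DB2 posts when DB2 startup ends  */
  int retcode = 0;
  int reascode = 0;
  int fnret;

  memset(idfyfn, ' ', sizeof idfyfn);
  memcpy(idfyfn, "IDENTIFY", 8);           /* IDENTIFY followed by 10 blanks            */
  memset(ssnm, ' ', sizeof ssnm);
  memcpy(ssnm, "DSN", 3);                  /* pad the name on the right with blanks     */

  fnret = dsnrli(&idfyfn[0], &ssnm[0], &ribptr, &eibptr,
                 &termecb, &startecb, &retcode, &reascode);
  if (retcode != 0)
  {
    /* A nonzero return code means the request did not complete normally; */
    /* examine reascode before using the connection.                      */
    return retcode;
  }
  return 0;
}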
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in your C, C++, and PL/I applications:
C: #pragma linkage(dsnrli, OS)
C++: extern "OS" { int DSNRLI( char * functn, ...); }
PL/I: DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
Parameters point to the following areas: function An 18-byte area containing SWITCH TO followed by nine blanks. ssnm A 4-byte DB2 subsystem name or group attachment name (if used in a data
sharing group) to which the connection is made. If ssnm is less than four characters long, pad it on the right with blanks to a length of four characters. retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify this parameter, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify this parameter, RRSAF places the reason code in register 0. If you specify this parameter, you must also specify retcode. groupoverride An 8-byte area that the application provides. This field is optional. If this field is provided, it contains the string NOGROUP. This string indicates that the subsystem name that is specified by ssnm is to be used as a DB2 subsystem name, even if ssnm matches a group attachment name. If groupoverride is not provided, ssnm is used as the group attachment name if it matches a group attachment name. If you specify this parameter in any language except assembler, you must also specify the return code and reason code parameters. In assembler language, you can omit the return code and reason code parameters by specifying commas as place-holders.
Usage
Use SWITCH TO to establish connections to multiple DB2 subsystems from a single task. If you make a SWITCH TO call to a DB2 subsystem before you have issued an initial IDENTIFY call, DB2 returns return code 4 and reason code X'00C12205' as a warning that the task has not yet identified to any DB2 subsystem. After you establish a connection to a DB2 subsystem, you must make a SWITCH TO call before you identify to another DB2 subsystem. If you do not make a SWITCH TO call before you make an IDENTIFY call to another DB2 subsystem, DB2 returns return code X'200' and reason code X'00C12201'.
In a data sharing environment, use the groupoverride parameter on a SWITCH TO call when you want to connect to a specific member of a data sharing group, and the subsystem name of that member is the same as the group attachment name. In general, using the groupoverride parameter is not desirable because it limits the ability to do dynamic workload routing in a Parallel Sysplex.
This example shows how you can use SWITCH TO to interact with three DB2 subsystems.
RRSAF calls for subsystem db21:
IDENTIFY
SIGNON
CREATE THREAD
Execute SQL on subsystem db21
SWITCH TO db22
RRSAF calls on subsystem db22:
IDENTIFY
SIGNON
CREATE THREAD
Execute SQL on subsystem db22
SWITCH TO db23
RRSAF calls on subsystem db23:
IDENTIFY
SIGNON
CREATE THREAD
Execute SQL on subsystem db23
SWITCH TO db21
Execute SQL on subsystem db21
SWITCH TO db22
Execute SQL on subsystem db22
SWITCH TO db21
Execute SQL on subsystem db21
SRRCMIT (to commit the UR)
SWITCH TO db23
Execute SQL on subsystem db23
SWITCH TO db22
Execute SQL on subsystem db22
SWITCH TO db21
Execute SQL on subsystem db21
SRRCMIT (to commit the UR)
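In C, each SWITCH TO step in the scenario above amounts to a short call such as the sketch below. The helper function and names are illustrative; only the parameter order (function, ssnm, retcode, reascode) comes from the parameter descriptions above.

#pragma linkage(dsnrli, OS)
int dsnrli(char * functn, ...);            /* RRSAF language interface entry point */
#include <string.h>

/* Direct subsequent SQL and RRSAF requests from this task to another subsystem. */
static int switch_to(const char *subsys, int *retcode, int *reascode)
{
  char fn[18];
  char ssnm[4];

  memset(fn, ' ', sizeof fn);
  memcpy(fn, "SWITCH TO", 9);              /* SWITCH TO followed by nine blanks    */
  memset(ssnm, ' ', sizeof ssnm);          /* pad the name with blanks to 4 bytes  */
  memcpy(ssnm, subsys, strlen(subsys) < 4 ? strlen(subsys) : 4);

  return dsnrli(&fn[0], &ssnm[0], retcode, reascode);
}

/* For example: switch_to("db22", &retcode, &reascode); before issuing SQL on db22. */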
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in your C, C++, and PL/I applications:
C: #pragma linkage(dsnrli, OS)
C++: extern "OS" { int DSNRLI( char * functn, ...); }
PL/I: DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
Parameters point to the following areas: function An 18-byte area containing SIGNON followed by twelve blanks. correlation-id A 12-byte area in which you can put a DB2 correlation ID. The correlation ID is displayed in DB2 accounting and statistics trace records. You can use the correlation ID to correlate work units. This token appears in output from the command DISPLAY THREAD. If you do not want to specify a correlation ID, fill the 12-byte area with blanks. # # # # # # # # # # # # accounting-token A 22-byte area in which you can put a value for a DB2 accounting token. This value is displayed in DB2 accounting and statistics trace records in the QWHCTOKEN field, which is mapped by DSNDQWHC DSECT. Setting the value of the accounting token sets the value of the CURRENT CLIENT_ACCTNG special register. If accounting-token is less than 22 characters long, you must pad it on the right with blanks to a length of 22 characters. If you do not want to specify an accounting token, fill the 22-byte area with blanks. You can also change the value of the DB2 accounting token with RRS AUTH SIGNON, CONTEXT SIGNON or SET_CLIENT_ID. You can retrieve the DB2 accounting token with the CURRENT CLIENT_ACCTNG special register only if the DDF accounting string is not set. accounting-interval A 6-byte area with which you can control when DB2 writes an accounting record. # # # # # # # If you specify COMMIT in that area, DB2 writes an accounting record each time the application issues SRRCMIT without open held cursors. If the accounting interval is COMMIT and an SRRCMIT is issued while a held cursor is open, the accounting interval spans that commit and ends at the next valid accounting interval end point (such as the next SRRCMIT that is issued without open held cursors, application termination, or SIGNON with a new authorization ID). If you specify any other value, DB2 writes an accounting record when the application terminates or when you call SIGNON with a new authorization ID. retcode A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify this parameter, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify this parameter, RRSAF places the reason code in register 0. If you specify this parameter, you must also specify retcode. user A 16-byte area that contains the user ID of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays this user ID in DISPLAY THREAD output and in DB2 accounting and statistics trace records. Setting the user ID sets the value of the CURRENT CLIENT_USERID special register. If user is less than 16 characters long, you must pad it on the right with blanks to a length of 16 characters. This field is optional. If you specify this parameter, you must also specify retcode and reascode. If you do not specify this parameter, no user ID is associated with the connection. appl A 32-byte area that contains the application or transaction name of the end users application. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays the application name in the DISPLAY THREAD output and in DB2 accounting and statistics trace records. Setting the application name sets the value of the CURRENT CLIENT_APPLNAME special register. If appl is less than 32 characters long, you must pad it on the right with blanks to a length of 32 characters. This field is optional. If you specify this parameter, you must also specify retcode, reascode, and user. If you do not specify this parameter, no application or transaction is associated with the connection. ws An 18-byte area that contains the workstation name of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays the workstation name in the DISPLAY THREAD output and in DB2 accounting and statistics trace records. Setting the workstation name sets the value of the CURRENT CLIENT_WRKSTNNAME special register. If ws is less than 18 characters long, you must pad it on the right with blanks to a length of 18 characters. This field is optional. If you specify this parameter, you must also specify retcode, reascode, user, and appl. If you do not specify this parameter, no workstation name is associated with the connection. xid A 4-byte area into which you put one of the following values: 0 # # # # # 1 Indicates that the thread is not part of a global transaction. The 0 value must be specified as a binary integer. Indicates that the thread is part of a global transaction and that DB2 should retrieve the global transaction ID from RRS. If a global transaction ID already exists for the task, the thread becomes part of the associated global transaction. Otherwise, RRS generates a new global transaction ID. The value 1 must
be specified as a binary integer. Alternatively, if you want DB2 to return the generated global transaction ID to the caller, specify an address instead of 1.
address The 4-byte address of an area into which you enter a global transaction ID for the thread. If the global transaction ID already exists, the thread becomes part of the associated global transaction. Otherwise, RRS creates a new global transaction with the ID that you specify. The global transaction ID has the format shown in Table 149.
However, if you want DB2 to generate and return a global transaction ID, pass the address of a null global transaction ID by setting the format ID field, which is shown in Table 149, to binary -1 ('FFFFFFFF'X). DB2 then replaces the contents of the area with the generated transaction ID. The area at the specified address must be in writable storage and have a length of at least 140 bytes to accommodate the largest possible transaction ID value.
Table 149. Format of a user-created global transaction ID
Field description | Length in bytes | Data type
Format ID | 4 | Integer
Global transaction ID length (1 - 64) | 4 | Integer
Branch qualifier length (1 - 64) | 4 | Integer
Global transaction ID | 1 to 64 | Character
Branch qualifier | 1 to 64 | Character
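In C, the area described by Table 149 can be declared roughly as follows. The structure and field names are illustrative, not a DB2-supplied mapping; the only requirements taken from the text are the three fullword header fields, the 140-byte minimum size, and the binary -1 format ID that marks a null transaction ID.

#include <string.h>

typedef struct
{
  int  format_id;                  /* set to -1 (X'FFFFFFFF') for a null XID        */
  int  gtrid_length;               /* global transaction ID length, 1 - 64          */
  int  bqual_length;               /* branch qualifier length, 1 - 64               */
  char data[128];                  /* transaction ID followed by branch qualifier   */
} rrsaf_xid;                       /* 12 + 128 = 140 bytes, the documented minimum  */

/* Prepare a null global transaction ID so that DB2 generates one and
   returns it in this area when the thread signs on.                    */
static void make_null_xid(rrsaf_xid *xid)
{
  memset(xid, 0, sizeof *xid);
  xid->format_id = -1;
}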
A DB2 thread that is part of a global transaction can share locks with other DB2 threads that are part of the same global transaction and can access and modify the same data. A global transaction exists until one of the threads that is part of the global transaction is committed or rolled back. # # # # # # # # # # # # # # # # # accounting-string A one-byte length field and a 255-byte area in which you can put a value for a DB2 accounting string. This value is placed in the DDF accounting trace records in the QMDASQLI field, which is mapped by DSNDQMDA DSECT. If accounting-string is less than 255 characters, you must pad it on the right with zeros to a length of 255 bytes. The entire 256 bytes is mapped by DSNDQMDA DSECT. This field is optional. If you specify this parameter, you must also specify retcode, reascode, user, appl and xid. If you do not specify this parameter, no accounting string is associated with the connection. You can specify this field only in DB2 Version 8 new-function mode. You can also change the value of the accounting string with RRS AUTH SIGNON, CONTEXT SIGNON, or SET_CLIENT_ID. You can retrieve the DDF suffix portion of the accounting string with the CURRENT CLIENT_ACCTNG special register. The suffix portion of accounting-string can contain a maximum of 200 characters. The QMDASFLN field contains the accounting suffix length, and the QMDASUFX field contains
the accounting suffix value. If the DDF accounting string is set, you cannot query the accounting token with the CURRENT CLIENT_ACCTNG special register.
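Most of the character parameters above (correlation-id, accounting-token, user, appl, and ws) must be blank padded to a fixed length and are not null-terminated. A small helper such as the following sketch, which is not part of the RRSAF interface, keeps that padding consistent in C callers.

#include <string.h>

/* Copy value into a fixed-length RRSAF character field, truncating if
   necessary and padding on the right with blanks.                      */
static void pad_field(char *field, size_t field_len, const char *value)
{
  size_t n = strlen(value);
  if (n > field_len)
    n = field_len;
  memset(field, ' ', field_len);
  memcpy(field, value, n);
}

/* For example, before a SIGNON call:
     char corrid[12], accttkn[22], userid[16];
     pad_field(corrid,  sizeof corrid,  "ORDERAPP");
     pad_field(accttkn, sizeof accttkn, "ACCT-TOKEN-01");
     pad_field(userid,  sizeof userid,  "JSMITH");                      */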
Usage
SIGNON causes a new primary authorization ID and optional secondary authorization IDs to be assigned to a connection. Your program does not need to be an authorized program to issue the SIGNON call. For that reason, before you issue the SIGNON call, you must issue the external security interface macro RACROUTE REQUEST=VERIFY to do the following:
v Define and populate an ACEE to identify the user of the program.
v Associate the ACEE with the user's TCB.
v Verify that the user is defined to RACF and authorized to use the application.
See z/OS Security Server RACF Macros and Interfaces for more information about the RACROUTE macro.
Generally, you issue a SIGNON call after an IDENTIFY call and before a CREATE THREAD call. You can also issue a SIGNON call if the application is at a point of consistency, and one of the following conditions is true:
v The value of reuse in the CREATE THREAD call was RESET.
v The value of reuse in the CREATE THREAD call was INITIAL, no held cursors are open, the package or plan is bound with KEEPDYNAMIC(NO), and all special registers are at their initial state.
If there are open held cursors or the package or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is permitted only if the primary authorization ID has not changed.
Table 150 shows a SIGNON call in each language.
Table 150. Examples of RRSAF SIGNON calls
Assembler: CALL DSNRLI,(SGNONFN,CORRID,ACCTTKN,ACCTINT,RETCODE,REASCODE,USERID,APPLNAME,WSNAME)
C: fnret=dsnrli(&sgnonfn[0], &corrid[0], &accttkn[0], &acctint[0], &retcode, &reascode, &userid[0], &applname[0], &wsname[0]);
COBOL: CALL DSNRLI USING SGNONFN CORRID ACCTTKN ACCTINT RETCODE REASCODE USERID APPLNAME WSNAME.
Fortran: CALL DSNRLI(SGNONFN,CORRID,ACCTTKN,ACCTINT,RETCODE,REASCODE,USERID,APPLNAME,WSNAME)
PL/I: CALL DSNRLI(SGNONFN,CORRID,ACCTTKN,ACCTINT,RETCODE,REASCODE,USERID,APPLNAME,WSNAME);
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in your C, C++, and PL/I applications:
C: #pragma linkage(dsnrli, OS)
C++: extern "OS" { int DSNRLI( char * functn, ...); }
PL/I: DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
Parameters point to the following areas: function An 18-byte area containing AUTH SIGNON followed by seven blanks. correlation-id A 12-byte area in which you can put a DB2 correlation ID. The correlation ID is displayed in DB2 accounting and statistics trace records. You can use the correlation ID to correlate work units. This token appears in output from the command DISPLAY THREAD. If you do not want to specify a correlation ID, fill the 12-byte area with blanks. # # # # # # # # # # # # accounting-token A 22-byte area in which you can put a value for a DB2 accounting token. This value is displayed in DB2 accounting and statistics trace records in the QWHCTOKEN field, which is mapped by DSNDQWHC DSECT. Setting the value of the accounting token sets the value of the CURRENT CLIENT_ACCTNG special register. If accounting-token is less than 22 characters long, you must pad it on the right with blanks to a length of 22 characters. If you do not want to specify an accounting token, fill the 22-byte area with blanks. You can also change the value of the DB2 accounting token with RRS SIGNON, CONTEXT SIGNON or SET_CLIENT_ID. You can retrieve the DB2 accounting token with the CURRENT CLIENT_ACCTNG special register only if the DDF accounting string is not set.
accounting-interval A 6-byte area with which you can control when DB2 writes an accounting record. # # # # # # # If you specify COMMIT in that area, DB2 writes an accounting record each time the application issues SRRCMIT without open held cursors. If the accounting interval is COMMIT and an SRRCMIT is issued while a held cursor is open, the accounting interval spans that commit and ends at the next valid accounting interval end point (such as the next SRRCMIT that is issued without open held cursors, application termination, or SIGNON with a new authorization ID). If you specify any other value, DB2 writes an accounting record when the application terminates or when you call SIGNON with a new authorization ID. primary-authid An 8-byte area in which you can put a primary authorization ID. If you are not passing the authorization ID to DB2 explicitly, put X'00' or a blank in the first byte of the area. ACEE-address The 4-byte address of an ACEE that you pass to DB2. If you do not want to provide an ACEE, specify 0 in this field. secondary-authid An 8-byte area in which you can put a secondary authorization ID. If you do not pass the authorization ID to DB2 explicitly, put X'00' or a blank in the first byte of the area. If you enter a secondary authorization ID, you must also enter a primary authorization ID. retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify this parameter, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify this parameter, RRSAF places the reason code in register 0. If you specify this parameter, you must also specify retcode. user A 16-byte area that contains the user ID of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays this user ID in DISPLAY THREAD output and in DB2 accounting and statistics trace records. Setting the user ID sets the value of the CURRENT CLIENT_USERID special register. If user is less than 16 characters long, you must pad it on the right with blanks to a length of 16 characters. This field is optional. If you specify this parameter, you must also specify retcode and reascode. If you do not specify this parameter, no user ID is associated with the connection. appl A 32-byte area that contains the application or transaction name of the end users application. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays the application name in the DISPLAY THREAD output and in DB2 accounting and
statistics trace records. Setting the application name sets the value of the CURRENT CLIENT_APPLNAME special register. If appl is less than 32 characters long, you must pad it on the right with blanks to a length of 32 characters.
This field is optional. If you specify this parameter, you must also specify retcode, reascode, and user. If you do not specify this parameter, no application or transaction is associated with the connection.
ws An 18-byte area that contains the workstation name of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays the workstation name in the DISPLAY THREAD output and in DB2 accounting and statistics trace records. Setting the workstation name sets the value of the CURRENT CLIENT_WRKSTNNAME special register. If ws is less than 18 characters long, you must pad it on the right with blanks to a length of 18 characters.
This field is optional. If you specify this parameter, you must also specify retcode, reascode, user, and appl. If you do not specify this parameter, no workstation name is associated with the connection.
You can also change the value of the workstation name with RRS SIGNON, CONTEXT SIGNON, or SET_CLIENT_ID. You can retrieve the workstation name with the CURRENT CLIENT_WRKSTNNAME special register.
xid A 4-byte area into which you put one of the following values:
0 Indicates that the thread is not part of a global transaction. The 0 value must be specified as a binary integer.
1 Indicates that the thread is part of a global transaction and that DB2 should retrieve the global transaction ID from RRS. If a global transaction ID already exists for the task, the thread becomes part of the associated global transaction. Otherwise, RRS generates a new global transaction ID. The value 1 must be specified as a binary integer. Alternatively, if you want DB2 to return the generated global transaction ID to the caller, specify an address instead of 1.
address The 4-byte address of an area into which you enter a global transaction ID for the thread. If the global transaction ID already exists, the thread becomes part of the associated global transaction. Otherwise, RRS creates a new global transaction with the ID that you specify. The global transaction ID has the format shown in Table 149 on page 912.
However, if you want DB2 to generate and return a global transaction ID, pass the address of a null global transaction ID by setting the format ID field, which is shown in Table 149 on page 912, to binary -1 ('FFFFFFFF'X). DB2 then replaces the contents of the area with the generated transaction ID. The area at the specified address must be in writable storage and have a length of at least 140 bytes to accommodate the largest possible transaction ID value.
A DB2 thread that is part of a global transaction can share locks with other DB2 threads that are part of the same global transaction and can access and modify the same data. A global transaction exists until one of the threads that is part of the global transaction is committed or rolled back.
accounting-string A one-byte length field and a 255-byte area in which you can put a value for a DB2 accounting string. This value is placed in the DDF accounting trace records in the QMDASQLI field, which is mapped by DSNDQMDA DSECT. If accounting-string is less than 255 characters, you must pad it on the right with zeros to a length of 255 bytes. The entire 256 bytes is mapped by DSNDQMDA DSECT.
This field is optional. If you specify this parameter, you must also specify retcode, reascode, user, appl, and xid. If you do not specify this parameter, no accounting string is associated with the connection. You can specify this field only in DB2 Version 8 new-function mode.
You can also change the value of the accounting string with RRS SIGNON, CONTEXT SIGNON, or SET_CLIENT_ID. You can retrieve the DDF suffix portion of the accounting string with the CURRENT CLIENT_ACCTNG special register. The suffix portion of accounting-string can contain a maximum of 200 characters. The QMDASFLN field contains the accounting suffix length, and the QMDASUFX field contains the accounting suffix value. If the DDF accounting string is set, you cannot query the accounting token with the CURRENT CLIENT_ACCTNG special register.
Usage
AUTH SIGNON causes a new primary authorization ID and optional secondary authorization IDs to be assigned to a connection. Generally, you issue an AUTH SIGNON call after an IDENTIFY call and before a CREATE THREAD call. You can also issue an AUTH SIGNON call if the application is at a point of consistency, and one of the following conditions is true:
v The value of reuse in the CREATE THREAD call was RESET.
v The value of reuse in the CREATE THREAD call was INITIAL, no held cursors are open, the package or plan is bound with KEEPDYNAMIC(NO), and all special registers are at their initial state.
If there are open held cursors or the package or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is permitted only if the primary authorization ID has not changed.
Table 151 shows an AUTH SIGNON call in each language.
Table 151. Examples of RRSAF AUTH SIGNON calls
Assembler: CALL DSNRLI,(ASGNONFN,CORRID,ACCTTKN,ACCTINT,PAUTHID,ACEEPTR,SAUTHID,RETCODE,REASCODE,USERID,APPLNAME,WSNAME)
C: fnret=dsnrli(&asgnonfn[0], &corrid[0], &accttkn[0], &acctint[0], &pauthid[0], &aceeptr, &sauthid[0], &retcode, &reascode, &userid[0], &applname[0], &wsname[0]);
COBOL: CALL DSNRLI USING ASGNONFN CORRID ACCTTKN ACCTINT PAUTHID ACEEPTR SAUTHID RETCODE REASCODE USERID APPLNAME WSNAME.
Fortran: CALL DSNRLI(ASGNONFN,CORRID,ACCTTKN,ACCTINT,PAUTHID,ACEEPTR,SAUTHID,RETCODE,REASCODE,USERID,APPLNAME,WSNAME)
PL/I: CALL DSNRLI(ASGNONFN,CORRID,ACCTTKN,ACCTINT,PAUTHID,ACEEPTR,SAUTHID,RETCODE,REASCODE,USERID,APPLNAME,WSNAME);
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in your C, C++, and PL/I applications:
C: #pragma linkage(dsnrli, OS)
C++: extern "OS" { int DSNRLI( char * functn, ...); }
PL/I: DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
Parameters point to the following areas: function An 18-byte area containing CONTEXT SIGNON followed by four blanks. correlation-id A 12-byte area in which you can put a DB2 correlation ID. The correlation ID is displayed in DB2 accounting and statistics trace records. You can use the correlation ID to correlate work units. This token appears in output from the command DISPLAY THREAD. If you do not want to specify a correlation ID, fill the 12-byte area with blanks. # # # # # # # # accounting-token A 22-byte area in which you can put a value for a DB2 accounting token. This value is displayed in DB2 accounting and statistics trace records in the QWHCTOKEN field, which is mapped by DSNDQWHC DSECT. Setting the value of the accounting token sets the value of the CURRENT CLIENT_ACCTNG special register. If accounting-token is less than 22 characters long, you must pad it on the right with blanks to a length of 22 characters. If you do not want to specify an accounting token, fill the 22-byte area with blanks.
You can also change the value of the DB2 accounting token with RRS SIGNON, AUTH SIGNON, or SET_CLIENT_ID. You can retrieve the DB2 accounting token with the CURRENT CLIENT_ACCTNG special register only if the DDF accounting string is not set. accounting-interval A 6-byte area with which you can control when DB2 writes an accounting record.
If you specify COMMIT in that area, DB2 writes an accounting record each time the application issues SRRCMIT without open held cursors. If the accounting interval is COMMIT and an SRRCMIT is issued while a held cursor is open, the accounting interval spans that commit and ends at the next valid accounting interval end point (such as the next SRRCMIT that is issued without open held cursors, application termination, or SIGNON with a new authorization ID). If you specify any other value, DB2 writes an accounting record when the application terminates or when you call SIGNON with a new authorization ID. context-key A 32-byte area in which you put the context key that you specified when you called the RRS Set Context Data (CTXSDTA) service to save the primary authorization ID and an optional ACEE address. retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify this parameter, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify this parameter, RRSAF places the reason code in register 0. If you specify this parameter, you must also specify retcode. user A 16-byte area that contains the user ID of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays this user ID in DISPLAY THREAD output and in DB2 accounting and statistics trace records. Setting the user ID sets the value of the CURRENT CLIENT_USERID special register. If user is less than 16 characters long, you must pad it on the right with blanks to a length of 16 characters. This field is optional. If you specify this parameter, you must also specify retcode and reascode. If you do not specify this parameter, no user ID is associated with the connection. appl A 32-byte area that contains the application or transaction name of the end users application. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays the application name in the DISPLAY THREAD output and in DB2 accounting and statistics trace records. Setting the application name sets the value of the CURRENT CLIENT_APPLNAME special register. If appl is less than 32 characters long, you must pad it on the right with blanks to a length of 32 characters.
This field is optional. If you specify this parameter, you must also specify retcode, reascode, and user. If you do not specify this parameter, no application or transaction is associated with the connection.
ws An 18-byte area that contains the workstation name of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays the workstation name in the DISPLAY THREAD output and in DB2 accounting and statistics trace records. Setting the workstation name sets the value of the CURRENT CLIENT_WRKSTNNAME special register. If ws is less than 18 characters long, you must pad it on the right with blanks to a length of 18 characters.
This field is optional. If you specify this parameter, you must also specify retcode, reascode, user, and appl. If you do not specify this parameter, no workstation name is associated with the connection.
You can also change the value of the workstation name with RRS SIGNON, AUTH SIGNON, or SET_CLIENT_ID. You can retrieve the workstation name with the CLIENT_WRKSTNNAME special register.
xid A 4-byte area into which you put one of the following values:
0 Indicates that the thread is not part of a global transaction. The 0 value must be specified as a binary integer.
1 Indicates that the thread is part of a global transaction and that DB2 should retrieve the global transaction ID from RRS. If a global transaction ID already exists for the task, the thread becomes part of the associated global transaction. Otherwise, RRS generates a new global transaction ID. The value 1 must be specified as a binary integer. Alternatively, if you want DB2 to return the generated global transaction ID to the caller, specify an address instead of 1.
address The 4-byte address of an area into which you enter a global transaction ID for the thread. If the global transaction ID already exists, the thread becomes part of the associated global transaction. Otherwise, RRS creates a new global transaction with the ID that you specify. The global transaction ID has the format shown in Table 149 on page 912.
However, if you want DB2 to generate and return a global transaction ID, pass the address of a null global transaction ID by setting the format ID field, which is shown in Table 149 on page 912, to binary -1 ('FFFFFFFF'X). DB2 then replaces the contents of the area with the generated transaction ID. The area at the specified address must be in writable storage and have a length of at least 140 bytes to accommodate the largest possible transaction ID value.
A DB2 thread that is part of a global transaction can share locks with other DB2 threads that are part of the same global transaction and can access and modify the same data. A global transaction exists until one of the threads that is part of the global transaction is committed or rolled back.
accounting-string A one-byte length field and a 255-byte area in which you can put a value for a DB2 accounting string. This value is placed in the DDF accounting trace records in the QMDASQLI field, which is mapped by DSNDQMDA DSECT. If accounting-string is less than 255 characters, you must pad it on the right with zeros to a length of 255 bytes. The entire 256 bytes is mapped by DSNDQMDA DSECT.
This field is optional. If you specify this parameter, you must also specify retcode, reascode, user, appl, and xid. If you do not specify this parameter, no accounting string is associated with the connection. You can specify this field only in DB2 Version 8 new-function mode.
You can also change the value of the accounting string with RRS SIGNON, AUTH SIGNON, or SET_CLIENT_ID. You can retrieve the DDF suffix portion of the accounting string with the CURRENT CLIENT_ACCTNG special register. The suffix portion of accounting-string can contain a maximum of 200 characters. The QMDASFLN field contains the accounting suffix length, and the QMDASUFX field contains the accounting suffix value. If the DDF accounting string is set, you cannot query the accounting token with the CURRENT CLIENT_ACCTNG special register.
Usage
CONTEXT SIGNON relies on the RRS context services functions Set Context Data (CTXSDTA) and Retrieve Context Data (CTXRDTA). Before you invoke CONTEXT SIGNON, you must have called CTXSDTA to store a primary authorization ID and optionally, the address of an ACEE in the context data whose context key you supply as input to CONTEXT SIGNON. CONTEXT SIGNON establishes a new primary authorization ID for the connection and optionally causes one or more secondary authorization IDs to be assigned. CONTEXT SIGNON uses the context key to retrieve the primary authorization ID from data associated with the current RRS context. DB2 uses the RRS context services function CTXRDTA to retrieve context data that contains the authorization ID and ACEE address. The context data must have the following format: Version Number A 4-byte area that contains the version number of the context data. Set this area to 1. Server Product Name An 8-byte area that contains the name of the server product that set the context data. ALET A 4-byte area that can contain an ALET value. DB2 does not reference this area.
ACEE Address A 4-byte area that contains an ACEE address or 0 if an ACEE is not provided. DB2 requires that the ACEE is in the home address space of the task. primary-authid An 8-byte area that contains the primary authorization ID to be used. If the authorization ID is less than 8 bytes in length, pad it on the right with blank characters to a length of 8 bytes. If the new primary authorization ID is not different than the current primary authorization ID (established at IDENTIFY time or at a previous SIGNON invocation), DB2 invokes only the signon exit. If the value has changed, then DB2 establishes a new primary authorization ID and new SQL authorization ID and then invokes the signon exit.
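A C picture of that context data layout, with illustrative field names for the version number, server product name, ALET, ACEE address, and primary authorization ID fields described above, might look like the following sketch. It is drawn from the documented format only and is not a DB2-supplied mapping.

/* Context data stored with CTXSDTA and retrieved by DB2 with CTXRDTA
   during CONTEXT SIGNON. Field names are illustrative.                 */
typedef struct
{
  int          version;            /* set to 1                                          */
  char         server_name[8];     /* name of the server product that set the data     */
  unsigned int alet;               /* ALET value; DB2 does not reference this field     */
  unsigned int acee_address;       /* 4-byte ACEE address, or 0 if no ACEE is provided  */
  char         primary_authid[8];  /* primary authorization ID, blank padded to 8 bytes */
} ctxsdta_data;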
If you pass an ACEE address, then CONTEXT SIGNON uses the value in ACEEGRPN as the secondary authorization ID if the length of the group name (ACEEGRPL) is not 0. Generally, you issue a CONTEXT SIGNON call after an IDENTIFY call and before a CREATE THREAD call. You can also issue a CONTEXT SIGNON call if the application is at a point of consistency, and one of the following conditions is true: v The value of reuse in the CREATE THREAD call was RESET. v The value of reuse in the CREATE THREAD call was INITIAL, no held cursors are open, the package or plan is bound with KEEPDYNAMIC(NO), and all special registers are at their initial state. If there are open held cursors or the package or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is permitted only if the primary authorization ID has not changed. Table 152 shows a CONTEXT SIGNON call in each language.
Table 152. Examples of RRSAF CONTEXT SIGNON calls
Assembler: CALL DSNRLI,(CSGNONFN,CORRID,ACCTTKN,ACCTINT,CTXTKEY,RETCODE,REASCODE,USERID,APPLNAME,WSNAME)
C: fnret=dsnrli(&csgnonfn[0], &corrid[0], &accttkn[0], &acctint[0], &ctxtkey[0], &retcode, &reascode, &userid[0], &applname[0], &wsname[0]);
COBOL: CALL DSNRLI USING CSGNONFN CORRID ACCTTKN ACCTINT CTXTKEY RETCODE REASCODE USERID APPLNAME WSNAME.
Fortran: CALL DSNRLI(CSGNONFN,CORRID,ACCTTKN,ACCTINT,CTXTKEY,RETCODE,REASCODE,USERID,APPLNAME,WSNAME)
PL/I: CALL DSNRLI(CSGNONFN,CORRID,ACCTTKN,ACCTINT,CTXTKEY,RETCODE,REASCODE,USERID,APPLNAME,WSNAME);
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in your C, C++, and PL/I applications:
C: #pragma linkage(dsnrli, OS)
C++: extern "OS" { int DSNRLI( char * functn, ...); }
PL/I: DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
Parameters point to the following areas: function An 18-byte area containing SET_ID followed by 12 blanks. program-id An 80-byte area containing the caller-provided string to be passed to DB2. If program-id is less than 80 characters, you must pad it with blanks on the right to a length of 80 characters. retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify this parameter, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify this parameter, RRSAF places the reason code in register 0. If you specify this parameter, you must also specify retcode.
Usage
SET_ID establishes a new value for program-id that can be used to identify the end user. The calling program defines the contents of program-id. DB2 places the contents of program-id into IFCID 316 records, along with other statistics, so that you can identify which program is associated with a particular SQL statement. Table 153 shows a SET_ID call in each language.
Table 153. Examples of RRSAF SET_ID calls
Assembler: CALL DSNRLI,(SETIDFN,PROGID,RETCODE,REASCODE)
C: fnret=dsnrli(&setidfn[0], &progid[0], &retcode, &reascode);
COBOL: CALL DSNRLI USING SETIDFN PROGID RETCODE REASCODE.
Fortran: CALL DSNRLI(SETIDFN,PROGID,RETCODE,REASCODE)
PL/I: CALL DSNRLI(SETIDFN,PROGID,RETCODE,REASCODE);
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in your C, C++, and PL/I applications:
C: #pragma linkage(dsnrli, OS)
C++: extern "OS" { int DSNRLI( char * functn, ...); }
PL/I: DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
| | | | | | | | | | | | | | | | | | # # # # # # # # # # | # # # # # # # # # # Parameters point to the following areas: function An 18-byte area containing SET_CLIENT_ID followed by 5 blanks. accounting-token A 22-byte area in which you can put a value for a DB2 accounting token. This value is placed in the DB2 accounting and statistics trace records in the QWHCTOKEN field, which is mapped by DSNDQWHC DSECT. If accounting-token is less than 22 characters long, you must pad it on the right with blanks to a length of 22 characters. You can omit this parameter by specifying a value of 0 in the parameter list. You can also change the value of the DB2 accounting token with RRS SIGNON, AUTH SIGNON, or CONTEXT SIGNON. You can retrieve the DB2 accounting token with the CURRENT CLIENT_ACCTNG special register only if the DDF accounting string is not set. user A 16-byte area that contains the user ID of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 places this user ID in DISPLAY THREAD output and in DB2 accounting and statistics trace records. If user is less than 16 characters long, you must pad it on the right with blanks to a length of 16 characters. You can omit this parameter by specifying a value of 0 in the parameter list. You can also change the value of the client user ID with RRS SIGNON, AUTH SIGNON, or CONTEXT SIGNON. You can retrieve the client user ID with the CLIENT_USERID special register. appl An 32-byte area that contains the application or transaction name of the end users application. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 places the application name in DISPLAY THREAD output and in DB2 accounting and statistics trace records. If appl is less than 32 characters, you must pad it on the right with blanks to a length of 32 characters. You can omit this parameter by specifying a value of 0 in the parameter list. You can also change the value of the application name with RRS SIGNON, AUTH SIGNON, or CONTEXT SIGNON. You can retrieve the application name with the CLIENT_APPLNAME special register.
ws An 18-byte area that contains the workstation name of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 places this workstation name in DISPLAY THREAD output and in DB2 accounting and statistics trace records. If ws is less than 18 characters, you must pad it on the right with blanks to a length of 18 characters. You can omit this parameter by specifying a value of 0 in the parameter list. You can also change the value of the workstation name with RRS SIGNON, AUTH SIGNON, or CONTEXT SIGNON. You can retrieve the workstation name with the CLIENT_WRKSTNNAME special register. retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify this parameter, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify this parameter, RRSAF places the reason code in register 0. If you specify this parameter, you must also specify retcode. accounting-string A one-byte length field and a 255-byte area in which you can put a value for a DB2 accounting string. This value is placed in the DDF accounting trace records in the QMDASQLI field, which is mapped by DSNDQMDA DSECT. If accounting-string is less than 255 characters, you must pad it on the right with zeros to a length of 255 bytes. The entire 256 bytes is mapped by DSNDQMDA DSECT. This field is optional. If you specify this parameter, you must also specify retcode and reascode. If you do not specify this parameter, no accounting string is associated with the connection. You can also change the value of the accounting string with RRS SIGNON, AUTH SIGNON, or CONTEXT SIGNON. You can retrieve the DDF suffix portion of the accounting string with the CURRENT CLIENT_ACCTNG special register. The suffix portion of accounting-string can contain a maximum of 200 characters. The QMDASFLN field contains the accounting suffix length, and the QMDASUFX field contains the accounting suffix value. If the DDF accounting string is set, you cannot query the accounting token with the CURRENT CLIENT_ACCTNG special register.
Usage
SET_CLIENT_ID establishes new values that can be used to identify the end user. The calling program defines the contents of these parameters. DB2 places the parameter values in DISPLAY THREAD output and in DB2 accounting and statistics trace records. Table 154 on page 926 shows a SET_CLIENT_ID call in each language.
Table 154. Examples of RRSAF SET_CLIENT_ID calls
Assembler: CALL DSNRLI,(SECLIDFN,ACCT,USER,APPL,WS,RETCODE,REASCODE)
C: fnret=dsnrli(&seclidfn[0], &acct[0], &user[0], &appl[0], &ws[0], &retcode, &reascode);
COBOL: CALL DSNRLI USING SECLIDFN ACCT USER APPL WS RETCODE REASCODE.
Fortran: CALL DSNRLI(SECLIDFN,ACCT,USER,APPL,WS,RETCODE,REASCODE)
PL/I: CALL DSNRLI(SECLIDFN,ACCT,USER,APPL,WS,RETCODE,REASCODE);
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in your C, C++, and PL/I applications:
C: #pragma linkage(dsnrli, OS)
C++: extern "OS" { int DSNRLI( char * functn, ...); }
PL/I: DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
Parameters point to the following areas: function An 18-byte area containing CREATE THREAD followed by five blanks. plan An 8-byte DB2 plan name. If you provide a collection name instead of a plan name, specify the character ? in the first byte of this field. DB2 then allocates a special plan named ?RRSAF and uses the collection parameter. If you do not provide a collection name in the collection field, you must enter a valid plan name in this field. collection An 18-byte area in which you enter a collection name. When you provide a collection name and put the character ? in the plan field, DB2 allocates a plan named ?RRSAF and a package list that contains two entries: v This collection name v An entry that contains * for the location, collection name, and package name
If you provide a plan name in the plan field, DB2 ignores the value in this field. reuse An 8-byte area that controls the action DB2 takes if a SIGNON call is issued after a CREATE THREAD call. Specify either of these values in this field: v RESET - to release any held cursors and reinitialize the special registers v INITIAL - to disallow the SIGNON This parameter is required. If the 8-byte area does not contain either RESET or INITIAL, then the default value is INITIAL. retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify this parameter, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify this parameter, RRSAF places the reason code in register 0. If you specify this parameter, you must also specify retcode. pklistptr A 4-byte field that can contain a pointer to a user-supplied data area that contains a list of collection IDs. A collection ID is an SQL identifier of 1 to 128 letters, digits, or the underscore character that identifies a collection of packages. The length of the data area is a maximum of 2050 bytes. The data area contains a 2-byte length field, followed by up to 2048 bytes of collection ID entries, separated by commas. When you specify a pointer to a set of collection IDs (in the pklistptr parameter) and the character ? in the plan parameter, DB2 allocates a plan named ?RRSAF and a package list in the data area that pklistptr points to. If you also specify a value for the collection parameter, DB2 ignores that value. Each collection entry must be of the form collection-ID.*, *.collection-ID.*, or *.*.*. collection-ID and must follow the naming conventions for a collection ID, as specified in Chapter 1 of DB2 Command Reference. This parameter is optional. If you specify this parameter, you must also specify retcode and reascode. If you provide a plan name in the plan field, DB2 ignores the pklistptr value. Using a package list can have a negative impact on performance. For better performance, specify a short package list.
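The data area that pklistptr points to can be built in C along the lines of the sketch below. The structure name and helper are illustrative; the 2-byte length field followed by up to 2048 bytes of comma-separated collection entries comes from the description above, and the length is assumed here to count only the entry bytes that follow it.

#include <string.h>

/* Package list area for CREATE THREAD: a 2-byte length field followed by
   up to 2048 bytes of comma-separated collection entries.                 */
struct pklist_area
{
  short length;                    /* number of bytes used in entries       */
  char  entries[2048];             /* for example "COLLA.*,COLLB.*,*.*.*"   */
};

static void build_pklist(struct pklist_area *area, const char *list)
{
  size_t n = strlen(list);
  if (n > sizeof area->entries)
    n = sizeof area->entries;
  memcpy(area->entries, list, n);
  area->length = (short)n;
}

/* Pass '?' in the first byte of the plan name and the address of this area
   as the pklistptr parameter on the CREATE THREAD call.                    */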
Usage
CREATE THREAD allocates the DB2 resources required to issue SQL or IFI requests. If you specify a plan name, RRSAF allocates the named plan. If you specify ? in the first byte of the plan name and provide a collection name, DB2 allocates a special plan named ?RRSAF and a package list that contains the following entries: v The collection name v An entry that contains * for the location, collection ID, and package name
If you specify ? in the first byte of the plan name and specify pklistptr, DB2 allocates a special plan named ?RRSAF and a package list that contains the following entries: v The collection names that you specify in the data area to which pklistptr points v An entry that contains * for the location, collection ID, and package name The collection names are used to locate a package associated with the first SQL statement in the program. The entry that contains *.*.* lets the application access remote locations and access packages in collections other than the default collection that is specified at create thread time. The application can use the SQL statement SET CURRENT PACKAGESET to change the collection ID that DB2 uses to locate a package. When DB2 allocates a plan named ?RRSAF, DB2 checks authorization to execute the package in the same way as it checks authorization to execute a package from a requester other than DB2 UDB for z/OS. See Part 3 (Volume 1) of DB2 Administration Guide for more information about authorization checking for package execution. Table 155 shows a CREATE THREAD call in each language.
Table 155. Examples of RRSAF CREATE THREAD calls
Assembler: CALL DSNRLI,(CRTHRDFN,PLAN,COLLID,REUSE,RETCODE,REASCODE,PKLISTPTR)
C: fnret=dsnrli(&crthrdfn[0], &plan[0], &collid[0], &reuse[0], &retcode, &reascode, &pklistptr);
COBOL: CALL DSNRLI USING CRTHRDFN PLAN COLLID REUSE RETCODE REASCODE PKLSTPTR.
Fortran: CALL DSNRLI(CRTHRDFN,PLAN,COLLID,REUSE,RETCODE,REASCODE,PKLSTPTR)
PL/I: CALL DSNRLI(CRTHRDFN,PLAN,COLLID,REUSE,RETCODE,REASCODE,PKLSTPTR);
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in your C, C++, and PL/I applications:
C: #pragma linkage(dsnrli, OS)
C++: extern "OS" { int DSNRLI( char * functn, ...); }
PL/I: DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
Parameters point to the following areas: function An 18-byte area containing TERMINATE THREAD followed by two blanks. retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify this parameter, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify this parameter, RRSAF places the reason code in register 0. If you specify this parameter, you must also specify retcode.
Usage
TERMINATE THREAD deallocates the DB2 resources associated with a plan. Those resources were previously allocated through CREATE THREAD. You can then use CREATE THREAD to allocate another plan using the same connection. If you issue TERMINATE THREAD, and the application is not at a point of consistency, RRSAF returns reason code X'00C12211'. Table 156 shows a TERMINATE THREAD call in each language.
Table 156. Examples of RRSAF TERMINATE THREAD calls
Assembler: CALL DSNRLI,(TRMTHDFN,RETCODE,REASCODE)
C: fnret=dsnrli(&trmthdfn[0], &retcode, &reascode);
COBOL: CALL DSNRLI USING TRMTHDFN RETCODE REASCODE.
Fortran: CALL DSNRLI(TRMTHDFN,RETCODE,REASCODE)
PL/I: CALL DSNRLI(TRMTHDFN,RETCODE,REASCODE);
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in your C, C++, and PL/I applications:
C: #pragma linkage(dsnrli, OS)
C++: extern "OS" { int DSNRLI( char * functn, ...); }
PL/I: DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
Parameters point to the following areas: function An 18-byte area containing TERMINATE IDENTIFY. retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify this parameter, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify this parameter, RRSAF places the reason code in register 0. If you specify this parameter, you must also specify retcode.
Usage
TERMINATE IDENTIFY removes the calling task's connection to DB2. If no other task in the address space has an active connection to DB2, DB2 also deletes the control block structures created for the address space and removes the cross-memory authorization.

If the application is not at a point of consistency when you issue TERMINATE IDENTIFY, RRSAF returns reason code X'00C12211'. If the application allocated a plan, and you issue TERMINATE IDENTIFY without first issuing TERMINATE THREAD, DB2 deallocates the plan before terminating the connection.

Issuing TERMINATE IDENTIFY is optional. If you do not issue it, DB2 performs the same functions when the task terminates. If DB2 terminates, the application must issue TERMINATE IDENTIFY to reset the RRSAF control blocks. This ensures that future connection requests from the task are successful when DB2 restarts.

Table 157 on page 931 shows a TERMINATE IDENTIFY call in each language.
Table 157. Examples of RRSAF TERMINATE IDENTIFY calls
Assembler: CALL DSNRLI,(TMIDFYFN,RETCODE,REASCODE)
C:         fnret=dsnrli(&tmidfyfn[0], &retcode, &reascode);
COBOL:     CALL 'DSNRLI' USING TMIDFYFN RETCODE REASCODE.
Fortran:   CALL DSNRLI(TMIDFYFN,RETCODE,REASCODE)
PL/I:      CALL DSNRLI(TMIDFYFN,RETCODE,REASCODE);
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in your C, C++, and PL/I applications:
C:    #pragma linkage(dsnrli, OS)
C++:  extern "OS" { int DSNRLI( char * functn, ...); }
PL/I: DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
Parameters point to the following areas:

function
    An 18-byte area containing the word TRANSLATE followed by nine blanks.
sqlca
    The program's SQL communication area (SQLCA).
retcode
    A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify this parameter, RRSAF places the return code in register 15 and the reason code in register 0.
reascode
    A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify this parameter, RRSAF places the reason code in register 0. If you specify this parameter, you must also specify retcode.
Usage
Use TRANSLATE to get a corresponding SQL error code and message text for the DB2 error reason codes that RRSAF returns in register 0 following a CREATE THREAD service request. DB2 places this information in the SQLCODE and SQLSTATE host variables or related fields of the SQLCA. The TRANSLATE function translates codes that begin with X'00F3', but it does not translate RRSAF reason codes that begin with X'00C1'. If you receive error reason code X'00F30040' (resource unavailable) after an OPEN request, TRANSLATE returns the name of the unavailable database object in the last 44 characters of field SQLERRM. If the DB2 TRANSLATE function does not recognize the error reason code, it returns SQLCODE -924 (SQLSTATE 58006) and places a printable copy of the original DB2 function code and the return and error reason codes in the SQLERRM field. The contents of registers 0 and 15 do not change, unless TRANSLATE fails. In this case, register 0 is set to X'00C12204', and register 15 is set to 200. Table 158 shows a TRANSLATE call in each language.
Table 158. Examples of RRSAF TRANSLATE calls
Assembler: CALL DSNRLI,(XLATFN,SQLCA,RETCODE,REASCODE)
C:         fnret=dsnrli(&xlatfn[0], &sqlca, &retcode, &reascode);
COBOL:     CALL 'DSNRLI' USING XLATFN SQLCA RETCODE REASCODE.
PL/I:      CALL DSNRLI(XLATFN,SQLCA,RETCODE,REASCODE);
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in your C, C++, and PL/I applications:
C:    #pragma linkage(dsnrli, OS)
C++:  extern "OS" { int DSNRLI( char * functn, ...); }
PL/I: DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
For a context switch operation to associate a task with a DB2 thread, the DB2 thread must have previously performed an identify operation. Therefore, before the thread for user B can be associated with task 1, task 1 must have performed an identify operation.
v Task 2 performs two context switch operations to:
  - Disassociate the thread for user D from task 2.
  - Associate the thread for user A with task 2.
Task 1:
  CTXBEGC (create context a)
  CTXSWCH(a,0)
  IDENTIFY
  SIGNON user A
  CREATE THREAD (plan A)
  SQL
  ...
  CTXSWCH(0,a)
  CTXBEGC (create context c)
  CTXSWCH(c,0)
  IDENTIFY
  SIGNON user C
  CREATE THREAD (plan C)
  SQL
  ...
  CTXSWCH(b,c)
  SQL (plan B)
  ...

Task 2:
  CTXBEGC (create context b)
  CTXSWCH(b,0)
  IDENTIFY
  SIGNON user B
  CREATE THREAD (plan B)
  SQL
  ...
  CTXSWCH(0,b)
  CTXBEGC (create context d)
  CTXSWCH(d,0)
  IDENTIFY
  SIGNON user D
  CREATE THREAD (plan D)
  SQL
  ...
  CTXSWCH(0,d)
  ...
  CTXSWCH(a,0)
  SQL (plan A)
The example DSNHLI entry point (assembler code not reproduced here) chains the save areas, puts the save area address in R13, gets the address of the real DSNHLI routine, and branches to DSNRLI to perform the SQL call. Because DSNRLI is in 31-bit mode, the routine uses BASSM to assure that the addressing mode is preserved. On return, it restores R13 (the caller's save area address) and R14 (the return address), and restores R1 through R12 but not R0 and R15, which contain the return and reason codes.
Figure 249 shows declarations for some of the variables that are used in Figure 248.
****************** VARIABLES SET BY APPLICATION ***********************
LIRLI    DS    F                  DSNRLI entry point address
LISQL    DS    F                  DSNHLIR entry point address
SSNM     DS    CL4                DB2 subsystem name for IDENTIFY
CORRID   DS    CL12               Correlation ID for SIGNON
ACCTTKN  DS    CL22               Accounting token for SIGNON
ACCTINT  DS    CL6                Accounting interval for SIGNON
PLAN     DS    CL8                DB2 plan name for CREATE THREAD
COLLID   DS    CL18               Collection ID for CREATE THREAD. If
*                                 PLAN contains a plan name, not used.
REUSE    DS    CL8                Controls SIGNON after CREATE THREAD
CONTROL  DS    CL8                Action that application takes based
*                                 on return code from RRSAF
****************** VARIABLES SET BY DB2 *******************************
STARTECB DS    F                  DB2 startup ECB
TERMECB  DS    F                  DB2 termination ECB
EIBPTR   DS    F                  Address of environment info block
RIBPTR   DS    F                  Address of release info block
****************************** CONSTANTS ******************************
CONTINUE DC    CL8'CONTINUE'      CONTROL value: Everything OK
IDFYFN   DC    CL18'IDENTIFY'     Name of RRSAF service
SGNONFN  DC    CL18'SIGNON'       Name of RRSAF service
CRTHRDFN DC    CL18'CREATE THREAD' Name of RRSAF service
TRMTHDFN DC    CL18'TERMINATE THREAD' Name of RRSAF service
TMIDFYFN DC    CL18'TERMINATE IDENTIFY' Name of RRSAF service
****************************** SQLCA and RIB **************************
         EXEC SQL INCLUDE SQLCA
         DSNDRIB                  Map the DB2 Release Information Block
******************* Parameter list for RRSAF calls ********************
RRSAFCLL CALL  ,(*,*,*,*,*,*,*,*),VL,MF=L
Figure 249. Declarations for variables used in the RRSAF connection routine
In addition, you can start and stop the CICS attachment facility from within an application program by using the system programming interface SET DB2CONN. For more information, see the CICS Transaction Server for z/OS System Programming Reference.
In this example, the INQUIRE EXITPROGRAM command tests whether the resource manager for SQL, DSNCSQL, is up and running. CICS returns the results in the EIBRESP field of the EXEC interface block (EIB) and in the field whose name is the argument of the CONNECTST parameter (in this case, STST). If the EIBRESP value indicates that the command completed normally and the STST value indicates that the resource manager is available, it is safe to execute SQL statements. For more information about the INQUIRE EXITPROGRAM command, see CICS Transaction Server for z/OS System Programming Reference.
Attention: The storm drain effect is a condition that occurs when a system continues to receive work, even though that system is down. When both of the following conditions are true, the storm drain effect can occur:
v The CICS attachment facility is down.
v You are using INQUIRE EXITPROGRAM to avoid AEY9 abends.
For more information on the storm drain effect and how to avoid it, see Chapter 2 of DB2 Data Sharing: Planning and Administration.

If the CICS attachment facility is started and you are using standby mode, you do not need to test whether the CICS attachment facility is up before executing SQL. When an SQL statement is executed while the CICS attachment facility is in standby mode, the attachment issues SQLCODE -923 with a reason code that indicates that DB2 is not available. See CICS DB2 Guide for information about the STANDBYMODE and CONNECTERROR parameters, and DB2 Codes for an explanation of SQLCODE -923.
WebSphere MQ messages
WebSphere MQ uses messages to pass information between applications. Messages consist of the following parts:
v The message attributes, which identify the message and its properties.
v The message data, which is the application data that is carried in the message.
When you send a message by using the AMI or the MQI, you must specify the following three components:

message data
    Defines what is sent from one program to another.
service
    Defines where the message is going to or coming from. The parameters for managing a queue are defined in the service, which is typically defined by a system administrator. The complexity of the parameters in the service is hidden from the application program.
policy
    Defines how the message is handled. Policies control such items as:
    v The attributes of the message, for example, the priority.
    v Options for send and receive operations, for example, whether an operation is part of a unit of work.

The default service and policy are set as part of defining the WebSphere MQ configuration for a particular installation of DB2. (This action is typically performed by a system administrator.) DB2 provides the default service DB2.DEFAULT.SERVICE and the default policy DB2.DEFAULT.POLICY. How services and policies are stored and managed depends on whether you are using the AMI or the MQI.
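For example, when the optional service and policy parameters are omitted, the MQ functions use DB2.DEFAULT.SERVICE and DB2.DEFAULT.POLICY. The following statement is a minimal sketch that assumes the MQI-based MQSEND function in the DB2MQ schema and a working default configuration:

SELECT DB2MQ.MQSEND ('A test message')
  FROM SYSIBM.SYSDUMMY1;
COMMIT;   -- commit if the policy runs the send inside the DB2 unit of work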
The MQI-based DB2 MQ functions use the policies that are defined in the DB2 table SYSIBM.MQPOLICY_TABLE. This table is user-managed and is typically created and maintained by a system administrator. This table contains a row for each defined policy, including your customized policies and the default policy that is provided by DB2. The application program does not need to know the details of the defined policies. When an application program calls an MQI-based DB2 MQ function, the program selects a policy from SYSIBM.MQPOLICY_TABLE by specifying it as a parameter.
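For example, an application might select a specific service and policy by name when it calls an MQI-based function. This is a sketch only; the names MY.SERVICE and MY.POLICY are hypothetical and must correspond to rows that your administrator has added to SYSIBM.MQSERVICE_TABLE and SYSIBM.MQPOLICY_TABLE:

SELECT DB2MQ.MQSEND ('MY.SERVICE', 'MY.POLICY', 'Order received')
  FROM SYSIBM.SYSDUMMY1;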
The AMI-based DB2 MQ functions use the policies that are defined in AMI configuration files, which are in XML format. These files are typically created and maintained by a system administrator. These files contain all of the defined policies, including your customized policies and any default policies, such as the one that DB2 provides. The application program does not need to know the details of the defined policies. When an application program calls an AMI-based DB2 MQ function, the program selects a policy from the AMI configuration file by specifying it as a parameter.
Table 160. DB2 MQ scalar functions

MQPUBLISH (publisher-service, service-policy, msg-data, topic-list, correlation-id)
    MQPUBLISH publishes a message, as specified in the msg-data variable, to the WebSphere MQ publisher that is specified in the publisher-service variable. It uses the quality of service policy that is specified in the service-policy variable. The topic-list variable specifies a list of topics for the message. The optional correlation-id variable specifies the correlation id that is to be associated with this message. The return value is 1 if successful or 0 if not successful. Restriction: MQPUBLISH uses the AMI only. A version of MQPUBLISH that uses the MQI is not available.

MQREAD (receive-service, service-policy)
    MQREAD returns a message in a VARCHAR variable from the MQ location specified by receive-service, using the policy defined in service-policy. This operation does not remove the message from the head of the queue but instead returns it. If no messages are available to be returned, a null value is returned.

MQREADCLOB (receive-service, service-policy)
    MQREADCLOB returns a message in a CLOB variable from the MQ location specified by receive-service, using the policy defined in service-policy. This operation does not remove the message from the head of the queue but instead returns it. If no messages are available to be returned, a null value is returned.

MQRECEIVE (receive-service, service-policy, correlation-id)
    MQRECEIVE returns a message in a VARCHAR variable from the MQ location specified by receive-service, using the policy defined in service-policy. This operation removes the message from the queue. If correlation-id is specified, the first message with a matching correlation identifier is returned; if correlation-id is not specified, the message at the head of the queue is returned. If no messages are available to be returned, a null value is returned.

MQRECEIVECLOB (receive-service, service-policy, correlation-id)
    MQRECEIVECLOB returns a message in a CLOB variable from the MQ location specified by receive-service, using the policy defined in service-policy. This operation removes the message from the queue. If correlation-id is specified, the first message with a matching correlation identifier is returned; if correlation-id is not specified, the message at the head of the queue is returned. If no messages are available to be returned, a null value is returned.
MQSEND (send-service, service-policy, msg-data, correlation-id)
    MQSEND sends the data in a VARCHAR or CLOB variable msg-data to the MQ location specified by send-service, using the policy defined in service-policy. An optional user-defined message correlation identifier can be specified by correlation-id. The return value is 1 if successful or 0 if not successful.

MQSUBSCRIBE (subscriber-service, service-policy, topic-list)
    MQSUBSCRIBE registers interest in WebSphere MQ messages that are published to the list of topics that are specified in the topic-list variable. The subscriber-service variable specifies a logical destination for messages that match the specified list of topics. Messages that match each topic are placed on the queue at the specified destination, using the policy specified in the service-policy variable. These messages can be read or received by issuing a subsequent call to MQREAD, MQREADALL, MQREADCLOB, MQREADALLCLOB, MQRECEIVE, MQRECEIVEALL, MQRECEIVECLOB, or MQRECEIVEALLCLOB. The return value is 1 if successful or 0 if not successful. Restriction: MQSUBSCRIBE uses the AMI only. A version of MQSUBSCRIBE that uses the MQI is not available.

MQUNSUBSCRIBE (subscriber-service, service-policy, topic-list)
    MQUNSUBSCRIBE unregisters previously specified interest in WebSphere MQ messages that are published to the list of topics that are specified in the topic-list variable. The subscriber-service, service-policy, and topic-list variables specify which subscription is to be cancelled. The return value is 1 if successful or 0 if not successful. Restriction: MQUNSUBSCRIBE uses the AMI only. A version of MQUNSUBSCRIBE that uses the MQI is not available.
Notes:
1. You can send or receive messages in VARCHAR variables or CLOB variables. The maximum length for a message in a VARCHAR variable is 32 KB. The maximum length for a message in a CLOB variable is 2 MB.
2. Restriction: The versions of these MQ functions that are in the DB2MQ1C, DB2MQ1N, DB2MQ2C, and DB2MQ2N schemas are deprecated. (Those functions use the AMI.) Instead, use the versions of these functions in the DB2MQ schema. (Those functions use the MQI.) The exceptions are MQPUBLISH, MQSUBSCRIBE, and MQUNSUBSCRIBE. Although the AMI-based versions of these functions are deprecated, a version of these functions does not exist in the DB2MQ schema.

The following table describes the MQ table functions that DB2 can use.
Table 161. DB2 MQ table functions

MQREADALL (receive-service, service-policy, num-rows)
    MQREADALL returns a table that contains the messages and message metadata in VARCHAR variables from the MQ location specified by receive-service, using the policy defined in service-policy. This operation does not remove the messages from the queue. If num-rows is specified, a maximum of num-rows messages is returned; if num-rows is not specified, all available messages are returned.

MQREADALLCLOB (receive-service, service-policy, num-rows)
    MQREADALLCLOB returns a table that contains the messages and message metadata in CLOB variables from the MQ location specified by receive-service, using the policy defined in service-policy. This operation does not remove the messages from the queue. If num-rows is specified, a maximum of num-rows messages is returned; if num-rows is not specified, all available messages are returned.

MQRECEIVEALL (receive-service, service-policy, correlation-id, num-rows)
    MQRECEIVEALL returns a table that contains the messages and message metadata in VARCHAR variables from the MQ location specified by receive-service, using the policy defined in service-policy. This operation removes the messages from the queue. If correlation-id is specified, only those messages with a matching correlation identifier are returned; if correlation-id is not specified, all available messages are returned. If num-rows is specified, a maximum of num-rows messages is returned; if num-rows is not specified, all available messages are returned.

MQRECEIVEALLCLOB (receive-service, service-policy, correlation-id, num-rows)
    MQRECEIVEALLCLOB returns a table that contains the messages and message metadata in CLOB variables from the MQ location specified by receive-service, using the policy defined in service-policy. This operation removes the messages from the queue. If correlation-id is specified, only those messages with a matching correlation identifier are returned; if correlation-id is not specified, all available messages are returned. If num-rows is specified, a maximum of num-rows messages is returned; if num-rows is not specified, all available messages are returned.
Notes: 1. You can send or receive messages in VARCHAR variables or CLOB variables. The maximum length for a message in a VARCHAR variable is 32 KB. The maximum length for a message in a CLOB variable is 2 MB. 2. The first column of the result table of a DB2 MQ table function contains the message. For a description of the other columns, see DB2 SQL Reference. 3. Restriction: The versions of these MQ functions that are in the DB2MQ1C, DB2MQ1N, DB2MQ2C, and DB2MQ2N schemas are deprecated. (Those functions use the AMI.) Instead use the version of these functions in the DB2MQ schema. (Those functions use the MQI.)
The following table describes the MQ functions that DB2 can use to work with XML data.
You can use the WebSphere MQ XML stored procedures to retrieve an XML document from a message queue, decompose it into untagged data, and store the data in DB2 UDB tables. You can also compose an XML document from DB2 data and send the document to an MQSeries(R) message queue. The following table shows WebSphere MQ XML stored procedures for decomposition. Restriction: All of these DB2 MQ XML decomposition stored procedures have been deprecated.
Table 163. DB2 MQ XML decomposition stored procedures

DXXMQINSERT and DXXMQINSERTALL
    Decompose incoming XML documents from a message queue and store the data in new or existing database tables. The DXXMQINSERT and DXXMQINSERTALL stored procedures require an enabled XML collection name as input.

DXXMQINSERTCLOB and DXXMQINSERTALLCLOB
    Decompose incoming XML documents from a message queue and store the data in new or existing database tables. The DXXMQINSERTCLOB and DXXMQINSERTALLCLOB stored procedures require an enabled XML collection name as input.

DXXMQSHRED and DXXMQSHREDALL
    Shred incoming XML documents from a message queue and store the data in new or existing database tables. The DXXMQSHRED and DXXMQSHREDALL stored procedures take a DAD file as input; they do not require an enabled XML collection name as input.

DXXMQSHREDCLOB and DXXMQSHREDALLCLOB
    Shred incoming XML documents from a message queue and store the data in new or existing database tables. The DXXMQSHREDCLOB and DXXMQSHREDALLCLOB stored procedures take a DAD file as input; they do not require an enabled XML collection name as input.
The following table shows WebSphere MQ XML stored procedures for composition. Restriction: All of these DB2 MQ XML composition stored procedures have been deprecated.
Table 164. DB2 MQ XML composition stored procedures

DXXMQGEN and DXXMQGENALL
    Generate XML documents from existing database tables and send the generated XML documents to a message queue. The DXXMQGEN and DXXMQGENALL stored procedures take a DAD file as input; they do not require an enabled XML collection name as input.

DXXMQRETRIEVE and DXXMQRETRIEVECLOB
    Generate XML documents from existing database tables and send the generated XML documents to a message queue. The DXXMQRETRIEVE and DXXMQRETRIEVECLOB stored procedures require an enabled XML collection name as input.

See Appendix J, DB2-supplied stored procedures, on page 1127 for more information about WebSphere MQ stored procedures for composing and decomposing XML data.
that an application error has occurred. The application issues a ROLLBACK after the error occurs, but the message is still delivered to the queue that contains the error messages. In a single-phase commit environment, WebSphere MQ controls its own queue operations. A DB2 COMMIT or ROLLBACK does not affect when or if messages are added to or deleted from an MQ queue.
DB2 MQ tables
The DB2 MQ tables contain service and policy definitions that are used by the MQI-based DB2 MQ functions. You must populate the DB2 MQ tables before you can use these MQI-based functions.

The DB2 MQ tables are SYSIBM.MQSERVICE_TABLE and SYSIBM.MQPOLICY_TABLE. These tables are user-managed. You need to create them during the installation or migration process. Sample job DSNTIJMQ creates these tables with one default row in each table.

If you previously used the AMI-based DB2 MQ functions, you used AMI configuration files instead of these tables. To use the MQI-based DB2 MQ functions, you need to move the data from those configuration files to the DB2 tables SYSIBM.MQSERVICE_TABLE and SYSIBM.MQPOLICY_TABLE.

The following table describes the columns for SYSIBM.MQSERVICE_TABLE.
SERVICENAME
    This column contains the service name, which is an optional input parameter of the MQ functions. This column is the primary key for the SYSIBM.MQSERVICE_TABLE table.
QUEUEMANAGER
    This column contains the name of the queue manager where the MQ functions are to establish a connection.
INPUTQUEUE
    This column contains the name of the queue from which the MQ functions are to send and retrieve messages.
CODEDCHARSETID
    This column contains the character set identifier for character data in the messages that are sent and received by the MQ functions. This column corresponds to the CodedCharSetId field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the CodedCharSetId field. The default value for this column is 0, which sets the CodedCharSetId field of the MQMD to the value MQCCSI_Q_MGR.
ENCODING
    This column contains the encoding value for the numeric data in the messages that are sent and received by the MQ functions. This column corresponds to the Encoding field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Encoding field. The default value for this column is 0, which sets the Encoding field in the MQMD to the value MQENC_NATIVE.
DESC_SHORT
    This column contains the short description of the service.
DESC_LONG
    This column contains the detailed description of the service.
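For example, to check the default service definition that sample job DSNTIJMQ creates, you might issue a query like the following one. This is a simple sketch that uses only the columns described above; DB2.DEFAULT.SERVICE is the default service name that DB2 provides:

SELECT SERVICENAME, QUEUEMANAGER, INPUTQUEUE
  FROM SYSIBM.MQSERVICE_TABLE
  WHERE SERVICENAME = 'DB2.DEFAULT.SERVICE';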
Table 165. SYSIBM.MQPOLICY_TABLE column descriptions (continued)

SEND_PERSISTENCE
    This column indicates whether the message persists despite any system failures or instances of restarting the queue manager. This column corresponds to the Persistence field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Persistence field. This column can have the following values:
    Q   Sets the Persistence field in the MQMD to the value MQPER_PERSISTENCE_AS_Q_DEF. This value is the default.
    Y   Sets the Persistence field in the MQMD to the value MQPER_PERSISTENT.
    N   Sets the Persistence field in the MQMD to the value MQPER_NOT_PERSISTENT.

SEND_EXPIRY
    This column contains the message expiration time, in tenths of a second. This column corresponds to the Expiry field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Expiry field. The default value is -1, which sets the Expiry field to the value MQEI_UNLIMITED.

SEND_RETRY_COUNT
    This column contains the number of times that the MQ function is to try to send a message if the procedure fails. The default value is 5.

SEND_RETRY_INTERVAL
    This column contains the interval, in milliseconds, between each attempt to send a message. The default value is 1000.
SEND_NEW_CORRELID
    This column specifies how the correlation identifier is to be set if a correlation identifier is not passed as an input parameter in the MQ function. The correlation identifier is set in the CorrelId field in the message descriptor structure (MQMD). This column can have one of the following values:
    N   Sets the CorrelId field in the MQMD to binary zeros. This value is the default.
    Y   Specifies that the queue manager is to generate a new correlation identifier and set the CorrelId field in the MQMD to that value. This value is equivalent to setting the MQPMO_NEW_CORREL_ID option in the Options field in the put message options structure (MQPMO).

SEND_RESPONSE_MSGID
    This column specifies how the MsgId field in the message descriptor structure (MQMD) is to be set for report and reply messages. This column corresponds to the Report field in the MQMD. MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    N   Sets the MQRO_NEW_MSG_ID option in the Report field in the MQMD. This value is the default.
    P   Sets the MQRO_PASS_MSG_ID option in the Report field in the MQMD.

SEND_RESPONSE_CORRELID
    This column specifies how the CorrelID field in the message descriptor structure (MQMD) is to be set for report and reply messages. This column corresponds to the Report field in the MQMD. MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    C   Sets the MQRO_COPY_MSG_ID_TO_CORREL_ID option in the Report field in the MQMD. This value is the default.
    P   Sets the MQRO_PASS_CORREL_ID option in the Report field in the MQMD.
SEND_EXCEPTION_ACTION
    This column specifies what to do with the original message when it cannot be delivered to the destination queue. This column corresponds to the Report field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    Q   Sets the MQRO_DEAD_LETTER_Q option in the Report field in the MQMD. This value is the default.
    D   Sets the MQRO_DISCARD_MSG option in the Report field in the MQMD.
    P   Sets the MQRO_PASS_DISCARD_AND_EXPIRY option in the Report field in the MQMD.

SEND_REPORT_EXCEPTION
    This column specifies whether an exception report message is to be generated when a message cannot be delivered to the specified destination queue and, if so, what that report message should contain. This column corresponds to the Report field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    N   Specifies that an exception report message is not to be generated. No options in the Report field are set. This value is the default.
    E   Sets the MQRO_EXCEPTION option in the Report field in the MQMD.
    F   Sets the MQRO_EXCEPTION_WITH_FULL_DATA option in the Report field in the MQMD.
SEND_REPORT_COA
    This column specifies whether the queue manager is to send a confirm-on-arrival (COA) report message when the message is placed in the destination queue, and if so, what that COA message is to contain. This column corresponds to the Report field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    N   Specifies that a COA message is not to be sent. No options in the Report field are set. This value is the default.
    C   Sets the MQRO_COA option in the Report field in the MQMD.
    D   Sets the MQRO_COA_WITH_DATA option in the Report field in the MQMD.
    F   Sets the MQRO_COA_WITH_FULL_DATA option in the Report field in the MQMD.

SEND_REPORT_COD
    This column specifies whether the queue manager is to send a confirm-on-delivery (COD) report message when an application retrieves and deletes a message from the destination queue, and if so, what that COD message is to contain. This column corresponds to the Report field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    N   Specifies that a COD message is not to be sent. No options in the Report field are set. This value is the default.
    C   Sets the MQRO_COD option in the Report field in the MQMD.
    D   Sets the MQRO_COD_WITH_DATA option in the Report field in the MQMD.
    F   Sets the MQRO_COD_WITH_FULL_DATA option in the Report field in the MQMD.
SEND_REPORT_EXPIRY
    This column specifies whether the queue manager is to send an expiration report message if a message is discarded before it is delivered to an application, and if so, what that message is to contain. This column corresponds to the Report field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    N   Specifies that an expiration report message is not to be sent. No options in the Report field are set. This value is the default.
    C   Sets the MQRO_EXPIRATION option in the Report field in the MQMD.
    D   Sets the MQRO_EXPIRATION_WITH_DATA option in the Report field in the MQMD.
    F   Sets the MQRO_EXPIRATION_WITH_FULL_DATA option in the Report field in the MQMD.

SEND_REPORT_ACTION
    This column specifies whether the receiving application sends a positive action notification (PAN), a negative action notification (NAN), or both. This column corresponds to the Report field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    N   Specifies that neither notification is to be sent. No options in the Report field are set. This value is the default.
    P   Sets the MQRO_PAN option in the Report field in the MQMD.
    T   Sets the MQRO_NAN option in the Report field in the MQMD.
    B   Sets both the MQRO_PAN and MQRO_NAN options in the Report field in the MQMD.
SEND_MSG_TYPE
    This column contains the type of message. This column corresponds to the MsgType field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the MsgType field. This column can have one of the following values:
    DTG   Sets the MsgType field in the MQMD to MQMT_DATAGRAM. This value is the default.
    REQ   Sets the MsgType field in the MQMD to MQMT_REQUEST.
    RLY   Sets the MsgType field in the MQMD to MQMT_REPLY.
    RPT   Sets the MsgType field in the MQMD to MQMT_REPORT.

REPLY_TO_Q
    This column contains the name of the message queue to which the application that issued the MQGET call is to send reply and report messages. This column corresponds to the ReplyToQ field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the ReplyToQ field. The default value for this column is SAME AS INPUT_Q, which sets the name to the queue name that is defined in the service that was used for sending the message. If no service was specified, the name is set to DB2MQ_DEFAULT_Q, which is the name of the input queue for the default service.

REPLY_TO_QMGR
    This column contains the name of the queue manager to which the reply and report messages are to be sent. This column corresponds to the ReplyToQMgr field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the ReplyToQMgr field. The default value for this column is SAME AS INPUT_QMGR, which sets the name to the queue manager name that is defined in the service that was used for sending the message. If no service was specified, the name is set to the name of the queue manager for the default service.
RCV_WAIT_INTERVAL
    This column contains the time, in milliseconds, that DB2 is to wait for messages to arrive in the queue. This column corresponds to the WaitInterval field in the get message options structure (MQGMO). MQ functions use the value in this column to set the WaitInterval field. The default is 10.

RCV_CONVERT
    This column indicates whether to convert the application data in the message to conform to the CodedCharSetId and Encoding values that are defined for the queue manager. This column corresponds to the Options field in the get message options structure (MQGMO). MQ functions use the value in this column to set the Options field. This column can have one of the following values:
    Y   Sets the MQGMO_CONVERT option in the Options field in the MQGMO. This value is the default.
    N   Specifies that no data is to be converted.

RCV_ACCEPT_TRUNC_MSG
    This column specifies the behavior of the MQ function when oversized messages are retrieved. This column corresponds to the Options field in the get message options structure (MQGMO). MQ functions use the value in this column to set the Options field. This column can have one of the following values:
    Y   Sets the MQGMO_ACCEPT_TRUNCATED_MSG option in the Options field in the MQGMO. This value is the default.
    N   Specifies that no messages are to be truncated. If the message is too large to fit in the buffer, the MQ function terminates with an error.
    Recommendation: Set this column to Y. In this case, if the message buffer is too small to hold the complete message, the MQ function can fill the buffer with as much of the message as the buffer can hold.
RCV_OPEN_SHARED
    This column specifies the input queue mode when messages are retrieved. This column corresponds to the Options parameter for an MQOPEN call. MQ functions use the value in this column to set the Options parameter. This column can have one of the following values:
    S   Sets the MQOO_INPUT_SHARED option. This value is the default.
    E   Sets the MQOO_INPUT_EXCLUSIVE option.
    D   Sets the MQOO_INPUT_AS_Q_DEF option.

SYNCPOINT
    This column indicates whether the MQ function is to operate within the protocol for a normal unit of work. This column can have one of the following values:
    Y   Specifies that the MQ function is to operate within the protocol for a normal unit of work. Use this value for two-phase commit environments. This value is the default.
    N   Specifies that the MQ function is to operate outside the protocol for a normal unit of work. Use this value for one-phase commit environments.

DESC_SHORT
    This column contains the short description of the policy.
DESC_LONG
    This column contains the long description of the policy.
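For example, to see how the policies that are currently defined handle persistence, expiration, send retries, and unit-of-work behavior, you might issue a query like the following one (a minimal sketch that uses only columns described in Table 165):

SELECT SEND_PERSISTENCE, SEND_EXPIRY, SEND_RETRY_COUNT, SEND_RETRY_INTERVAL, SYNCPOINT
  FROM SYSIBM.MQPOLICY_TABLE;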
a. Run installation job DSNTIJMQ. This job binds the new MQI-based DB2 MQ functions and creates the tables SYSIBM.MQSERVICE_TABLE and SYSIBM.MQPOLICY_TABLE.
b. Convert the contents of the AMI configuration files to rows in the tables SYSIBM.MQSERVICE_TABLE and SYSIBM.MQPOLICY_TABLE.
2. If the application contains unqualified references to DB2 MQ functions, set the CURRENT PATH special register to the schema name DB2MQ, as shown in the example after this list.
3. If the application contains qualified references to DB2 MQ functions, change the schema names in those references from the old names (DB2MQ1N, DB2MQ2N, DB2MQ1C, and DB2MQ2C) to DB2MQ.
4. Change the size of any host variables to accommodate the following larger message sizes:
v DB2 MQ functions for VARCHAR data can have a maximum message size of 32 KB.
v DB2 MQ functions for CLOB data can have a maximum message size of 2 MB.
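One way to perform step 2 is to issue a SET CURRENT PATH statement (or use the PATH bind option) before the application calls any unqualified DB2 MQ functions. The following statement is a sketch only; adjust the path if your application also depends on functions in other schemas:

SET CURRENT PATH = 'DB2MQ';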
Basic messaging
The most basic form of messaging with the DB2 MQ functions occurs when all database applications connect to the same DB2 database server. Clients can be local to the database server or distributed in a network environment.

In a simple scenario, client A invokes the MQSEND function to send a user-defined string to the location that is defined by the default service. DB2 executes the MQ functions that perform this operation on the database server. At some later time, client B invokes the MQRECEIVE function to remove the message at the head of the queue that is defined by the default service, and return it to the client. DB2 executes the MQ functions that perform this operation on the database server.

Database clients can use simple messaging in a number of ways:
v Data collection
  Information is received in the form of messages from one or more sources. An information source can be any application. The data is received from queues and stored in database tables for additional processing.
v Workload distribution
  Work requests are posted to a queue that is shared by multiple instances of the same application. When an application instance is ready to perform some work, it receives a message that contains a work request from the head of the queue. Multiple instances of the application can share the workload that is represented by a single queue of pooled requests.
v Application signaling
In a situation where several processes collaborate, messages are often used to coordinate their efforts. These messages might contain commands or requests for work that is to be performed. For more information about this technique, see Application-to-application connectivity on page 962.

The following scenario extends basic messaging to incorporate remote messaging. Assume that machine A sends a message to machine B.
1. The DB2 client executes an MQSEND function call, specifying a target service that has been defined to be a remote queue on machine B.
2. The MQ functions perform the work to send the message. The WebSphere MQ server on machine A accepts the message and guarantees that it will deliver it to the destination that is defined by the service and the current MQ configuration of machine A. The server determines that the destination is a queue on machine B. The server then attempts to deliver the message to the WebSphere MQ server on machine B, retrying as needed.
3. The WebSphere MQ server on machine B accepts the message from the server on machine A and places it in the destination queue on machine B.
4. A WebSphere MQ client on machine B requests the message at the head of the queue.
The MQSEND function is invoked once because SYSIBM.SYSDUMMY1 has only one row. Because this MQSEND function uses two-phase commit, the COMMIT statement ensures that the message is added to the queue. When you use single-phase commit, you do not need to use a COMMIT statement. For example:
SELECT DB2MQ1N.MQSEND ('Testing msg')
  FROM SYSIBM.SYSDUMMY1;
Example: Assume that you have an EMPLOYEE table, with VARCHAR columns LASTNAME, FIRSTNAME, and DEPARTMENT. To send a message that contains this information for each employee in DEPARTMENT 5LGA, issue the following SQL SELECT statement:
SELECT DB2MQ2N.MQSEND (LASTNAME || ' ' || FIRSTNAME || ' ' || DEPARTMENT)
  FROM EMPLOYEE
  WHERE DEPARTMENT = '5LGA';
COMMIT;
Message content can be any combination of SQL statements, expressions, functions, and user-specified data. Because this MQSEND function uses two-phase commit, the COMMIT statement ensures that the message is added to the MQ queue.
Retrieving messages
The DB2 MQ functions allow messages to be either read or received. The difference between reading and receiving is that reading returns the message at the head of a queue without removing it from the queue, whereas receiving causes the message to be removed from the queue. A message that is retrieved using a receive operation can be retrieved only once, whereas a message that is retrieved using a read operation allows the same message to be retrieved many times. The following examples use the DB2MQ2N schema for two-phase commit, with the default service DB2.DEFAULT.SERVICE and the default policy DB2.DEFAULT.POLICY. For more information about two-phase commit, see Commit environment for AMI-based DB2 MQ functions and stored procedures on page 948. Example: The following SQL SELECT statement reads the message at the head of the queue that is specified by the default service and policy:
SELECT DB2MQ2N.MQREAD() FROM SYSIBM.SYSDUMMY1;
The MQREAD function is invoked once because SYSIBM.SYSDUMMY1 has only one row. The SELECT statement returns a VARCHAR(4000) string. If no messages are available to be read, a null value is returned. Because MQREAD does not change the queue, you do not need to use a COMMIT statement. Example: The following SQL SELECT statement causes the contents of a queue to be materialized as a DB2 table:
SELECT T.* FROM TABLE(DB2MQ2N.MQREADALL()) T;
The result table T of the table function consists of all the messages in the queue, which is defined by the default service, and the metadata about those messages. The first column of the materialized result table is the message itself, and the remaining columns contain the metadata. The SELECT statement returns both the messages and the metadata. To return only the messages, issue the following statement:
SELECT T.MSG FROM TABLE(DB2MQ2N.MQREADALL()) T;
The result table T of the table function consists of all the messages in the queue, which is defined by the default service, and the metadata about those messages. This SELECT statement returns only the messages.
Example: The following SQL SELECT statement receives (removes) the message at the head of the queue:
SELECT DB2MQ2N.MQRECEIVE() FROM SYSIBM.SYSDUMMY1; COMMIT;
The MQRECEIVE function is invoked once because SYSIBM.SYSDUMMY1 has only one row. The SELECT statement returns a VARCHAR(4000) string. Because this MQRECEIVE function uses two-phase commit, the COMMIT statement ensures that the message is removed from the queue. If no messages are available to be retrieved, a null value is returned, and the queue does not change. Example: Assume that you have a MESSAGES table with a single VARCHAR(2000) column. The following SQL INSERT statement inserts all of the messages from the default service queue into the MESSAGES table in your DB2 database:
INSERT INTO MESSAGES SELECT T.MSG FROM TABLE(DB2MQ2N.MQRECEIVEALL()) T; COMMIT;
The result table T of the table function consists of all the messages in the default service queue and the metadata about those messages. The SELECT statement returns only the messages. The INSERT statement stores the messages into a table in your database.
Application-to-application connectivity
Application-to-application connectivity is typically used to solve the problem of putting together a diverse set of application subsystems. To facilitate application integration, WebSphere MQ provides the means to interconnect applications. This section describes two common scenarios: v Request-and-reply communication method v Publish-and-subscribe method Request-and-reply communication method: The request-and-reply method enables one application to request the services of another application. One way to do this is for the requester to send a message to the service provider to request that some work be performed. When the work has been completed, the provider might decide to send results, or just a confirmation of completion, back to the requester. Unless the requester waits for a reply before continuing, WebSphere MQ must provide a way to associate the reply with its request. WebSphere MQ provides a correlation identifier to correlate messages in an exchange between a requester and a provider. The requester marks a message with a known correlation identifier. The provider marks its reply with the same correlation identifier. To retrieve the associated reply, the requester provides that correlation identifier when receiving messages from the queue. The first message with a matching correlation identifier is returned to the requester. The following examples use the DB2MQ1N schema for single-phase commit. For more information about single-phase commit, see Commit environment for AMI-based DB2 MQ functions and stored procedures on page 948. Example: The following SQL SELECT statement sends a message consisting of the string Msg with corr id to the service MYSERVICE, using the policy MYPOLICY with correlation identifier CORRID1:
SELECT DB2MQ1N.MQSEND ('MYSERVICE', 'MYPOLICY', 'Msg with corr id', 'CORRID1')
  FROM SYSIBM.SYSDUMMY1;
The MQSEND function is invoked once because SYSIBM.SYSDUMMY1 has only one row. Because this MQSEND uses single-phase commit, WebSphere MQ adds the message to the queue, and you do not need to use a COMMIT statement. Example: The following SQL SELECT statement receives the first message that matches the identifier CORRID1 from the queue that is specified by the service MYSERVICE, using the policy MYPOLICY:
SELECT DB2MQ1N.MQRECEIVE ('MYSERVICE', 'MYPOLICY', 'CORRID1')
  FROM SYSIBM.SYSDUMMY1;
The SELECT statement returns a VARCHAR(4000) string. If no messages are available with this correlation identifier, a null value is returned, and the queue does not change. Publish-and-subscribe method: Another common method of application integration is for one application to notify other applications about events of interest. An application can do this by sending a message to a queue that is monitored by other applications. The message can contain a user-defined string or can be composed from database columns. Simple data publication: In many cases, only a simple message needs to be sent using the MQSEND function. When a message needs to be sent to multiple recipients concurrently, the distribution list facility of the MQSeries AMI can be used. You define distribution lists by using the AMI administration tool. A distribution list comprises a list of individual services. A message that is sent to a distribution list is forwarded to every service defined within the list. Publishing messages to a distribution list is especially useful when there are multiple services that are interested in every message. Example: The following example shows how to send a message to the distribution list InterestedParties:
SELECT DB2MQ2N.MQSEND ('InterestedParties', 'Information of general interest')
  FROM SYSIBM.SYSDUMMY1;
When you require more control over the messages that a particular service should receive, you can use the MQPUBLISH function, in conjunction with the WebSphere MQSeries Integrator facility. This facility provides a publish-and-subscribe system, which provides a scalable, secure environment in which many subscribers can register to receive messages from multiple publishers. Subscribers are defined by queues, which are represented by service names. MQPUBLISH allows you to specify a list of topics that are associated with a message. Topics allow subscribers to more clearly specify the messages they receive. The following sequence illustrates how the publish-and-subscribe capabilities are used: 1. An MQSeries administrator configures the publish-and-subscribe capability of the WebSphere MQSeries Integrator facility. 2. Interested applications subscribe to subscriber services that are defined in the WebSphere MQSeries Integrator configuration. Each subscriber selects relevant topics and can also use the content-based subscription techniques that are provided by Version 2 of the WebSphere MQSeries Integrator facility.
3. A DB2 application publishes a message to a specified publisher service. The message indicates the topic it concerns. 4. The MQSeries functions provided by DB2 UDB for z/OS handle the mechanics of publishing the message. The message is sent to the WebSphere MQSeries Integrator facility by using the specified service policy. 5. The WebSphere MQSeries Integrator facility accepts the message from the specified service, performs any processing defined by the WebSphere MQSeries Integrator configuration, and determines which subscriptions the message satisfies. It then forwards the message to the subscriber queues that match the subscriber service and topic of the message. 6. Applications that subscribe to the specific service, and register an interest in the specific topic, will receive the message in their receiving service. Example: To publish the last name, first name, department, and age of employees who are in department 5LGA, using all the defaults and a topic of EMP, you can use the following statement:
SELECT DB2MQ2N.MQPUBLISH (LASTNAME || ' ' || FIRSTNAME || ' ' || DEPARTMENT || ' ' || char(AGE), 'EMP')
  FROM DSN8810.EMP
  WHERE DEPARTMENT = '5LGA';
Example: The following statement publishes messages that contain only the last name of employees who are in department 5LGA to the HR_INFO_PUB publisher service using the SPECIAL_POLICY service policy:
SELECT DB2MQ2N.MQPUBLISH ('HR_INFO_PUB', 'SPECIAL_POLICY', LASTNAME, 'ALL_EMP:5LGA', 'MANAGER')
  FROM DSN8810.EMP
  WHERE DEPARTMENT = '5LGA';
The messages indicate that the sender has the MANAGER correlation id. The topic string demonstrates that multiple topics, concatenated by using a colon (:), can be specified. In this example, the use of two topics allows subscribers of both the ALL_EMP and the 5LGA topics to receive these messages.

To receive published messages, you must first register your application's interest in messages of a given topic and indicate the name of the subscriber service to which messages are sent. An AMI subscriber service defines a broker service and a receiver service. The broker service is how the subscriber communicates with the publish-and-subscribe broker. The receiver service is the location where messages that match the subscription request are sent.

Example: The following statement subscribes to the topic ALL_EMP and indicates that messages be sent to the subscriber service, aSubscriber:
SELECT DB2MQ2N.MQSUBSCRIBE ('aSubscriber', 'ALL_EMP')
  FROM SYSIBM.SYSDUMMY1;
When an application is subscribed, messages published with the topic, ALL_EMP, are forwarded to the receiver service that is defined by the subscriber service. An application can have multiple concurrent subscriptions. Messages that match the subscription topic can be retrieved by using any of the standard message retrieval functions. Example: The following statement non-destructively reads the first message, where the subscriber service, aSubscriber, defines the receiver service as aSubscriberReceiver:
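A statement of the following form performs this read; it is a sketch based on the MQREAD signature that is shown in Table 160, with the receiver service passed as the receive-service parameter:

SELECT DB2MQ2N.MQREAD ('aSubscriberReceiver')
  FROM SYSIBM.SYSDUMMY1;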
To display both the messages and the topics with which they are published, you can use one of the table functions. Example: The following statement receives the first five messages from aSubscriberReceiver and displays both the message and the topic for each of the five messages:
SELECT t.msg, t.topic
  FROM table (DB2MQ2N.MQRECEIVEALL ('aSubscriberReceiver', 5)) t;
Example: To read all of the messages with the topic ALL_EMP, issue the following statement:
SELECT t.msg
  FROM table (DB2MQ2N.MQREADALL ('aSubscriberReceiver')) t
  WHERE t.topic = 'ALL_EMP';
Note: If you use MQRECEIVEALL with a constraint, your application receives the entire queue, not just those messages that are published with the topic ALL_EMP. This is because the table function is performed before the constraint is applied. When you are no longer interested in having your application subscribe to a particular topic, you must explicitly unsubscribe. Example: The following statement unsubscribes from the ALL_EMP topic of the aSubscriber subscriber service:
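For example, the following sketch illustrates the pitfall: although the WHERE clause returns only the rows for the ALL_EMP topic, the MQRECEIVEALL table function has already removed every message from the queue, including messages for other topics:

SELECT t.msg
  FROM table (DB2MQ2N.MQRECEIVEALL ('aSubscriberReceiver')) t
  WHERE t.topic = 'ALL_EMP';
COMMIT;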
SELECT DB2MQ2N.MQUNSUBSCRIBE ('aSubscriber', 'ALL_EMP')
  FROM SYSIBM.SYSDUMMY1;
After you issue the preceding statement, the publish-and-subscribe broker no longer delivers messages that match the ALL_EMP topic to the aSubscriber subscriber service. Automated Publication: Another important method in application message publishing is automated publication. Using the trigger facility within DB2 UDB for z/OS, you can automatically publish messages as part of a trigger invocation. Although other techniques exist for automated message publication, the trigger-based approach allows you more freedom in constructing the message content and more flexibility in defining the actions of a trigger. As with the use of any trigger, you must be aware of the frequency and cost of execution. Example: The following example shows how you can use the MQSeries functions of DB2 UDB for z/OS with a trigger to publish a message each time a new employee is hired:
CREATE TRIGGER new_employee
  AFTER INSERT ON DSN8810.EMP
  REFERENCING NEW AS n
  FOR EACH ROW MODE DB2SQL
  SELECT DB2MQ2N.MQPUBLISH ('HR_INFO_PUB', current date || ' ' || LASTNAME || ' ' || DEPARTMENT, 'NEW_EMP');
Any users or applications that subscribe to the HR_INFO_PUB service with a registered interest in the NEW_EMP topic will receive a message that contains the date, the name, and the department of each new employee when rows are inserted into the DSN8810.EMP table.
v An asynchronous listener can respond to a message from a supplied client, or from a user-defined application. The number of environments that can act as a database client is greatly expanded. Clients such as factory automation equipment, pervasive devices, or embedded controllers can communicate with DB2 Universal Database either directly through WebSphere MQ or through some gateway that supports WebSphere MQ.
The data type for inMsgType and the data type for outMsgType can be VARCHAR, CLOB, or BLOB of any length and are determined at startup. The input data type and output data type can be different data types. If an incoming message is a
request and has a specified reply-to queue, the message in outMsg will be sent to the specified queue. The incoming message can be one of the following message types:
v Datagram
v Datagram with report requested
v Request message with reply
v Request message with reply and report requested
Configuring and running MQListener in DB2 UDB for OS/390 and z/OS
Use the following procedure to configure the environment for MQListener and to develop a simple application that receives a message, inserts the message into a table, and creates a simple response message:
1. Configure MQListener to run in the DB2 environment.
2. Configure WebSphere MQ for MQListener.
3. Configure the MQListener task.
4. Create the sample stored procedure to work with MQListener.
5. Run a simple MQListener application.
MQLSNTRC
    When this environment variable is set to a file name, MQListener writes trace information to that file while any of the MQListener commands are run. This trace file will be used by IBM Software Support for debugging if you report a problem. Unless requested, this variable should not be defined.
MQLSNLOG
    The log file contains diagnostic information about the major events. Set this environment variable to the name of the file where all log information is to be written. Alternatively, MQListener can be configured to use the syslogd daemon interface for writing log records. All instances of the MQListener daemon that run one or more tasks share the same file. To monitor the MQListener daemon, always set this variable.
While the MQListener daemon is running, open the log and trace files only in read mode (for example, with the cat, more, or tail commands in z/OS UNIX System Services), because the daemon process has them open for writing. Follow the instructions in the README file that is created in the MQListener installation path.
Configuration table SYSMQL.LISTENERS: Table 166 describes each of the columns of the configuration table.
Table 166. Description of the columns of SYSMQL.LISTENERS

CONFIGURATIONNAME
    The configuration name. The configuration name allows you to group several tasks into the same configuration. A single instance of MQListener can run all of the tasks that are defined within a configuration name.
QUEUEMANAGER
    The name of the WebSphere MQ subsystem that contains the queues that are to be used.
INPUTQUEUE
    The name of the queue in the WebSphere MQ subsystem that is to be monitored for incoming messages. The combination of the input queue and the queue manager is unique within a configuration.
The remaining columns identify the schema name and the name of the stored procedure that MQListener calls, the number of duplicate instances of a single task that are to run in this configuration, and the time (in milliseconds) that MQListener waits after processing the current message before it looks for the next message; several other columns are currently unused.
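After tasks have been added with the db2mqln1/db2mqln2 add command that is described later in this section, one way to inspect the stored configuration is to query the table directly. A minimal sketch, using only the column names listed above and the configuration name ACFG that is used in the later examples:

  SELECT CONFIGURATIONNAME, QUEUEMANAGER, INPUTQUEUE
    FROM SYSMQL.LISTENERS
    WHERE CONFIGURATIONNAME = 'ACFG';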
Configuring WebSphere MQ for MQListener requires an MQSeries queue manager and some local queues. Configure these entities for use in such instances as transaction management, deadletter queue, backout queue, and backout retry threshold. To configure WebSphere MQ for a simple MQListener application, do the following: 1. Create MQSeries QueueManager: Define the MQSeries subsystem to z/OS and then issue the following command from a z/OS console to start the queue manager:
<command-prefix-string> START QMGR
command-prefix-string is the command prefix for the MQSeries subsystem.
2. Create Queues under MQSeries QueueManager: In a simple MQListener application, you typically use the following WebSphere MQ queues:
Deadletter queue
    The deadletter queue in WebSphere MQ holds messages that cannot be processed. MQListener uses this queue to hold replies that cannot be delivered, for example, because the queue to which the replies should be sent is full. A deadletter queue is useful in any MQ installation, especially for recovering messages that are not sent.
Backout queue
    For MQListener tasks that use two-phase commit, the backout queue serves a purpose similar to that of the deadletter queue. MQListener places the original request in the backout queue after the request is rolled back a specified number of times (called the backout threshold).
Administration queue
    The administration queue is used for routing control messages, such as shutdown and restart, to MQListener. If you do not supply an administration queue, the only way to shut down MQListener is to issue a kill command.
Application input and output queues
    The application uses input queues and output queues. The application receives messages from the input queue and sends replies and exceptions to the output queue.
Create your local queues by using the CSQUTIL utility or by using the MQSeries operations and control panels from ISPF (csqorexx). The following is an example of the JCL that is used to create your local queues. In this example, MQND is the name of the queue manager:
//*
//* ADMIN_Q      : Admin queue
//* BACKOUT_Q    : Backout queue
//* IN_Q         : Input queue having a backout queue with threshold=3
//* REPLY_Q      : Output queue or reply queue
//* DEADLETTER_Q : Dead letter queue
//*
//DSNTECU  EXEC PGM=CSQUTIL,PARM=MQND
//STEPLIB  DD DSN=MQS.SCSQANLE,DISP=SHR
//         DD DSN=MQS.SCSQAUTH,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
COMMAND DDNAME(CREATEQ)
/*
//CREATEQ DD * DEFINE QLOCAL(ADMIN_Q) REPLACE + DESCR(INPUT-OUTPUT) + PUT(ENABLED) + DEFPRTY(0) + DEFPSIST(NO) + SHARE + DEFSOPT(SHARED) + GET(ENABLED) DEFINE QLOCAL(BACKOUT_Q) REPLACE + DESCR(INPUT-OUTPUT) + PUT(ENABLED) + DEFPRTY(0) + DEFPSIST(NO) + SHARE + DEFSOPT(SHARED) + GET(ENABLED) DEFINE QLOCAL(REPLY_Q) REPLACE + DESCR(INPUT-OUTPUT) + PUT(ENABLED) + DEFPRTY(0) + DEFPSIST(NO) + SHARE + DEFSOPT(SHARED) + GET(ENABLED) DEFINE QLOCAL(IN_Q) REPLACE + DESCR(INPUT-OUTPUT) + PUT(ENABLED) + DEFPRTY(0) + DEFPSIST(NO) + SHARE + DEFSOPT(SHARED) + GET(ENABLED) + BOQNAME(BACKOUT_Q) + BOTHRESH(3) DEFINE QLOCAL(DEADLETTER_Q) REPLACE + DESCR(INPUT-OUTPUT) + PUT(ENABLED) + DEFPRTY(0) + DEFPSIST(NO) + SHARE + DEFSOPT(SHARED) + GET(ENABLED) ALTER QMGR DEADQ (DEADLETTER_Q) REPLACE /*
-inputQueue <inputqueue name> -procName <stored-procedure name> -procSchema <stored-procedure schema name> -numInstances <number of instances>
To display information about all the configurations, issue the following command:
db2mqln1/db2mqln2 show -ssID <subsystem name> -config all
v To get help with the command and the valid parameters, issue the following command:
db2mqln1/db2mqln2 help
v To get help for a particular parameter, issue the following command, where command is a specific parameter:
db2mqln1/db2mqln2 help <command>
Restrictions: v Use the same queue manager for the request queue and the reply queue. v MQListener does not support logical messages that are composed of multiple physical messages. MQListener processes physical messages independently.
v Prefixes the message with an MQ dead letter header (MQDLH) structure
v Sets the reason field in the MQDLH structure to the appropriate reason code
v Sends the message to the deadletter queue
The following table describes the reason codes that the MQListener daemon returns.

Table 167. Reason codes that MQListener returns
Reason code   Explanation
900           The call to a stored procedure was successful, but an error occurred during the DB2 commit process, and either of the following conditions was true:
              v No exception report was requested (see note 1).
              v An exception report was requested, but could not be delivered.
              This reason code applies only to one-phase commit environments.
901           The call to the specified stored procedure failed, and the disposition of the MQ message is that an exception report be generated and the original message be sent to the deadletter queue.
902           All of the following conditions occurred:
              v The disposition of the MQ message is that an exception report is not to be generated.
              v The stored procedure was called unsuccessfully the number of times that is specified as the backout threshold.
              v The name of the backout queue is the same as the deadletter queue.
              This reason code applies only to two-phase commit environments.
MQRC_TRUNCATED_MSG_FAILED
              The size of the MQ message is greater than the input parameter of the stored procedure that is to be invoked. In one-phase commit environments, this oversized message is sent to the deadletter queue. In two-phase commit environments, this oversized message is sent to the deadletter queue only when the message cannot be delivered to the backout queue.
Notes:
1. To specify that the receiver application generate exception reports if errors occur, set the report field in the MQMD structure that was used when sending the message to one of the following values:
   v MQRO_EXCEPTION
   v MQRO_EXCEPTION_WITH_DATA
   v MQRO_EXCEPTION_WITH_FULL_DATA
   For more information about the report field, see the WebSphere MQ Information Center at http://publib.boulder.ibm.com/infocenter/wmqv6/v6r0/index.jsp.
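Create the sample stored procedure to work with MQListener: 1. Create a DB2 table named PROCTABLE to hold the incoming messages. The CREATE TABLE statement is shown here only as a minimal sketch, assuming a single VARCHAR(25) column that matches the input parameter of the stored procedure in step 2; the column name MSG and the constraint name NO_FAIL are illustrative, and the lowercase 'fail%' pattern matches the failing message that is used in the later examples:

  CREATE TABLE PROCTABLE
    (MSG VARCHAR(25)
     CONSTRAINT NO_FAIL CHECK (MSG NOT LIKE 'fail%'));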
The table contains a check constraint so that messages that start with the characters FAIL cannot be inserted into the table. The check constraint is used to demonstrate the behavior of MQListener when the stored procedure fails. 2. Create the following SQL stored procedure and define it to the same DB2 subsystem:
CREATE PROCEDURE TEST.APROC (
    IN  PIN  VARCHAR(25),
    OUT POUT VARCHAR(2))
  LANGUAGE SQL
  FENCED
  NOT DETERMINISTIC
  NO DBINFO
  COLLID TESTLSRN
  WLM ENVIRONMENT TESTWLMX
  ASUTIME NO LIMIT
  STAY RESIDENT NO
  PROGRAM TYPE MAIN
  SECURITY USER
  PROCEDURE1: BEGIN
    INSERT INTO PROCTABLE VALUES(PIN);
    SET POUT = 'OK';
  END PROCEDURE1
TESTLSRN is the name of the collection that is used for this stored procedure and TESTWLMX is the name of the WLM environment where this stored procedure will run. 3. Bind the collection TESTLSRN to the plan DB2MQLSN, which is used by MQListener:
BIND PLAN(DB2MQLSN) + PKLIST(LSNR.*,TESTLSRN.*) + ACTION(REP) DISCONNECT(EXPLICIT);
MQListener examples
The following examples show a simple MQListener application. The application receives a message, inserts the message into a table, and generates a simple response message. To simulate a processing failure, the application includes a check constraint on the table that contains the message. The constraint prevents any string that begins with the characters fail from being inserted into the table. If you attempt to insert a message that violates the check constraint, the example application returns an error message and re-queues the failing message to the backout queue.
In this example, the following assumptions are made:
v MQListener is installed and configured for subsystem DB7A.
v MQND is the name of the MQSeries subsystem that is defined. The queue manager is running, and the following local queues are defined in the DB7A subsystem:
  ADMIN_Q      : Admin queue
  BACKOUT_Q    : Backout queue
  IN_Q         : Input queue that has a backout queue with threshold = 3
  REPLY_Q      : Output queue or reply queue
  DEADLETTER_Q : Dead letter queue
v The person who is running the MQListener daemon has execute permission on the DB2MQLSN plan.
Before you run the MQListener daemon, add the following configuration, named ACFG, to the configuration table by issuing the following command:
db2mqln2 add -ssID DB7A -config ACFG -queueManager MQND -inputQueue IN_Q -procName APROC -procSchema TEST
Run the MQListener daemon for two-phase commit for the added configuration ACFG. To run MQListener with all of the tasks specified in a configuration, issue the following command:
db2mqln2 run -ssID DB7A -config ACFG -adminQueue ADMIN_Q -adminQMgr MQND
The following examples show how to use MQListener to send a simple message and then inspect the results of the message in the WebSphere MQ queue manager and the database. The examples include queries to determine if the input queue contains a message or to determine if a record is placed in the table by the stored procedure. MQListener example 1: Running a simple application: 1. Start with a clean database table by issuing the following SQL statement:
delete from PROCTABLE
2. Send a datagram to the input queue, IN_Q, with the message text 'sample message'. Refer to WebSphere MQ sample CSQ4BCK1 to send a message to the queue. Specify the MsgType option for the Message Descriptor as MQMT_DATAGRAM. 3. Query the table by using the following statement to verify that the sample message is inserted:
select * from PROCTABLE
4. Display the number of messages that remain on the input queue to verify that the message has been removed. Issue the following command from a z/OS console:
/-MQND display queue(In_Q) curdepth
MQListener example 2: Sending requests to the input queue and inspecting the reply: 1. Start with a clean database table by issuing the following SQL statement:
delete from PROCTABLE
2. Send a request to the input queue, IN_Q, with the message text 'another sample message'. Refer to WebSphere MQ sample CSQ4BCK1 to send a message to the queue. Specify the MsgType option for the Message Descriptor as MQMT_REQUEST and the queue name for the ReplytoQ option. 3. Query the table by using the following statement to verify that the sample message is inserted:
select * from PROCTABLE
4. Display the number of messages that remain on the input queue to verify that the message has been removed. Issue the following command from a z/OS console:
/-MQND display queue(In_Q) curdepth
5. Look at the ReplytoQ name that you specified when you sent the request message for the reply by using the WebSphere MQ sample program CSQ4BCJ1. Verify that the string OK is generated by the stored procedure. MQListener example 3: Testing an unsuccessful insert operation: If you send a message that starts with the string fail, the constraint in the table definition is violated, and the stored procedure fails. 1. Start with a clean database table by issuing the following SQL statement:
delete from PROCTABLE
2. Send a request to the input queue, IN_Q, with the message text 'failing sample message'. Refer to WebSphere MQ sample CSQ4BCK1 to send a message to the queue. Specify the MsgType option for the Message Descriptor as MQMT_REQUEST and the queue name for the ReplytoQ option. 3. Query the table by using the following statement to verify that the sample message is not inserted:
select * from PROCTABLE
4. Display the number of messages that remain on the input queue to verify that the message has been removed. Issue the following command from a z/OS console:
/-MQND display queue(In_Q) curdepth
5. Look at the Backout queue and find the original message by using the WebSphere MQ sample program CSQ4BCJ1. Note: In this example, if a request message with added options for an exception report is sent (the Report option is specified for the Message Descriptor), an exception report is sent to the reply queue and the original message is sent to the deadletter queue.
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
  xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<SOAP-ENV:Body>
<ns:getTemp xmlns:ns="urn:xmethods-Temperature">
<city>Barcelona</city>
</ns:getTemp>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
Example: The following example is the result of the preceding example. This example shows the HTTP response header with the SOAP response envelope. The result shows that the temperature is 85 degrees Fahrenheit in Barcelona.
HTTP/1.1 200 OK
Date: Wed, 31 Jul 2002 22:06:41 GMT
Server: Enhydra-MultiServer/3.5.2
Status: 200
Content-Type: text/xml; charset=utf-8
Servlet-Engine: Lutris Enhydra Application Server/3.5.2 (JSP 1.1; Servlet 2.2; Java 1.3.1_04; Linux 2.4.7-10smp i386; java.vendor=Sun Microsystems Inc.)
Content-Length: 467
Set-Cookie:JSESSIONID=JLEcR34rBc2GTIkn-0F51ZDk;Path=/soap
X-Cache: MISS from www.xmethods.net
Keep-Alive: timeout=15, max=10
Connection: Keep-Alive

<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<SOAP-ENV:Body>
<ns1:getTempResponse xmlns:ns1="urn:xmethods-Temperature"
  SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
<return xsi:type="xsd:float">85</return>
</ns1:getTempResponse>
</SOAP-ENV:Body></SOAP-ENV:Envelope>
Example: The following example shows how to insert the result from a web service into a table:

INSERT INTO MYTABLE(XMLCOL)
  VALUES (DB2XML.SOAPHTTPC(
    'http://www.myserver.com/services/db2sample/list.dadx/SOAP',
    'http://tempuri.org/db2sample/list.dadx',
    '<listDepartments xmlns="http://tempuri.org/db2sample/list.dadx">
     <deptno>A00</deptno>
     </listDepartments>'))
SOAPHTTPNV returns VARCHAR(32672) data and SOAPHTTPNC returns CLOB(1M) data. Both functions accept either VARCHAR(32672) or CLOB(1M) as the input body. Example: The following example shows how to insert the complete result from a web service into a table using SOAPHTTPNC.
INSERT INTO EMPLOYEE(XMLCOL)
  VALUES (DB2XML.SOAPHTTPNC(
    'http://www.myserver.com/services/db2sample/list.dadx/SOAP',
    'http://tempuri.org/db2sample/list.dadx',
    '<?xml version="1.0" encoding="UTF-8" ?>' ||
    '<SOAP-ENV:Envelope' ||
    ' xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"' ||
    ' xmlns:xsd="http://www.w3.org/2001/XMLSchema"' ||
    ' xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">' ||
    '<SOAP-ENV:Body>' ||
    '<listDepartments xmlns="http://tempuri.org/db2sample/list.dadx">
     <deptNo>A00</deptNo>
     </listDepartments>' ||
    '</SOAP-ENV:Body>' ||
    '</SOAP-ENV:Envelope>'))
Table 168. SQLSTATE values that DB2 returns for error conditions related to using DB2 as a web services consumer (continued)
SQLSTATE   Description
38321      The function was unable to send the entire request to the specified server.
38322      An error occurred while attempting to read the result data from the specified server.
38323      An error occurred while waiting for data to be returned from the specified server.
38324      The function encountered an internal error while attempting to format the input message.
38325      The function encountered an internal error while attempting to add namespace information to the input message.
38327      The XML parser could not strip the SOAP envelope from the result message.
38328      An error occurred while processing an SSL connection.
38350      An unexpected NULL value was specified for the endpoint, action, or SOAP input.
38351      A dynamic memory allocation error.
38352      An unknown or unsupported transport protocol.
38353      An invalid URL was specified.
38354      An error occurred while resolving the hostname.
38355      A memory exception for socket.
38356      An error occurred during socket connect.
38357      An error occurred while setting socket options.
38358      An error occurred during input/output control (ioctl) to verify HTTPS enablement.
38359      An error occurred while reading from the socket.
38360      An error occurred due to socket timeout.
38361      No response from the specified host.
38362      An error occurred due to an unexpected HTTP return or content type.
38363      The TCP/IP stack was not enabled for HTTPS.
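A minimal sketch of how an application might react to these conditions from an SQL procedure, assuming the DB2XML.SOAPHTTPNC consumer function that is shown earlier; the procedure, table, endpoint, and action names are illustrative, and the handler simply records that the web service call failed:

  CREATE PROCEDURE MYSCHEMA.CALL_LIST_SERVICE (IN REQUEST CLOB(1M))
    LANGUAGE SQL
  BEGIN
    DECLARE RESULT CLOB(1M);
    -- Any of the SQLSTATE values in Table 168 raises an SQL exception,
    -- which this handler intercepts.
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
      INSERT INTO MYSCHEMA.SOAP_ERRORS (ERRTIME, ERRTEXT)
        VALUES (CURRENT TIMESTAMP, 'Web service consumer function failed');
    SET RESULT = DB2XML.SOAPHTTPNC(
      'http://www.myserver.com/services/db2sample/list.dadx/SOAP',
      'http://tempuri.org/db2sample/list.dadx',
      REQUEST);
    INSERT INTO MYSCHEMA.SOAP_RESULTS (XMLCOL) VALUES (RESULT);
  END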
For more information about using DB2 as a web services provider, see DB2 Information Integrator Application Developer's Guide.
/**********************************************************/ /* Declare scrollable cursor to retrieve department names */ /**********************************************************/ EXEC SQL DECLARE C1 SCROLL CURSOR FOR . SELECT DEPTNAME FROM DSN8810.DEPT; . . /**********************************************************/ /* Open the cursor and position it after the end of the */ /* result table. */ /**********************************************************/ EXEC SQL OPEN C1; EXEC SQL FETCH AFTER FROM C1; /**********************************************************/ /* Fetch rows backward until all rows are fetched. */ /**********************************************************/ while(SQLCODE==0) { EXEC SQL FETCH PRIOR FROM C1 INTO :hv_deptname; . . . } EXEC SQL CLOSE C1;
EXEC SQL OPEN C1; i=0; while(SQLCODE==0) { EXEC SQL FETCH C1 INTO :hv_deptname, :hv_dept_rowid; rowid_array[i].length=hv_dept_rowid.length; for(j=0;j<hv_dept_rowid.length;j++) rowid_array[i].data[j]=hv_dept_rowid.data[j]; i++; } EXEC SQL CLOSE C1; n=i-1; /* Get the number of array elements */ /**********************************************************/ /* Use the ROWID values to retrieve the department names */ /* in reverse order. */ /**********************************************************/ for(i=n;i>=0;i--) { hv_dept_rowid.length=rowid_array[i].length; for(j=0;j<hv_dept_rowid.length;j++) hv_dept_rowid.data[j]=rowid_array[i].data[j]; EXEC SQL SELECT DEPTNAME INTO :hv_deptname FROM DSN8810.DEPT WHERE DEPTROWID=:hv_dept_rowid; }
tenth_row=hv_deptname; /**********************************************************/ /* Fetch backward 5 rows */ /**********************************************************/ for(i=0;i<5;i++) { EXEC SQL FETCH PRIOR FROM C1 INTO :hv_deptname; } /**********************************************************/ /* Save the value in the fifth row */ /**********************************************************/ fifth_row=hv_deptname; /**********************************************************/ /* Fetch forward 3 rows */ /**********************************************************/ for(i=0;i<3;i++) { EXEC SQL FETCH NEXT FROM C1 INTO :hv_deptname; } /**********************************************************/ /* Save the value in the eighth row */ /**********************************************************/ eighth_row=hv_deptname; /**********************************************************/ /* Close the cursor */ /**********************************************************/ EXEC SQL CLOSE C1;
Answer: Yes. When updating large volumes of data using a cursor, you can minimize the amount of time that you hold locks on the data by declaring the cursor with the HOLD option and by issuing commits frequently.
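A minimal sketch of this pattern, using an illustrative table and column names (the OPEN/FETCH/UPDATE/COMMIT loop itself runs in the host program):

  DECLARE C1 CURSOR WITH HOLD FOR
    SELECT ACCT_ID, BALANCE
      FROM MYSCHEMA.ACCOUNTS
      FOR UPDATE OF BALANCE;

  -- In the host program: OPEN C1, then repeatedly
  --   FETCH C1 INTO :hv_acct_id, :hv_balance;
  --   UPDATE MYSCHEMA.ACCOUNTS
  --     SET BALANCE = :hv_new_balance
  --     WHERE CURRENT OF C1;
  -- and COMMIT after every few hundred updates. Because C1 is declared
  -- WITH HOLD, the cursor remains open and positioned after each commit,
  -- so locks on the updated rows are released frequently.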
Using SELECT *
Question: What are the implications of using SELECT * ? Answer: Generally, you should select only the columns you need because DB2 is sensitive to the number of columns selected. Use SELECT * only when you are sure you want to select all columns. One alternative is to use views defined with only the necessary columns, and use SELECT * to access the views. Avoid SELECT * if all the selected columns participate in a sort operation (SELECT DISTINCT and SELECT...UNION, for example).
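For example, a sketch of the view alternative against the sample employee table (the view name is illustrative):

  CREATE VIEW MYSCHEMA.VEMPBRIEF AS
    SELECT EMPNO, LASTNAME, WORKDEPT
      FROM DSN8810.EMP;

  -- SELECT * now returns only the three columns that the view exposes:
  SELECT * FROM MYSCHEMA.VEMPBRIEF;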
Even with the ORDER BY clause, DB2 might fetch all the data first and sort it afterwards, which could be wasteful. Instead, you can write the query in one of the following ways:
SELECT * FROM table WHERE key >= value ORDER BY key ASC OPTIMIZE FOR 1 ROW

SELECT * FROM table WHERE key >= value ORDER BY key ASC FETCH FIRST n ROWS ONLY
Use OPTIMIZE FOR 1 ROW to influence the access path. OPTIMIZE FOR 1 ROW tells DB2 to select an access path that returns the first qualifying row quickly. Use FETCH FIRST n ROWS ONLY to limit the number of rows in the result table to n rows. FETCH FIRST n ROWS ONLY has the following benefits:
v When you use FETCH statements to retrieve data from a result table, FETCH FIRST n ROWS ONLY causes DB2 to retrieve only the number of rows that you need. This can have performance benefits, especially in distributed applications. If you try to execute a FETCH statement to retrieve the n+1st row, DB2 returns a +100 SQLCODE.
v When you use FETCH FIRST ROW ONLY in a SELECT INTO statement, you never retrieve more than one row. Using FETCH FIRST ROW ONLY in a SELECT INTO statement can prevent SQL errors that are caused by inadvertently selecting more than one value into a host variable.
When you specify FETCH FIRST n ROWS ONLY but not OPTIMIZE FOR n ROWS, OPTIMIZE FOR n ROWS is implied. When you specify FETCH FIRST n ROWS ONLY and OPTIMIZE FOR m ROWS, and m is less than n, DB2 optimizes the query for m rows. If m is greater than n, DB2 optimizes the query for n rows.
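For example, a sketch of the SELECT INTO usage that is described above, against the sample employee table (the host variable name is illustrative):

  SELECT LASTNAME
    INTO :hv_lastname
    FROM DSN8810.EMP
    WHERE WORKDEPT = 'D11'
    FETCH FIRST 1 ROW ONLY;
  -- Without FETCH FIRST 1 ROW ONLY, this statement fails with SQLCODE -811
  -- if more than one employee works in department D11.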
Answer: Your program can dynamically execute CREATE TABLE and ALTER TABLE statements entered by users to create new tables, add columns to existing tables, or increase the length of VARCHAR columns. Added columns initially contain either the null value or a default value. Both statements, like any data definition statement, are relatively expensive to execute; consider the effects of locks. You cannot rearrange or delete columns in a table without dropping the entire table. You can, however, create a view on the table, which includes only the columns you want, in the order you want. This has the same effect as redefining the table. For a description of dynamic SQL execution, see Chapter 24, Coding dynamic SQL in application programs, on page 593.
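For example, a sketch of such a view over an illustrative user table whose columns you want to present in a different order and without one of them:

  -- MYSCHEMA.ORDERS is an illustrative table with columns
  -- ORDER_ID, INTERNAL_FLAG, and CUSTOMER_NAME.
  CREATE VIEW MYSCHEMA.ORDERS_V AS
    SELECT CUSTOMER_NAME, ORDER_ID     -- reordered; INTERNAL_FLAG is omitted
      FROM MYSCHEMA.ORDERS;

  -- Applications that reference MYSCHEMA.ORDERS_V see only these two
  -- columns, in this order, as if the table had been redefined.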
Part 7. Appendixes
Relationship to other tables: The activity table is a parent table of the project activity table, through a foreign key on column ACTNO.
Because the table is self-referencing, and also is part of a cycle of dependencies, its foreign keys must be added later with these statements:
ALTER TABLE DSN8810.DEPT FOREIGN KEY RDD (ADMRDEPT) REFERENCES DSN8810.DEPT ON DELETE CASCADE; ALTER TABLE DSN8810.DEPT FOREIGN KEY RDE (MGRNO) REFERENCES DSN8810.EMP ON DELETE SET NULL;
Content of the department table: Table 171 shows the content of the columns.
Table 171. Columns of the department table
Column   Column name   Description
1        DEPTNO        Department ID, the primary key
2        DEPTNAME      A name describing the general activities of the department
3        MGRNO         Employee number (EMPNO) of the department manager
4        ADMRDEPT      ID of the department to which this department reports; the department at the highest level reports to itself
5        LOCATION      The remote location name
The LOCATION column contains nulls until sample job DSNTEJ6 updates this column with the location name. Relationship to other tables: The table is self-referencing: the value of the administering department must be a department ID. The table is a parent table of:
v The employee table, through a foreign key on column WORKDEPT v The project table, through a foreign key on column DEPTNO. It is a dependent of the employee table, through its foreign key on column MGRNO.
Content of the employee table: Table 174 shows the content of the columns. The table has a check constraint, NUMBER, which checks that the phone number is in the numeric range 0000 to 9999.
Table 174. Columns of the employee table
Column   Column name   Description
1        EMPNO         Employee number (the primary key)
2        FIRSTNME      First name of employee
3        MIDINIT       Middle initial of employee
4        LASTNAME      Last name of employee
5        WORKDEPT      ID of department in which the employee works
6        PHONENO       Employee telephone number
7        HIREDATE      Date of hire
8        JOB           Job held by the employee
9        EDLEVEL       Number of years of formal education
10       SEX           Sex of the employee (M or F)
11       BIRTHDATE     Date of birth
12       SALARY        Yearly salary in dollars
13       BONUS         Yearly bonus in dollars
14       COMM          Yearly commission in dollars
Table 176 and Table 177 on page 998 show the content of the employee table:
Table 176. Left half of DSN8810.EMP: employee table. Note that a blank in the MIDINIT column is an actual value of ' ' rather than null.
EMPNO 000010 000020 000030 000050 000060 000070 000090 000100 000110 000120 000130 000140 000150 000160 000170 000180 000190 000200 000210 000220 000230 000240 000250 000260 000270 000280 000290 000300 000310 000320 000330 000340 FIRSTNME CHRISTINE MICHAEL SALLY JOHN IRVING EVA EILEEN THEODORE VINCENZO SEAN DOLORES HEATHER BRUCE ELIZABETH MASATOSHI MARILYN JAMES DAVID WILLIAM JENNIFER JAMES SALVATORE DANIEL SYBIL MARIA ETHEL JOHN PHILIP MAUDE RAMLAL WING JASON MIDINIT I L A B F D W Q G M A R J S H T K J M S P L R R X F V R LASTNAME HAAS THOMPSON KWAN GEYER STERN PULASKI HENDERSON SPENSER LUCCHESSI OCONNELL QUINTANA NICHOLLS ADAMSON PIANKA YOSHIMURA SCOUTTEN WALKER BROWN JONES LUTZ JEFFERSON MARINO SMITH JOHNSON PEREZ SCHNEIDER PARKER SMITH SETRIGHT MEHTA LEE GOUNOT WORKDEPT A00 B01 C01 E01 D11 D21 E11 E21 A00 A00 C01 C01 D11 D11 D11 D11 D11 D11 D11 D11 D21 D21 D21 D21 D21 E11 E11 E11 E11 E21 E21 E21 PHONENO 3978 3476 4738 6789 6423 7831 5498 0972 3490 2167 4578 1793 4510 3782 2890 1682 2986 4501 0942 0672 2094 3780 0961 8953 9001 8997 4502 2095 3332 9990 2103 5698 HIREDATE 1965-01-01 1973-10-10 1975-04-05 1949-08-17 1973-09-14 1980-09-30 1970-08-15 1980-06-19 1958-05-16 1963-12-05 1971-07-28 1976-12-15 1972-02-12 1977-10-11 1978-09-15 1973-07-07 1974-07-26 1966-03-03 1979-04-11 1968-08-29 1966-11-21 1979-12-05 1969-10-30 1975-09-11 1980-09-30 1967-03-24 1980-05-30 1972-06-19 1964-09-12 1965-07-07 1976-02-23 1947-05-05
Table 176. Left half of DSN8810.EMP: employee table (continued). Note that a blank in the MIDINIT column is an actual value of ' ' rather than null.
EMPNO 200010 200120 200140 200170 200220 200240 200280 200310 200330 200340 FIRSTNME DIAN GREG KIM KIYOSHI REBA ROBERT EILEEN MICHELLE HELENA ROY MIDINIT J N K M R F R LASTNAME HEMMINGER ORLANDO NATZ YAMAMOTO JOHN MONTEVERDE SCHWARTZ SPRINGER WONG ALONZO WORKDEPT A00 A00 C01 D11 D11 D21 E11 E11 E21 E21 PHONENO 3978 2167 1793 2890 0672 3780 8997 3332 2103 5698 HIREDATE 1965-01-01 1972-05-05 1976-12-15 1978-09-15 1968-08-29 1979-12-05 1967-03-24 1964-09-12 1976-02-23 1947-05-05
Relationship to other tables: The table is a parent table of: v The department table, through a foreign key on column MGRNO v The project table, through a foreign key on column RESPEMP. It is a dependent of the department table, through its foreign key on column WORKDEPT.
DB2 requires an auxiliary table for each LOB column in a table. These statements define the auxiliary tables for the three LOB columns in DSN8810.EMP_PHOTO_RESUME:
CREATE AUX TABLE DSN8810.AUX_BMP_PHOTO IN DSN8D81L.DSN8S81M STORES DSN8810.EMP_PHOTO_RESUME COLUMN BMP_PHOTO; CREATE AUX TABLE DSN8810.AUX_PSEG_PHOTO IN DSN8D81L.DSN8S81L STORES DSN8810.EMP_PHOTO_RESUME COLUMN PSEG_PHOTO; CREATE AUX TABLE DSN8810.AUX_EMP_RESUME IN DSN8D81L.DSN8S81N STORES DSN8810.EMP_PHOTO_RESUME COLUMN RESUME;
Content of the employee photo and resume table: Table 178 shows the content of the columns.
Table 178. Columns of the employee photo and resume table
Column   Column name   Description
1        EMPNO         Employee ID (the primary key)
2        EMP_ROWID     Row ID to uniquely identify each row of the table. DB2 supplies the values of this column.
3        PSEG_PHOTO    Employee photo, in PSEG format
4        BMP_PHOTO     Employee photo, in BMP format
5        RESUME        Employee resume
Table 179 shows the indexes for the employee photo and resume table:
Table 179. Indexes of the employee photo and resume table Name DSN8810.XEMP_PHOTO_RESUME On Column EMPNO Type of Index Primary, ascending
Table 180 shows the indexes for the auxiliary tables for the employee photo and resume table:
Table 180. Indexes of the auxiliary tables for the employee photo and resume table
Name                      On table                 Type of index
DSN8810.XAUX_BMP_PHOTO    DSN8810.AUX_BMP_PHOTO    Unique
DSN8810.XAUX_PSEG_PHOTO   DSN8810.AUX_PSEG_PHOTO   Unique
DSN8810.XAUX_EMP_RESUME   DSN8810.AUX_EMP_RESUME   Unique
Relationship to other tables: The table is a parent table of the project table, through a foreign key on column RESPEMP.
Because the table is self-referencing, the foreign key for that constraint must be added later with:
ALTER TABLE DSN8810.PROJ FOREIGN KEY RPP (MAJPROJ) REFERENCES DSN8810.PROJ ON DELETE CASCADE;
Content of the project table: Table 181 shows the content of the columns.
Table 181. Columns of the project table
Column   Column name   Description
1        PROJNO        Project ID (the primary key)
2        PROJNAME      Project name
3        DEPTNO        ID of department responsible for the project
4        RESPEMP       ID of employee responsible for the project
5        PRSTAFF       Estimated mean number of persons needed between PRSTDATE and PRENDATE to achieve the whole project, including any subprojects
6        PRSTDATE      Estimated project start date
7        PRENDATE      Estimated project end date
8        MAJPROJ       ID of any project of which this project is a part
Relationship to other tables: The table is self-referencing: a nonnull value of MAJPROJ must be a project number. The table is a parent table of the project activity table, through a foreign key on column PROJNO. It is a dependent of: v The department table, through its foreign key on DEPTNO v The employee table, through its foreign key on RESPEMP.
ON DELETE RESTRICT, FOREIGN KEY RPAA (ACTNO) REFERENCES DSN8810.ACT ON DELETE RESTRICT) IN DSN8D81A.DSN8S81P CCSID EBCDIC;
Content of the project activity table: Table 183 shows the content of the columns.
Table 183. Columns of the project activity table
Column   Column name   Description
1        PROJNO        Project ID
2        ACTNO         Activity ID
3        ACSTAFF       Estimated mean number of employees needed to staff the activity
4        ACSTDATE      Estimated activity start date
5        ACENDATE      Estimated activity completion date
Relationship to other tables: The table is a parent table of the employee to project activity table, through a foreign key on columns PROJNO, ACTNO, and EMSTDATE. It is a dependent of: v The activity table, through its foreign key on column ACTNO v The project table, through its foreign key on column PROJNO
FOREIGN KEY REPAE (EMPNO) REFERENCES DSN8810.EMP ON DELETE RESTRICT) IN DSN8D81A.DSN8S81P CCSID EBCDIC;
Content of the employee to project activity table: Table 185 shows the content of the columns.
Table 185. Columns of the employee to project activity table
Column   Column name   Description
1        EMPNO         Employee ID number
2        PROJNO        Project ID of the project
3        ACTNO         ID of the activity within the project
4        EMPTIME       A proportion of the employee's full time (between 0.00 and 1.00) to be spent on the activity
5        EMSTDATE      Date the activity starts
6        EMENDATE      Date the activity ends
Table 186 shows the indexes for the employee to project activity table:
Table 186. Indexes of the employee to project activity table
Name                   On columns                       Type of index
DSN8810.XEMPPROJACT1   PROJNO, ACTNO, EMSTDATE, EMPNO   Unique, ascending
DSN8810.XEMPPROJACT2   EMPNO                            Ascending
Relationship to other tables: The table is a dependent of:
v The employee table, through its foreign key on column EMPNO
v The project activity table, through its foreign key on columns PROJNO, ACTNO, and EMSTDATE.
This table has no indexes. Relationship to other tables: This table has no relationship to other tables.
Figure: Relationships among the sample tables. The referential constraints connect DEPT, EMP, EMP_PHOTO_RESUME, PROJ, PROJACT, EMPPROJACT, and ACT, with delete rules of CASCADE, SET NULL, and RESTRICT.
Table 188. Views on sample tables View name VDEPT VHDEPT VEMP On tables or views Used in application DEPT DEPT EMP Organization Project Distributed organization Distributed organization Organization Project Project Project Project Project Organization Organization
VPROJ VACT VPROJACT VEMPPROJACT VDEPMG1 VEMPDPT1 VASTRDE1 VASTRDE2 VPROJRE1 VPSTRDE1 VPSTRDE2 VFORPLA VSTAFAC1 VSTAFAC2
PROJ ACT PROJACT EMPROJACT DEPT EMP DEPT EMP DEPT VDEPMG1 EMP PROJ EMP VPROJRE1 VPROJRE2 VPROJRE1 VPROJRE1 EMPPROJACT PROJACT ACT EMPPROJACT ACT EMP EMP DEPT EMP
VPHONE VEMPLP
Phone Phone
The following SQL statements are used to create the sample views:
CREATE VIEW DSN8810.VDEPT AS SELECT ALL DEPTNO , DEPTNAME, MGRNO , ADMRDEPT FROM DSN8810.DEPT; Figure 251. VDEPT
CREATE VIEW DSN8810.VHDEPT AS SELECT ALL DEPTNO , DEPTNAME, MGRNO , ADMRDEPT, LOCATION FROM DSN8810.DEPT; Figure 252. VHDEPT
CREATE VIEW DSN8810.VEMP AS SELECT ALL EMPNO , FIRSTNME, MIDINIT , LASTNAME, WORKDEPT FROM DSN8810.EMP; Figure 253. VEMP CREATE VIEW DSN8810.VPROJ AS SELECT ALL PROJNO, PROJNAME, DEPTNO, RESPEMP, PRSTAFF, PRSTDATE, PRENDATE, MAJPROJ FROM DSN8810.PROJ ; Figure 254. VPROJ CREATE VIEW DSN8810.VACT AS SELECT ALL ACTNO , ACTKWD , ACTDESC FROM DSN8810.ACT ; Figure 255. VACT CREATE VIEW DSN8810.VPROJACT AS SELECT ALL PROJNO,ACTNO, ACSTAFF, ACSTDATE, ACENDATE FROM DSN8810.PROJACT ; Figure 256. VPROJACT CREATE VIEW DSN8810.VEMPPROJACT AS SELECT ALL EMPNO, PROJNO, ACTNO, EMPTIME, EMSTDATE, EMENDATE FROM DSN8810.EMPPROJACT ; Figure 257. VEMPPROJACT CREATE VIEW DSN8810.VDEPMG1 (DEPTNO, DEPTNAME, MGRNO, FIRSTNME, MIDINIT, LASTNAME, ADMRDEPT) AS SELECT ALL DEPTNO, DEPTNAME, EMPNO, FIRSTNME, MIDINIT, LASTNAME, ADMRDEPT FROM DSN8810.DEPT LEFT OUTER JOIN DSN8810.EMP ON MGRNO = EMPNO ; Figure 258. VDEPMG1
CREATE VIEW DSN8810.VEMPDPT1 (DEPTNO, DEPTNAME, EMPNO, FRSTINIT, MIDINIT, LASTNAME, WORKDEPT) AS SELECT ALL DEPTNO, DEPTNAME, EMPNO, SUBSTR(FIRSTNME, 1, 1), MIDINIT, LASTNAME, WORKDEPT FROM DSN8810.DEPT RIGHT OUTER JOIN DSN8810.EMP ON WORKDEPT = DEPTNO ; Figure 259. VEMPDPT1
CREATE VIEW DSN8810.VASTRDE1 (DEPT1NO,DEPT1NAM,EMP1NO,EMP1FN,EMP1MI,EMP1LN,TYPE2, DEPT2NO,DEPT2NAM,EMP2NO,EMP2FN,EMP2MI,EMP2LN) AS SELECT ALL D1.DEPTNO,D1.DEPTNAME,D1.MGRNO,D1.FIRSTNME,D1.MIDINIT, D1.LASTNAME, 1, D2.DEPTNO,D2.DEPTNAME,D2.MGRNO,D2.FIRSTNME,D2.MIDINIT, D2.LASTNAME FROM DSN8810.VDEPMG1 D1, DSN8810.VDEPMG1 D2 WHERE D1.DEPTNO = D2.ADMRDEPT ; Figure 260. VASTRDE1 CREATE VIEW DSN8810.VASTRDE2 (DEPT1NO,DEPT1NAM,EMP1NO,EMP1FN,EMP1MI,EMP1LN,TYPE2, DEPT2NO,DEPT2NAM,EMP2NO,EMP2FN,EMP2MI,EMP2LN) AS SELECT ALL D1.DEPTNO,D1.DEPTNAME,D1.MGRNO,D1.FIRSTNME,D1.MIDINIT, D1.LASTNAME,2, D1.DEPTNO,D1.DEPTNAME,E2.EMPNO,E2.FIRSTNME,E2.MIDINIT, E2.LASTNAME FROM DSN8810.VDEPMG1 D1, DSN8810.EMP E2 WHERE D1.DEPTNO = E2.WORKDEPT; Figure 261. VASTRDE2 CREATE VIEW DSN8810.VPROJRE1 (PROJNO,PROJNAME,PROJDEP,RESPEMP,FIRSTNME,MIDINIT, LASTNAME,MAJPROJ) AS SELECT ALL PROJNO,PROJNAME,DEPTNO,EMPNO,FIRSTNME,MIDINIT, LASTNAME,MAJPROJ FROM DSN8810.PROJ, DSN8810.EMP WHERE RESPEMP = EMPNO ; Figure 262. VPROJRE1 CREATE VIEW DSN8810.VPSTRDE1 (PROJ1NO,PROJ1NAME,RESP1NO,RESP1FN,RESP1MI,RESP1LN, PROJ2NO,PROJ2NAME,RESP2NO,RESP2FN,RESP2MI,RESP2LN) AS SELECT ALL P1.PROJNO,P1.PROJNAME,P1.RESPEMP,P1.FIRSTNME,P1.MIDINIT, P1.LASTNAME, P2.PROJNO,P2.PROJNAME,P2.RESPEMP,P2.FIRSTNME,P2.MIDINIT, P2.LASTNAME FROM DSN8810.VPROJRE1 P1, DSN8810.VPROJRE1 P2 WHERE P1.PROJNO = P2.MAJPROJ ; Figure 263. VPSTRDE1
CREATE VIEW DSN8810.VPSTRDE2 (PROJ1NO,PROJ1NAME,RESP1NO,RESP1FN,RESP1MI,RESP1LN, PROJ2NO,PROJ2NAME,RESP2NO,RESP2FN,RESP2MI,RESP2LN) AS SELECT ALL P1.PROJNO,P1.PROJNAME,P1.RESPEMP,P1.FIRSTNME,P1.MIDINIT, P1.LASTNAME, P1.PROJNO,P1.PROJNAME,P1.RESPEMP,P1.FIRSTNME,P1.MIDINIT, P1.LASTNAME FROM DSN8810.VPROJRE1 P1 WHERE NOT EXISTS (SELECT * FROM DSN8810.VPROJRE1 P2 WHERE P1.PROJNO = P2.MAJPROJ) ; Figure 264. VPSTRDE2
CREATE VIEW DSN8810.VFORPLA (PROJNO,PROJNAME,RESPEMP,PROJDEP,FRSTINIT,MIDINIT,LASTNAME) AS SELECT ALL F1.PROJNO,PROJNAME,RESPEMP,PROJDEP, SUBSTR(FIRSTNME, 1, 1), MIDINIT, LASTNAME FROM DSN8810.VPROJRE1 F1 LEFT OUTER JOIN DSN8810.EMPPROJACT F2 ON F1.PROJNO = F2.PROJNO; Figure 265. VFORPLA CREATE VIEW DSN8810.VSTAFAC1 (PROJNO, ACTNO, ACTDESC, EMPNO, FIRSTNME, MIDINIT, LASTNAME, EMPTIME,STDATE,ENDATE, TYPE) AS SELECT ALL PA.PROJNO, PA.ACTNO, AC.ACTDESC, , , , , PA.ACSTAFF, PA.ACSTDATE, PA.ACENDATE,1 FROM DSN8810.PROJACT PA, DSN8810.ACT AC WHERE PA.ACTNO = AC.ACTNO ; Figure 266. VSTAFAC1 CREATE VIEW DSN8810.VSTAFAC2 (PROJNO, ACTNO, ACTDESC, EMPNO, FIRSTNME, MIDINIT, LASTNAME, EMPTIME,STDATE, ENDATE, TYPE) AS SELECT ALL EP.PROJNO, EP.ACTNO, AC.ACTDESC, EP.EMPNO,EM.FIRSTNME, EM.MIDINIT, EM.LASTNAME, EP.EMPTIME, EP.EMSTDATE, EP.EMENDATE,2 FROM DSN8810.EMPPROJACT EP, DSN8810.ACT AC, DSN8810.EMP EM WHERE EP.ACTNO = AC.ACTNO AND EP.EMPNO = EM.EMPNO ; Figure 267. VSTAFAC2
CREATE VIEW DSN8810.VPHONE
  (LASTNAME, FIRSTNAME, MIDDLEINITIAL, PHONENUMBER, EMPLOYEENUMBER, DEPTNUMBER, DEPTNAME)
  AS SELECT ALL LASTNAME, FIRSTNME, MIDINIT,
                VALUE(PHONENO, ' '),
                EMPNO, DEPTNO, DEPTNAME
  FROM DSN8810.EMP, DSN8810.DEPT
  WHERE WORKDEPT = DEPTNO;
Figure 268. VPHONE
CREATE VIEW DSN8810.VEMPLP (EMPLOYEENUMBER, PHONENUMBER) AS SELECT ALL EMPNO , PHONENO FROM DSN8810.EMP ; Figure 269. VEMPLP
Figure 270. Storage group (DSN8Gvr0) and databases that are used by the sample application
In addition to the storage group and databases shown in Figure 270, the storage group DSN8G81U and database DSN8D81U are created when you run DSNTEJ2A.
Storage group
The default storage group, SYSDEFLT, created when DB2 is installed, is not used to store sample application data. The storage group used to store sample application data is defined by this statement:
CREATE STOGROUP DSN8G810 VOLUMES (DSNV01) VCAT DSNC810;
Databases
| | | The default database, created when DB2 is installed, is not used to store the sample application data. DSN8D81P is the database that is used for tables that are related to programs. The remainder of the databases are used for tables that are related to applications. They are defined by the following statements:
CREATE DATABASE DSN8D81A STOGROUP DSN8G810 BUFFERPOOL BP0 CCSID EBCDIC; CREATE DATABASE DSN8D81P STOGROUP DSN8G810 BUFFERPOOL BP0 CCSID EBCDIC; CREATE DATABASE DSN8D81L STOGROUP DSN8G810 BUFFERPOOL BP0 CCSID EBCDIC;
CREATE DATABASE DSN8D81E STOGROUP DSN8G810 BUFFERPOOL BP0 CCSID UNICODE; CREATE DATABASE DSN8D81U STOGROUP DSN8G81U CCSID EBCDIC;
Table spaces
The following table spaces are explicitly defined by the following statements. The table spaces not explicitly defined are created implicitly in the DSN8D81A database, using the default space attributes.
CREATE TABLESPACE DSN8S81D IN DSN8D81A USING STOGROUP DSN8G810 PRIQTY 20 SECQTY 20 ERASE NO LOCKSIZE PAGE LOCKMAX SYSTEM BUFFERPOOL BP0 CLOSE NO CCSID EBCDIC; CREATE TABLESPACE DSN8S81E IN DSN8D81A USING STOGROUP DSN8G810 PRIQTY 20 SECQTY 20 ERASE NO NUMPARTS 4 (PART 1 USING STOGROUP DSN8G810 PRIQTY 12
SECQTY 12, PART 3 USING STOGROUP DSN8G810 PRIQTY 12 SECQTY 12) LOCKSIZE PAGE LOCKMAX SYSTEM BUFFERPOOL BP0 CLOSE NO COMPRESS YES CCSID EBCDIC; CREATE TABLESPACE DSN8S81B IN DSN8D81L USING STOGROUP DSN8G810 PRIQTY 20 SECQTY 20 ERASE NO LOCKSIZE PAGE LOCKMAX SYSTEM BUFFERPOOL BP0 CLOSE NO CCSID EBCDIC; CREATE LOB TABLESPACE DSN8S81M IN DSN8D81L LOG NO; CREATE LOB TABLESPACE DSN8S81L IN DSN8D81L LOG NO; CREATE LOB TABLESPACE DSN8S81N IN DSN8D81L LOG NO; CREATE TABLESPACE DSN8S81C IN DSN8D81P USING STOGROUP DSN8G810 PRIQTY 160 SECQTY 80 SEGSIZE 4 LOCKSIZE TABLE BUFFERPOOL BP0 CLOSE NO CCSID EBCDIC; CREATE TABLESPACE DSN8S81P IN DSN8D81A USING STOGROUP DSN8G810 PRIQTY 160 SECQTY 80 SEGSIZE 4 LOCKSIZE ROW BUFFERPOOL BP0 CLOSE NO CCSID EBCDIC; CREATE TABLESPACE DSN8S81R IN DSN8D81A USING STOGROUP DSN8G810 PRIQTY 20 SECQTY 20 ERASE NO LOCKSIZE PAGE LOCKMAX SYSTEM BUFFERPOOL BP0 CLOSE NO CCSID EBCDIC; CREATE TABLESPACE DSN8S81S IN DSN8D81A USING STOGROUP DSN8G810
PRIQTY 20 SECQTY 20 ERASE NO LOCKSIZE PAGE LOCKMAX SYSTEM BUFFERPOOL BP0 CLOSE NO CCSID EBCDIC;
CREATE TABLESPACE DSN8S81Q IN DSN8D81P USING STOGROUP DSN8G810 PRIQTY 160 SECQTY 80 SEGSIZE 4 LOCKSIZE PAGE BUFFERPOOL BP0 CLOSE NO CCSID EBCDIC; CREATE TABLESPACE DSN8S81U IN DSN8D81E USING STOGROUP DSN8G810 PRIQTY 5 SECQTY 5 ERASE NO LOCKSIZE PAGE LOCKMAX SYSTEM BUFFERPOOL BP0 CLOSE NO CCSID UNICODE;
v IFI applications These applications let you pass DB2 commands from a client program to a stored procedure, which runs the commands at a DB2 server using the instrumentation facility interface (IFI). There are two sets of client programs and stored procedures. One set has a PL/I client and stored procedure; the other set has a C client and stored procedure. v ODBA application This application demonstrates how you can use the IMS ODBA interface to access IMS databases from stored procedures. The stored procedure accesses the IMS sample DL/I database. The client program and the stored procedure are written in COBOL. v Utilities stored procedure application This application demonstrates how to call the utilities stored procedure. For more information on the utilities stored procedure, see Appendix B of DB2 Utility Guide and Reference. v SQL procedure applications These applications demonstrate how to write, prepare, and invoke SQL procedures. One set of applications demonstrates how to prepare SQL procedures using JCL. The other set of applications shows how to prepare SQL procedures using the SQL procedure processor. The client programs are written in C. v WLM refresh application This application is a client program that calls the DB2supplied stored procedure WLM_REFRESH to refresh a WLM environment. This program is written in C. v System parameter reporting application This application is a client program that calls the DB2supplied stored procedure DSNWZP to display the current settings of system parameters. This program is written in C. All stored procedure applications run in the TSO batch environment. User-defined function applications: The user-defined function applications consist of a client program that invokes the sample user-defined functions and a set of user-defined functions that perform the following functions: v Convert the current date to a user-specified format v Convert a date from one format to another v Convert the current time to a user-specified format v Convert a date from one format to another v Return the day of the week for a user-specified date v Return the month for a user-specified date v Format a floating point number as a currency value v Return the table name for a table, view, or alias v Return the qualifier for a table, view or alias v Return the location for a table, view or alias v Return a table of weather information All programs are written in C or C++ and run in the TSO batch environment. LOB application: The LOB application demonstrates how to perform the following tasks: v Define DB2 objects to hold LOB data v Populate DB2 tables with LOB data using the LOAD utility, or using INSERT and UPDATE statements when the data is too large for use with the LOAD utility
v Manipulate the LOB data using LOB locators The programs that create and populate the LOB objects use DSNTIAD and run in the TSO batch environment. The program that manipulates the LOB data is written in C and runs under ISPF/TSO.
Application programs: Tables 190 through 192 on pages 1016 through 1018 provide the program names, JCL member names, and a brief description of some of the programs included for each of the three environments: TSO, IMS, and CICS.
TSO
Table 190. Sample DB2 applications for TSO

Application   Program name   Preparation JCL member name (see note 1)   Attachment facility
Phone         DSN8BC3        DSNTEJ2C                                   DSNELI
              This COBOL batch program lists employee telephone numbers and updates them if requested.
Phone         DSN8BD3        DSNTEJ2D                                   DSNELI
              This C batch program lists employee telephone numbers and updates them if requested.
Phone         DSN8BE3        DSNTEJ2E                                   DSNELI
              This C++ batch program lists employee telephone numbers and updates them if requested.
Phone         DSN8BP3        DSNTEJ2P                                   DSNELI
              This PL/I batch program lists employee telephone numbers and updates them if requested.
Phone         DSN8BF3        DSNTEJ2F                                   DSNELI
              This FORTRAN program lists employee telephone numbers and updates them if requested.
Organization  DSN8HC3        DSNTEJ3C or DSNTEJ6                        DSNALI
              This COBOL ISPF program displays and updates information about a local department. It can also display and update information about an employee at a local or remote location.
Phone         DSN8SC3        DSNTEJ3C                                   DSNALI
              This COBOL ISPF program lists employee telephone numbers and updates them if requested.
Phone         DSN8SP3        DSNTEJ3P                                   DSNALI
              This PL/I ISPF program lists employee telephone numbers and updates them if requested.
UNLOAD        DSNTIAUL       DSNTEJ2A                                   DSNELI
              This assembler language program allows you to unload the data from a table or view and to produce LOAD utility control statements for the data.
Dynamic SQL   DSNTIAD        DSNTIJTM                                   DSNELI
              This assembler language program dynamically executes non-SELECT statements read in from SYSIN; that is, it uses dynamic SQL to execute non-SELECT SQL statements.
Dynamic SQL   DSNTEP2        DSNTEJ1P or DSNTEJ1L                       DSNELI
              This PL/I program dynamically executes SQL statements read in from SYSIN. Unlike DSNTIAD, this application can also execute SELECT statements.
Table 190. Sample DB2 applications for TSO (continued)

Stored procedures
  Program names: DSN8EP1, DSN8EP2, DSN8EPU, DSN8ED1, DSN8ED2, DSN8EC1, DSN8EC2, DSN8ES1, DSN8ED3, DSN8ES2, DSN8ED6, DSN8ED7
  Preparation JCL member names (see note 1): DSNTEJ6P, DSNTEJ6S, DSNTEJ6U, DSNTEJ6D, DSNTEJ6T, DSNTEJ61, DSNTEJ62, DSNTEJ63, DSNTEJ64, DSNTEJ65, DSNTEJ6W, DSNTEJ6Z
  Attachment facilities: DSNELI, DSNALI, or DSNRLI (varies by sample)
  These applications consist of a calling program, a stored procedure program, or both. Samples that are prepared by jobs DSNTEJ6P, DSNTEJ6S, DSNTEJ6D, and DSNTEJ6T execute DB2 commands using the instrumentation facility interface (IFI). DSNTEJ6P and DSNTEJ6S prepare a PL/I version of the application. DSNTEJ6D and DSNTEJ6T prepare a version in C. The C stored procedure uses result sets to return commands to the client. The sample that is prepared by DSNTEJ61 and DSNTEJ62 demonstrates a stored procedure that accesses IMS databases through the ODBA interface. The sample that is prepared by DSNTEJ6U invokes the utilities stored procedure. The sample that is prepared by jobs DSNTEJ63 and DSNTEJ64 demonstrates how to prepare an SQL procedure using JCL. The sample that is prepared by job DSNTEJ65 demonstrates how to prepare an SQL procedure using the SQL procedure processor. The sample that is prepared by job DSNTEJ6W demonstrates how to prepare and run a client program that calls a DB2-supplied stored procedure to refresh a WLM environment. The sample that is prepared by job DSNTEJ6Z demonstrates how to prepare and run a client program that calls a DB2-supplied stored procedure to display the current settings of system parameters.

User-defined functions
  Program names: DSN8DUAD, DSN8DUAT, DSN8DUCD, DSN8DUCT, DSN8DUCY, DSN8DUTI, DSN8DUWC, DSN8DUWF, DSN8EUDN, DSN8EUMN
  Preparation JCL member name (see note 1): DSNTEJ2U
  Attachment facility: DSNELI
  These applications consist of a set of user-defined scalar functions that can be invoked through SPUFI or DSNTEP2 and one user-defined table function, DSN8DUWF, that can be invoked by client program DSN8DUWC. DSN8EUDN and DSN8EUMN are written in C++. All other programs are written in C.

LOBs
  Preparation JCL member names (see note 1): DSNTEJ71, DSNTEJ73, DSNTEJ75
  Attachment facility: DSNELI
  These applications demonstrate how to populate a LOB column that is greater than 32KB, manipulate the data using the POSSTR and SUBSTR built-in functions, and display the data in ISPF using GDDM.
Notes: 1. For information about the DD statements in the sample JCL, see Using the DB2 precompiler on page 474.
IMS
Table 191. Sample DB2 applications for IMS

Application   Program names               JCL member name (see note 1)   Description
Organization  DSN8IC0, DSN8IC1, DSN8IC2   DSNTEJ4C                       IMS COBOL Organization Application
Organization  DSN8IP0, DSN8IP1, DSN8IP2   DSNTEJ4P                       IMS PL/I Organization Application
Project       DSN8IP6, DSN8IP7, DSN8IP8   DSNTEJ4P                       IMS PL/I Project Application
Phone         DSN8IP3                     DSNTEJ4P                       IMS PL/I Phone Application. This program lists employee telephone numbers and updates them if requested.
Notes: 1. For information about the DD statements in the sample JCL, see Using the DB2 precompiler on page 474.
CICS
Table 192. Sample DB2 applications for CICS

Application   Program names               JCL member name (see note 1)   Description
Organization  DSN8CC0, DSN8CC1, DSN8CC2   DSNTEJ5C                       CICS COBOL Organization Application
Organization  DSN8CP0, DSN8CP1, DSN8CP2   DSNTEJ5P                       CICS PL/I Organization Application
Project       DSN8CP6, DSN8CP7, DSN8CP8   DSNTEJ5P                       CICS PL/I Project Application
Phone         DSN8CP3                     DSNTEJ5P                       CICS PL/I Phone Application. This program lists employee telephone numbers and updates them if requested.
Notes: 1. For information about the DD statements in the sample JCL, see Using the DB2 precompiler on page 474.
DB2 provides four productivity-aid sample programs: DSNTIAUL, DSNTIAD, DSNTEP2, and DSNTEP4.
Because these four programs also accept the static SQL statements CONNECT, SET CONNECTION, and RELEASE, you can use the programs to access DB2 tables at remote locations.
Retrieval of UTF-16 Unicode data: You can use DSNTEP2, DSNTEP4, and DSNTIAUL to retrieve Unicode UTF-16 graphic data. However, these programs might not be able to display some characters, if those characters have no mapping in the target SBCS EBCDIC CCSID.
DSNTIAUL and DSNTIAD are shipped only as source code, so you must precompile, assemble, link, and bind them before you can use them. If you want to use the source code version of DSNTEP2 or DSNTEP4, you must precompile, compile, link, and bind it. You need to bind the object code version of DSNTEP2 or DSNTEP4 before you can use it. Usually a system administrator prepares the programs as part of the installation process. Table 193 on page 1020 indicates which installation job prepares each sample program. All installation jobs are in data set DSN810.SDSNSAMP.
Table 193. Jobs that prepare DSNTIAUL, DSNTIAD, DSNTEP2, and DSNTEP4
Program name       Program preparation job
DSNTIAUL           DSNTEJ2A
DSNTIAD            DSNTIJTM
DSNTEP2 (source)   DSNTEJ1P
DSNTEP2 (object)   DSNTEJ1L
DSNTEP4 (source)   DSNTEJ1P
DSNTEP4 (object)   DSNTEJ1L
To run the sample programs, use the DSN RUN command, which is described in detail in Chapter 2 of DB2 Command Reference. Table 194 lists the load module name and plan name that you must specify, and the parameters that you can specify when you run each program. See the following sections for the meaning of each parameter.
Table 194. DSN RUN option values for DSNTIAUL, DSNTIAD, DSNTEP2, and DSNTEP4
Program name   Load module   Plan        Parameters
DSNTIAUL       DSNTIAUL      DSNTIB81    SQL
                                         number of rows per fetch
                                         TOLWARN(NO|YES)
DSNTIAD        DSNTIAD       DSNTIA81    RC0
                                         SQLTERM(termchar)
DSNTEP2        DSNTEP2       DSNTEP81    ALIGN(MID) or ALIGN(LHS)
                                         NOMIXED or MIXED
                                         SQLTERM(termchar)
                                         TOLWARN(NO|YES)
DSNTEP4        DSNTEP4       DSNTEP481   ALIGN(MID) or ALIGN(LHS)
                                         NOMIXED or MIXED
                                         SQLTERM(termchar)
                                         TOLWARN(NO|YES)
The remainder of this chapter contains the following information about running each program: v Descriptions of the input parameters v Data sets that you must allocate before you run the program v Return codes from the program v Examples of invocation See the sample jobs that are listed in Table 193 for a working example of each program.
Running DSNTIAUL
This section contains information that you need when you run DSNTIAUL, including parameters, data sets, return codes, and invocation examples. To retrieve data from a remote site by using the multi-row fetch capability for enhanced performance, bind DSNTIAUL with the DBPROTOCOL(DRDA) option.
To run DSNTIAUL remotely when it is bound with the DBPROTOCOL(PRIVATE) option, switch DSNTIAUL to single-row fetch mode by specifying 1 for the number of rows per fetch parameter.
DSNTIAUL parameters:
SQL
    Specify SQL to indicate that your input data set contains one or more complete SQL statements, each of which ends with a semicolon. You can include any SQL statement that can be executed dynamically in your input data set. In addition, you can include the static SQL statements CONNECT, SET CONNECTION, or RELEASE. DSNTIAUL uses the SELECT statements to determine which tables to unload and dynamically executes all other statements except CONNECT, SET CONNECTION, and RELEASE. DSNTIAUL executes CONNECT, SET CONNECTION, and RELEASE statically to connect to remote locations.
number of rows per fetch
    Specify a number from 1 to 32767 to specify the number of rows per fetch that DSNTIAUL retrieves. If you do not specify this number, DSNTIAUL retrieves 100 rows per fetch. This parameter can be specified with the SQL parameter. Specify 1 to retrieve data from a remote site when DSNTIAUL is bound with the DBPROTOCOL(PRIVATE) option.
TOLWARN
    Specify NO (the default) or YES to indicate whether DSNTIAUL continues to retrieve rows after receiving an SQL warning:
    NO   If a warning occurs when DSNTIAUL executes an OPEN or FETCH to retrieve rows, DSNTIAUL stops retrieving rows.
         Exception: If the SQLWARN1, SQLWARN2, SQLWARN6, or SQLWARN7 flag is set when DSNTIAUL executes a FETCH to retrieve rows, DSNTIAUL continues to retrieve rows.
    YES  If a warning occurs when DSNTIAUL executes an OPEN or FETCH to retrieve rows, DSNTIAUL continues to retrieve rows.
If you do not specify the SQL parameter, your input data set must contain one or more single-line statements (without a semicolon) that use the following syntax:
table or view name [WHERE conditions] [ORDER BY columns]
Each input statement must be a valid SQL SELECT statement with the clause SELECT * FROM omitted and with no ending semicolon. DSNTIAUL generates a SELECT statement for each input statement by appending your input line to SELECT * FROM, then uses the result to determine which tables to unload. For this input format, the text for each table specification can be a maximum of 72 bytes and must not span multiple lines. You can use the input statements to specify SELECT statements that join two or more tables or select specific columns from a table. If you specify columns, you need to modify the LOAD statement that DSNTIAUL generates.
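For example (a sketch that uses the sample tables), input like the following requests two unload operations; DSNTIAUL appends each line to SELECT * FROM to build the statements that it executes:

   DSN8810.DEPT
   DSN8810.EMP WHERE WORKDEPT='D11' ORDER BY EMPNO

The first line unloads the entire department table into SYSREC00; the second unloads only the rows for employees in department D11, ordered by employee number, into SYSREC01.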
DSNTIAUL data sets:

SYSIN
   Input data set. You cannot enter comments in DSNTIAUL input. The record length for the input data set must be at least 72 bytes. DSNTIAUL reads only the first 72 bytes of each record.
SYSPRINT
   Output data set. DSNTIAUL writes informational and error messages in this data set. The record length for the SYSPRINT data set is 121 bytes.
SYSPUNCH
   Output data set. DSNTIAUL writes the LOAD utility control statements in this data set.
SYSRECnn
   Output data sets. The value nn ranges from 00 to 99. You can have a maximum of 100 output data sets for a single execution of DSNTIAUL. Each data set contains the data that is unloaded when DSNTIAUL processes a SELECT statement from the input data set. Therefore, the number of output data sets must match the number of SELECT statements (if you specify parameter SQL) or table specifications in your input data set.
Define all data sets as sequential data sets. You can specify the record length and block size of the SYSPUNCH and SYSRECnn data sets. The maximum record length for the SYSPUNCH and SYSRECnn data sets is 32760 bytes. DSNTIAUL return codes:
Table 195. DSNTIAUL return codes

Return code   Meaning
0             Successful completion.
4             An SQL statement received a warning code. If the SQL statement was a SELECT
              statement, DB2 did not perform the associated unload operation. If DB2 returns a
              +394, which indicates that it is using optimization hints, DB2 performs the unload
              operation.
8             An SQL statement received an error code. If the SQL statement was a SELECT
              statement, DB2 did not perform the associated unload operation.
12            DSNTIAUL could not open a data set, an SQL statement returned a severe error
              code (-8nn or -9nn), or an error occurred in the SQL message formatting routine.
Examples of DSNTIAUL invocation: Suppose that you want to unload the rows for department D01 from the project table. Because you can fit the table specification on one line, and you do not want to execute any non-SELECT statements, you do not need the SQL parameter. Your invocation looks like the one that is shown in Figure 271:
//UNLOAD   EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 DSN SYSTEM(DSN)
 RUN  PROGRAM(DSNTIAUL) PLAN(DSNTIB81) LIB('DSN810.RUNLIB.LOAD')
//SYSPRINT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSREC00 DD  DSN=DSN8UNLD.SYSREC00,
//             UNIT=SYSDA,SPACE=(32760,(1000,500)),DISP=(,CATLG),
//             VOL=SER=SCR03
//SYSPUNCH DD  DSN=DSN8UNLD.SYSPUNCH,
//             UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG),
//             VOL=SER=SCR03,RECFM=FB,LRECL=120,BLKSIZE=1200
//SYSIN    DD  *
 DSN8810.PROJ WHERE DEPTNO='D01'

Figure 271. DSNTIAUL invocation without the SQL parameter
If you want to obtain the LOAD utility control statements for loading rows into a table, but you do not want to unload the rows, you can set the data set names for the SYSRECnn data sets to DUMMY. For example, to obtain the utility control statements for loading rows into the department table, you invoke DSNTIAUL as shown in Figure 272:
//UNLOAD   EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 DSN SYSTEM(DSN)
 RUN  PROGRAM(DSNTIAUL) PLAN(DSNTIB81) LIB('DSN810.RUNLIB.LOAD')
//SYSPRINT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSREC00 DD  DUMMY
//SYSPUNCH DD  DSN=DSN8UNLD.SYSPUNCH,
//             UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG),
//             VOL=SER=SCR03,RECFM=FB,LRECL=120,BLKSIZE=1200
//SYSIN    DD  *
 DSN8810.DEPT

Figure 272. DSNTIAUL invocation to obtain LOAD control statements
Now suppose that you also want to use DSNTIAUL to do these things:
v Unload all rows from the project table
v Unload only rows from the employee table for employees in departments with department numbers that begin with D, and order the unloaded rows by employee number
v Lock both tables in share mode before you unload them
v Retrieve 250 rows per fetch
For these activities, you must specify the SQL parameter and specify the number of rows per fetch when you run DSNTIAUL. Your DSNTIAUL invocation is shown in Figure 273:
//UNLOAD   EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 DSN SYSTEM(DSN)
 RUN  PROGRAM(DSNTIAUL) PLAN(DSNTIB81) PARMS('SQL,250') -
      LIB('DSN810.RUNLIB.LOAD')
//SYSPRINT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSREC00 DD  DSN=DSN8UNLD.SYSREC00,
//             UNIT=SYSDA,SPACE=(32760,(1000,500)),DISP=(,CATLG),
//             VOL=SER=SCR03
//SYSREC01 DD  DSN=DSN8UNLD.SYSREC01,
//             UNIT=SYSDA,SPACE=(32760,(1000,500)),DISP=(,CATLG),
//             VOL=SER=SCR03
//SYSPUNCH DD  DSN=DSN8UNLD.SYSPUNCH,
//             UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG),
//             VOL=SER=SCR03,RECFM=FB,LRECL=120,BLKSIZE=1200
//SYSIN    DD  *
 LOCK TABLE DSN8810.EMP IN SHARE MODE;
 LOCK TABLE DSN8810.PROJ IN SHARE MODE;
 SELECT * FROM DSN8810.PROJ;
 SELECT * FROM DSN8810.EMP
   WHERE WORKDEPT LIKE 'D%'
   ORDER BY EMPNO;

Figure 273. DSNTIAUL invocation with the SQL parameter
Running DSNTIAD
This section contains information that you need when you run DSNTIAD, including parameters, data sets, return codes, and invocation examples.

DSNTIAD parameters:

RC0
   If you specify this parameter, DSNTIAD ends with return code 0, even if the program encounters SQL errors. If you do not specify RC0, DSNTIAD ends with a return code that reflects the severity of the errors that occur. Without RC0, DSNTIAD terminates if more than 10 SQL errors occur during a single execution.
SQLTERM(termchar)
   Specify this parameter to indicate the character that you use to end each SQL statement. You can use any special character except one of those listed in Table 196. SQLTERM(;) is the default.
Table 196. Invalid special characters for the SQL terminator

Name                     Character   Hexadecimal representation
blank                                X'40'
comma                    ,           X'6B'
double quotation mark    "           X'7F'
left parenthesis         (           X'4D'
right parenthesis        )           X'5D'
single quotation mark    '           X'7D'
underscore               _           X'6D'
Use a character other than a semicolon if you plan to execute a statement that contains embedded semicolons. Example: Suppose that you specify the parameter SQLTERM(#) to indicate that the character # is the statement terminator. Then a CREATE TRIGGER statement with embedded semicolons looks like this:
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END#
A CREATE PROCEDURE statement with embedded semicolons looks like the following statement:
CREATE PROCEDURE PROC1 (IN PARM1 INT, OUT SCODE INT)
  LANGUAGE SQL
  BEGIN
    DECLARE SQLCODE INT;
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
      SET SCODE = SQLCODE;
    UPDATE TBL1 SET COL1 = PARM1;
  END #
Be careful to choose a character for the statement terminator that is not used within the statement.

DSNTIAD data sets:

SYSIN
   Input data set. In this data set, you can enter any number of non-SELECT SQL statements, each terminated with a semicolon. A statement can span multiple lines, but DSNTIAD reads only the first 72 bytes of each line. You cannot enter comments in DSNTIAD input.
SYSPRINT
   Output data set. DSNTIAD writes informational and error messages in this data set. DSNTIAD sets the record length of this data set to 121 bytes and the block size to 1210 bytes.
Define all data sets as sequential data sets. DSNTIAD return codes:
Table 197. DSNTIAD return codes

Return code   Meaning
0             Successful completion, or the user specified the RC0 parameter.
4             An SQL statement received a warning code.
8             An SQL statement received an error code.
12            DSNTIAD could not open a data set, the length of an SQL statement was more than
              32760 bytes, an SQL statement returned a severe error code (-8nn or -9nn), or an
              error occurred in the SQL message formatting routine.
Example of DSNTIAD invocation: Suppose that you want to execute 20 UPDATE statements, and you do not want DSNTIAD to terminate if more than 10 errors occur. Your invocation looks like the one that is shown in Figure 274:
//RUNTIAD  EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 DSN SYSTEM(DSN)
 RUN  PROGRAM(DSNTIAD) PLAN(DSNTIA81) PARMS('RC0') -
      LIB('DSN810.RUNLIB.LOAD')
//SYSPRINT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSIN    DD  *
 UPDATE DSN8810.PROJ SET DEPTNO='J01' WHERE DEPTNO='A01';
 UPDATE DSN8810.PROJ SET DEPTNO='J02' WHERE DEPTNO='A02';
 .
 .
 .
 UPDATE DSN8810.PROJ SET DEPTNO='J20' WHERE DEPTNO='A20';

Figure 274. DSNTIAD invocation with the RC0 parameter
Running DSNTEP2 and DSNTEP4

This section contains information that you need when you run DSNTEP2 or DSNTEP4, including parameters, data sets, return codes, and invocation examples. Like DSNTIAD, DSNTEP2 and DSNTEP4 accept the SQLTERM(termchar) parameter to change the character that ends each SQL statement (see Table 194). For example, a CREATE PROCEDURE statement with embedded semicolons, using # as the terminator, looks like the following statement:

CREATE PROCEDURE PROC1 (IN PARM1 INT, OUT SCODE INT)
  LANGUAGE SQL
  BEGIN
    DECLARE SQLCODE INT;
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
      SET SCODE = SQLCODE;
    UPDATE TBL1 SET COL1 = PARM1;
  END #
Be careful to choose a character for the statement terminator that is not used within the statement.

If you want to change the SQL terminator within a series of SQL statements, you can use the --#SET TERMINATOR control statement.

Example: Suppose that you have an existing set of SQL statements to which you want to add a CREATE TRIGGER statement that has embedded semicolons. You can use the default SQLTERM value, which is a semicolon, for all of the existing SQL statements. Before you execute the CREATE TRIGGER statement, include the --#SET TERMINATOR # control statement to change the SQL terminator to the character #:
SELECT * FROM DEPT;
SELECT * FROM ACT;
SELECT * FROM EMPPROJACT;
SELECT * FROM PROJ;
SELECT * FROM PROJACT;
--#SET TERMINATOR #
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END#
See the following discussion of the SYSIN data set for more information about the --#SET control statement.

TOLWARN
   Specify NO (the default) or YES to indicate whether DSNTEP2 or DSNTEP4 continues to process SQL SELECT statements after receiving an SQL warning:
   NO   If a warning occurs when DSNTEP2 or DSNTEP4 executes an OPEN or FETCH for a SELECT statement, DSNTEP2 or DSNTEP4 stops processing the SELECT statement.
        If SQLCODE +445 or SQLCODE +595 occurs when DSNTEP2 or DSNTEP4 executes a FETCH for a SELECT statement, DSNTEP2 or DSNTEP4 continues to process the SELECT statement.
        If SQLCODE +802 occurs when DSNTEP2 or DSNTEP4 executes a FETCH for a SELECT statement, DSNTEP2 or DSNTEP4 continues to process the SELECT statement if the TOLARTHWRN control statement is set to YES.
   YES  If a warning occurs when DSNTEP2 or DSNTEP4 executes an OPEN or FETCH for a SELECT statement, DSNTEP2 or DSNTEP4 continues to process the SELECT statement.

DSNTEP2 and DSNTEP4 data sets:

SYSIN
   Input data set. In this data set, you can enter any number of SQL
statements, each terminated with a semicolon. A statement can span multiple lines, but DSNTEP2 or DSNTEP4 reads only the first 72 bytes of each line. You can enter comments in DSNTEP2 or DSNTEP4 input with an asterisk (*) in column 1 or two hyphens (--) anywhere on a line. Text that follows the asterisk is considered to be comment text. Text that follows two hyphens can be comment text or a control statement. Comments are not considered in dynamic statement caching. Comments and control statements cannot span lines. You can enter control statements of the following form in the DSNTEP2 and DSNTEP4 input data set:
--#SET control-option value
   The control options are:

   TERMINATOR
      The SQL statement terminator. value is any single-byte character other than one of those that are listed in Table 196. The default is the value of the SQLTERM parameter.
   ROWS_FETCH
      The number of rows that are to be fetched from the result table. value is a numeric literal between -1 and the number of rows in the result table. -1 means that all rows are to be fetched. The default is -1.
   ROWS_OUT
      The number of fetched rows that are to be sent to the output data set. value is a numeric literal between -1 and the number of fetched rows. -1 means that all fetched rows are to be sent to the output data set. The default is -1.
   MULT_FETCH
      This option is valid only for DSNTEP4. Use MULT_FETCH to specify the number of rows that are to be fetched at one time from the result table. The default fetch amount for DSNTEP4 is 100 rows, but you can specify from 1 to 32676 rows.
   TOLWARN
      Indicates whether DSNTEP2 and DSNTEP4 continue to process an SQL SELECT after an SQL warning is returned. value is either NO (the default) or YES.
   TOLARTHWRN
      Indicates whether DSNTEP2 and DSNTEP4 continue to process an SQL SELECT statement after an arithmetic SQL warning (SQLCODE +802) is returned. value is either NO (the default) or YES.
   MAXERRORS
      Specifies the number of errors that DSNTEP2 and DSNTEP4 handle before processing stops. The default is 10.

   A sample input stream that uses several of these control statements follows the data set descriptions.

SYSPRINT
   Output data set. DSNTEP2 and DSNTEP4 write informational and error messages in this data set. DSNTEP2 and DSNTEP4 write output records of no more than 133 bytes.
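For example, the following SYSIN stream is a sketch that changes the statement terminator, limits the number of rows that are fetched, and sends only the first ten fetched rows to the output data set (it assumes that the DSN8810.EMP sample table is available):

   --#SET TERMINATOR #
   --#SET ROWS_FETCH 100
   --#SET ROWS_OUT 10
   SELECT * FROM DSN8810.EMP
     ORDER BY EMPNO#

With these settings, DSNTEP2 or DSNTEP4 fetches no more than 100 rows of the result table and writes only the first 10 fetched rows to SYSPRINT.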
Example of DSNTEP2 invocation: Suppose that you want to use DSNTEP2 to execute SQL SELECT statements that might contain DBCS characters. You also want left-aligned output. Your invocation looks like the one in Figure 275:
//RUNTEP2  EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 DSN SYSTEM(DSN)
 RUN  PROGRAM(DSNTEP2) PLAN(DSNTEP81) PARMS('/ALIGN(LHS) MIXED') -
      LIB('DSN810.RUNLIB.LOAD')
//SYSPRINT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSIN    DD  *
 SELECT * FROM DSN8810.PROJ;

Figure 275. DSNTEP2 invocation with the ALIGN(LHS) and MIXED parameters
Example of DSNTEP4 invocation: Suppose that you want to use DSNTEP4 to execute SQL SELECT statements that might contain DBCS characters, and you want center-aligned output. You also want DSNTEP4 to fetch 250 rows at a time. Your invocation looks like the one in Figure 276:
//RUNTEP2  EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 DSN SYSTEM(DSN)
 RUN  PROGRAM(DSNTEP4) PLAN(DSNTEP481) PARMS('/ALIGN(MID) MIXED') -
      LIB('DSN810.RUNLIB.LOAD')
//SYSPRINT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSIN    DD  *
 --#SET MULT_FETCH 250
 SELECT * FROM DSN8810.EMP;

Figure 276. DSNTEP4 invocation with the ALIGN(MID) and MIXED parameters and using the MULT_FETCH control option
Storage allocation
COBOL does not provide a means to allocate main storage within a program. You can achieve the same end by having an initial program that allocates the storage and then calls a second program that manipulates the pointer. (COBOL does not permit you to manipulate the pointer directly, because errors and abends are likely to occur.)
The initial program is extremely simple. It includes a working storage section that allocates the maximum amount of storage needed. This program then calls the second program, passing the area or areas on the CALL statement. The second program defines the area in the linkage section and can then use pointers within the area. If you need to allocate parts of storage, the best method is to use indexes or subscripts. You can use subscripts for arithmetic and comparison operations.
Example
Figure 277 shows an example of the initial program DSN8BCU1 that allocates the storage and calls the second program DSN8BCU2 shown in Figure 278. DSN8BCU2 then defines the passed storage areas in its linkage section and includes the USING clause on its PROCEDURE DIVISION statement. Defining the pointers, then redefining them as numeric, permits some manipulation of the pointers that you cannot perform directly. For example, you cannot add the column length to the record pointer, but you can add the column length to the numeric value that redefines the pointer.
**** DSN8BCU1- DB2 SAMPLE BATCH COBOL UNLOAD PROGRAM *********** * * * MODULE NAME = DSN8BCU1 * * * * DESCRIPTIVE NAME = DB2 SAMPLE APPLICATION * * UNLOAD PROGRAM * * BATCH * * ENTERPRISE COBOL FOR Z/OS OR * * IBM COBOL FOR MVS & VM * * * * FUNCTION = THIS MODULE PROVIDES THE STORAGE NEEDED BY * * DSN8BCU2 AND CALLS THAT PROGRAM. * * * * NOTES = * * DEPENDENCIES = NONE. * * * * RESTRICTIONS = * * THE MAXIMUM NUMBER OF COLUMNS IS 750, * * WHICH IS THE SQL LIMIT. * * * * DATA RECORDS ARE LIMITED TO 32700 BYTES, * * INCLUDING DATA, LENGTHS FOR VARCHAR DATA, * * AND SPACE FOR NULL INDICATORS. * * * * MODULE TYPE = COBOL PROGRAM * * PROCESSOR = ENTERPRISE COBOL FOR Z/OS OR * * IBM COBOL FOR MVS & VM * * MODULE SIZE = SEE LINK EDIT * * ATTRIBUTES = REENTRANT * * * Figure 277. Initial program that allocates storage (Part 1 of 2)
* ENTRY POINT = DSN8BCU1 * * PURPOSE = SEE FUNCTION * * LINKAGE = INVOKED FROM DSN RUN * * INPUT = NONE * * OUTPUT = NONE * * * * EXIT-NORMAL = RETURN CODE 0 NORMAL COMPLETION * * * * EXIT-ERROR = * * RETURN CODE = NONE * * ABEND CODES = NONE * * ERROR-MESSAGES = NONE * * * * EXTERNAL REFERENCES = * * ROUTINES/SERVICES = * * DSN8BCU2 - ACTUAL UNLOAD PROGRAM * * * * DATA-AREAS = NONE * * CONTROL-BLOCKS = NONE * * * * TABLES = NONE * * CHANGE-ACTIVITY = NONE * * * * *PSEUDOCODE* * * * * PROCEDURE * * CALL DSN8BCU2. * * END. * *---------------------------------------------------------------* / IDENTIFICATION DIVISION. *----------------------PROGRAM-ID. DSN8BCU1 * ENVIRONMENT DIVISION. * CONFIGURATION SECTION. DATA DIVISION. * WORKING-STORAGE SECTION. * 01 WORKAREA-IND. 02 WORKIND PIC S9(4) COMP OCCURS 750 TIMES. 01 RECWORK. 02 RECWORK-LEN PIC S9(8) COMP VALUE 32700. 02 RECWORK-CHAR PIC X(1) OCCURS 32700 TIMES. * PROCEDURE DIVISION. * CALL DSN8BCU2 USING WORKAREA-IND RECWORK. GOBACK. Figure 277. Initial program that allocates storage (Part 2 of 2)
**** DSN8BCU2- DB2 SAMPLE BATCH COBOL UNLOAD PROGRAM *********** * * * MODULE NAME = DSN8BCU2 * * * * DESCRIPTIVE NAME = DB2 SAMPLE APPLICATION * * UNLOAD PROGRAM * * BATCH * * ENTERPRISE COBOL FOR Z/OS OR * * IBM COBOL FOR MVS & VM * * * * FUNCTION = THIS MODULE ACCEPTS A TABLE NAME OR VIEW NAME * * AND UNLOADS THE DATA IN THAT TABLE OR VIEW. * * READ IN A TABLE NAME FROM SYSIN. * * PUT DATA FROM THE TABLE INTO DD SYSREC01. * * WRITE RESULTS TO SYSPRINT. * * * * NOTES = * * DEPENDENCIES = NONE. * * * * RESTRICTIONS = * * THE SQLDA IS LIMITED TO 33016 BYTES. * * THIS SIZE ALLOWS FOR THE DB2 MAXIMUM * * OF 750 COLUMNS. * * * * DATA RECORDS ARE LIMITED TO 32700 BYTES, * * INCLUDING DATA, LENGTHS FOR VARCHAR DATA, * * AND SPACE FOR NULL INDICATORS. * * * * TABLE OR VIEW NAMES ARE ACCEPTED, AND ONLY * * ONE NAME IS ALLOWED PER RUN. * * * * MODULE TYPE = COBOL PROGRAM * * PROCESSOR = DB2 PRECOMPILER * * ENTERPRISE COBOL FOR Z/OS OR * * IBM COBOL FOR MVS & VM * * MODULE SIZE = SEE LINK EDIT * * ATTRIBUTES = REENTRANT * * * * ENTRY POINT = DSN8BCU2 * * PURPOSE = SEE FUNCTION * * LINKAGE = * * CALL DSN8BCU2 USING WORKAREA-IND RECWORK. * * * * INPUT = SYMBOLIC LABEL/NAME = WORKAREA-IND * * DESCRIPTION = INDICATOR VARIABLE ARRAY * * 01 WORKAREA-IND. * * 02 WORKIND PIC S9(4) COMP OCCURS 750 TIMES. * * * * SYMBOLIC LABEL/NAME = RECWORK * * DESCRIPTION = WORK AREA FOR OUTPUT RECORD * * 01 RECWORK. * * 02 RECWORK-LEN PIC S9(8) COMP. * * * * SYMBOLIC LABEL/NAME = SYSIN * * DESCRIPTION = INPUT REQUESTS - TABLE OR VIEW * * *
Figure 278. Called program that does pointer manipulation (Part 1 of 10)
* * * SYMBOLIC LABEL/NAME = SYSREC01 * DESCRIPTION = UNLOADED TABLE DATA * * EXIT-NORMAL = RETURN CODE 0 NORMAL COMPLETION * EXIT-ERROR = * RETURN CODE = NONE * ABEND CODES = NONE * ERROR-MESSAGES = * DSNT490I SAMPLE COBOL DATA UNLOAD PROGRAM RELEASE 3.0* - THIS IS THE HEADER, INDICATING A NORMAL * - START FOR THIS PROGRAM. * DSNT493I SQL ERROR, SQLCODE = NNNNNNNN * - AN SQL ERROR OR WARNING WAS ENCOUNTERED * - ADDITIONAL INFORMATION FROM DSNTIAR * - FOLLOWS THIS MESSAGE. * DSNT495I SUCCESSFUL UNLOAD XXXXXXXX ROWS OF * TABLE TTTTTTTT * - THE UNLOAD WAS SUCCESSFUL. XXXXXXXX IS * - THE NUMBER OF ROWS UNLOADED. TTTTTTTT * - IS THE NAME OF THE TABLE OR VIEW FROM * - WHICH IT WAS UNLOADED. * DSNT496I UNRECOGNIZED DATA TYPE CODE OF NNNNN * - THE PREPARE RETURNED AN INVALID DATA * - TYPE CODE. NNNNN IS THE CODE, PRINTED * - IN DECIMAL. USUALLY AN ERROR IN * - THIS ROUTINE OR A NEW DATA TYPE. * DSNT497I RETURN CODE FROM MESSAGE ROUTINE DSNTIAR * - THE MESSAGE FORMATTING ROUTINE DETECTED * - AN ERROR. SEE THAT ROUTINE FOR RETURN * - CODE INFORMATION. USUALLY AN ERROR IN * - THIS ROUTINE. * DSNT498I ERROR, NO VALID COLUMNS FOUND * - THE PREPARE RETURNED DATA WHICH DID NOT * - PRODUCE A VALID OUTPUT RECORD. * - USUALLY AN ERROR IN THIS ROUTINE. * DSNT499I NO ROWS FOUND IN TABLE OR VIEW * - THE CHOSEN TABLE OR VIEWS DID NOT * - RETURN ANY ROWS. * ERROR MESSAGES FROM MODULE DSNTIAR * - WHEN AN ERROR OCCURS, THIS MODULE * - PRODUCES CORRESPONDING MESSAGES. * * EXTERNAL REFERENCES = * ROUTINES/SERVICES = * DSNTIAR - TRANSLATE SQLCA INTO MESSAGES * DATA-AREAS = NONE * CONTROL-BLOCKS = * SQLCA - SQL COMMUNICATION AREA * * TABLES = NONE * CHANGE-ACTIVITY = NONE * *
OUTPUT
Figure 278. Called program that does pointer manipulation (Part 2 of 10)
* *PSEUDOCODE* * * PROCEDURE * * EXEC SQL DECLARE DT CURSOR FOR SEL END-EXEC. * * EXEC SQL DECLARE SEL STATEMENT END-EXEC. * * INITIALIZE THE DATA, OPEN FILES. * * OBTAIN STORAGE FOR THE SQLDA AND THE DATA RECORDS. * * READ A TABLE NAME. * * OPEN SYSREC01. * * BUILD THE SQL STATEMENT TO BE EXECUTED * * EXEC SQL PREPARE SQL STATEMENT INTO SQLDA END-EXEC. * * SET UP ADDRESSES IN THE SQLDA FOR DATA. * * INITIALIZE DATA RECORD COUNTER TO 0. * * EXEC SQL OPEN DT END-EXEC. * * DO WHILE SQLCODE IS 0. * * EXEC SQL FETCH DT USING DESCRIPTOR SQLDA END-EXEC. * * ADD IN MARKERS TO DENOTE NULLS. * * WRITE THE DATA TO SYSREC01. * * INCREMENT DATA RECORD COUNTER. * * END. * * EXEC SQL CLOSE DT END-EXEC. * * INDICATE THE RESULTS OF THE UNLOAD OPERATION. * * CLOSE THE SYSIN, SYSPRINT, AND SYSREC01 FILES. * * END. * *---------------------------------------------------------------* / IDENTIFICATION DIVISION. *----------------------PROGRAM-ID. DSN8BCU2 * ENVIRONMENT DIVISION. *-------------------CONFIGURATION SECTION. INPUT-OUTPUT SECTION. FILE-CONTROL. SELECT SYSIN ASSIGN TO DA-S-SYSIN. SELECT SYSPRINT ASSIGN TO UT-S-SYSPRINT. SELECT SYSREC01 ASSIGN TO DA-S-SYSREC01. * DATA DIVISION. *------------* FILE SECTION. FD SYSIN RECORD CONTAINS 80 CHARACTERS BLOCK CONTAINS 0 RECORDS LABEL RECORDS ARE OMITTED RECORDING MODE IS F. 01 CARDREC PIC X(80). * FD SYSPRINT RECORD CONTAINS 120 CHARACTERS LABEL RECORDS ARE OMITTED DATA RECORD IS MSGREC RECORDING MODE IS F. 01 MSGREC PIC X(120).
Figure 278. Called program that does pointer manipulation (Part 3 of 10)
* FD SYSREC01 RECORD CONTAINS 5 TO 32704 CHARACTERS LABEL RECORDS ARE OMITTED DATA RECORD IS REC01 RECORDING MODE IS V. REC01. 02 REC01-LEN PIC S9(8) COMP. 02 REC01-CHAR PIC X(1) OCCURS 1 TO 32700 TIMES DEPENDING ON REC01-LEN.
01
/ WORKING-STORAGE SECTION. * ***************************************************** * STRUCTURE FOR INPUT * ***************************************************** 01 IOAREA. 02 TNAME PIC X(72). 02 FILLER PIC X(08). 01 STMTBUF. 49 STMTLEN PIC S9(4) COMP VALUE 92. 49 STMTCHAR PIC X(92). 01 STMTBLD. 02 FILLER PIC X(20) VALUE SELECT * FROM. 02 STMTTAB PIC X(72). * ***************************************************** * REPORT HEADER STRUCTURE * ***************************************************** 01 HEADER. 02 FILLER PIC X(35) VALUE DSNT490I SAMPLE COBOL DATA UNLOAD . 02 FILLER PIC X(85) VALUE PROGRAM RELEASE 3.0. 01 MSG-SQLERR. 02 FILLER PIC X(31) VALUE DSNT493I SQL ERROR, SQLCODE = . 02 MSG-MINUS PIC X(1). 02 MSG-PRINT-CODE PIC 9(8). 02 FILLER PIC X(81) VALUE . 01 UNLOADED. 02 FILLER PIC X(28) VALUE DSNT495I SUCCESSFUL UNLOAD . 02 ROWS PIC 9(8). 02 FILLER PIC X(15) VALUE ROWS OF TABLE . 02 TABLENAM PIC X(72) VALUE . 01 BADTYPE. 02 FILLER PIC X(42) VALUE DSNT496I UNRECOGNIZED DATA TYPE CODE OF . 02 TYPCOD PIC 9(8). 02 FILLER PIC X(71) VALUE . 01 MSGRETCD. 02 FILLER PIC X(42) VALUE DSNT497I RETURN CODE FROM MESSAGE ROUTINE. 02 FILLER PIC X(9) VALUE DSNTIAR . 02 RETCODE PIC 9(8). 02 FILLER PIC X(62) VALUE .
Figure 278. Called program that does pointer manipulation (Part 4 of 10)
MSGNOCOL. 02 FILLER PIC X(120) VALUE DSNT498I ERROR, NO VALID COLUMNS FOUND. 01 MSG-NOROW. 02 FILLER PIC X(120) VALUE DSNT499I NO ROWS FOUND IN TABLE OR VIEW. ***************************************************** * WORKAREAS * ***************************************************** 77 NOT-FOUND PIC S9(8) COMP VALUE +100. ***************************************************** * VARIABLES FOR ERROR-MESSAGE FORMATTING * 00 ***************************************************** 01 ERROR-MESSAGE. 02 ERROR-LEN PIC S9(4) COMP VALUE +960. 02 ERROR-TEXT PIC X(120) OCCURS 8 TIMES INDEXED BY ERROR-INDEX. 77 ERROR-TEXT-LEN PIC S9(8) COMP VALUE +120. ***************************************************** * SQL DESCRIPTOR AREA * ***************************************************** EXEC SQL INCLUDE SQLDA END-EXEC. * * DATA TYPES FOUND IN SQLTYPE, AFTER REMOVING THE NULL BIT * 77 VARCTYPE PIC S9(4) COMP VALUE +448. 77 CHARTYPE PIC S9(4) COMP VALUE +452. 77 VARLTYPE PIC S9(4) COMP VALUE +456. 77 VARGTYPE PIC S9(4) COMP VALUE +464. 77 GTYPE PIC S9(4) COMP VALUE +468. 77 LVARGTYP PIC S9(4) COMP VALUE +472. 77 FLOATYPE PIC S9(4) COMP VALUE +480. 77 DECTYPE PIC S9(4) COMP VALUE +484. 77 INTTYPE PIC S9(4) COMP VALUE +496. 77 HWTYPE PIC S9(4) COMP VALUE +500. 77 DATETYP PIC S9(4) COMP VALUE +384. 77 TIMETYP PIC S9(4) COMP VALUE +388. 77 TIMESTMP PIC S9(4) COMP VALUE +392. *
01
Figure 278. Called program that does pointer manipulation (Part 5 of 10)
***************************************************** * THE REDEFINES CLAUSES BELOW ARE FOR 31-BIT ADDRESSING. * IF YOUR COMPILER SUPPORTS ONLY 24-BIT ADDRESSING, * CHANGE THE DECLARATIONS TO THESE: * 01 RECNUM REDEFINES RECPTR PICTURE S9(8) COMPUTATIONAL. * 01 IRECNUM REDEFINES IRECPTR PICTURE S9(8) COMPUTATIONAL. ***************************************************** 01 RECPTR POINTER. 01 RECNUM REDEFINES RECPTR PICTURE S9(9) COMPUTATIONAL. 01 IRECPTR POINTER. 01 IRECNUM REDEFINES IRECPTR PICTURE S9(9) COMPUTATIONAL. 01 I PICTURE S9(4) COMPUTATIONAL. 01 J PICTURE S9(4) COMPUTATIONAL. 01 DUMMY PICTURE S9(4) COMPUTATIONAL. 01 MYTYPE PICTURE S9(4) COMPUTATIONAL. 01 COLUMN-IND PICTURE S9(4) COMPUTATIONAL. 01 COLUMN-LEN PICTURE S9(4) COMPUTATIONAL. 01 COLUMN-PREC PICTURE S9(4) COMPUTATIONAL. 01 COLUMN-SCALE PICTURE S9(4) COMPUTATIONAL. 01 INDCOUNT PIC S9(4) COMPUTATIONAL. 01 ROWCOUNT PIC S9(4) COMPUTATIONAL. 01 WORKAREA2. 02 WORKINDPTR POINTER OCCURS 750 TIMES. ***************************************************** * DECLARE CURSOR AND STATEMENT FOR DYNAMIC SQL ***************************************************** * EXEC SQL DECLARE DT CURSOR FOR SEL END-EXEC. EXEC SQL DECLARE SEL STATEMENT END-EXEC. * ***************************************************** * SQL INCLUDE FOR SQLCA * ***************************************************** EXEC SQL INCLUDE SQLCA END-EXEC. * 77 ONE PIC S9(4) COMP VALUE +1. 77 TWO PIC S9(4) COMP VALUE +2. 77 FOUR PIC S9(4) COMP VALUE +4. 77 QMARK PIC X(1) VALUE ?. * LINKAGE SECTION. 01 LINKAREA-IND. 02 IND PIC S9(4) COMP OCCURS 750 TIMES. 01 LINKAREA-REC. 02 REC1-LEN PIC S9(8) COMP. 02 REC1-CHAR PIC X(1) OCCURS 1 TO 32700 TIMES DEPENDING ON REC1-LEN. 01 LINKAREA-QMARK. 02 INDREC PIC X(1). /
Figure 278. Called program that does pointer manipulation (Part 6 of 10)
PROCEDURE DIVISION USING LINKAREA-IND LINKAREA-REC. * ***************************************************** * SQL RETURN CODE HANDLING * ***************************************************** EXEC SQL WHENEVER SQLERROR GOTO DBERROR END-EXEC. EXEC SQL WHENEVER SQLWARNING GOTO DBERROR END-EXEC. EXEC SQL WHENEVER NOT FOUND CONTINUE END-EXEC. * ***************************************************** * MAIN PROGRAM ROUTINE * ***************************************************** SET IRECPTR TO ADDRESS OF REC1-CHAR(1). * **OPEN FILES OPEN INPUT SYSIN OUTPUT SYSPRINT OUTPUT SYSREC01. * **WRITE HEADER WRITE MSGREC FROM HEADER AFTER ADVANCING 2 LINES. * **GET FIRST INPUT READ SYSIN RECORD INTO IOAREA. * **MAIN ROUTINE PERFORM PROCESS-INPUT THROUGH IND-RESULT. * PROG-END. * **CLOSE FILES CLOSE SYSIN SYSPRINT SYSREC01. GOBACK. / *************************************************************** * * * PERFORMED SECTION: * * PROCESSING FOR THE TABLE OR VIEW JUST READ * * * *************************************************************** PROCESS-INPUT. * MOVE TNAME TO STMTTAB. MOVE STMTBLD TO STMTCHAR. EXEC SQL PREPARE SEL INTO :SQLDA FROM :STMTBUF END-EXEC. *************************************************************** * * * SET UP ADDRESSES IN THE SQLDA FOR DATA. * * * *************************************************************** IF SQLD = ZERO THEN WRITE MSGREC FROM MSGNOCOL AFTER ADVANCING 2 LINES GO TO IND-RESULT. MOVE ZERO TO ROWCOUNT. MOVE ZERO TO REC1-LEN. SET RECPTR TO IRECPTR. MOVE ONE TO I. PERFORM COLADDR UNTIL I > SQLD.
Figure 278. Called program that does pointer manipulation (Part 7 of 10)
**************************************************************** * * * SET LENGTH OF OUTPUT RECORD. * * EXEC SQL OPEN DT END-EXEC. * * DO WHILE SQLCODE IS 0. * * EXEC SQL FETCH DT USING DESCRIPTOR :SQLDA END-EXEC. * * ADD IN MARKERS TO DENOTE NULLS. * * WRITE THE DATA TO SYSREC01. * * INCREMENT DATA RECORD COUNTER. * * END. * * * **************************************************************** * **OPEN CURSOR EXEC SQL OPEN DT END-EXEC. PERFORM BLANK-REC. EXEC SQL FETCH DT USING DESCRIPTOR :SQLDA END-EXEC. * **NO ROWS FOUND * **PRINT ERROR MESSAGE IF SQLCODE = NOT-FOUND WRITE MSGREC FROM MSG-NOROW AFTER ADVANCING 2 LINES ELSE * **WRITE ROW AND * **CONTINUE UNTIL * **NO MORE ROWS PERFORM WRITE-AND-FETCH UNTIL SQLCODE IS NOT EQUAL TO ZERO. * EXEC SQL WHENEVER NOT FOUND GOTO CLOSEDT END-EXEC. * CLOSEDT. EXEC SQL CLOSE DT END-EXEC. * **************************************************************** * * * INDICATE THE RESULTS OF THE UNLOAD OPERATION. * * * **************************************************************** IND-RESULT. MOVE TNAME TO TABLENAM. MOVE ROWCOUNT TO ROWS. WRITE MSGREC FROM UNLOADED AFTER ADVANCING 2 LINES. GO TO PROG-END. * WRITE-AND-FETCH. * ADD IN MARKERS TO DENOTE NULLS. MOVE ONE TO INDCOUNT. PERFORM NULLCHK UNTIL INDCOUNT = SQLD. MOVE REC1-LEN TO REC01-LEN. WRITE REC01 FROM LINKAREA-REC. ADD ONE TO ROWCOUNT. PERFORM BLANK-REC. EXEC SQL FETCH DT USING DESCRIPTOR :SQLDA END-EXEC. * NULLCHK. IF IND(INDCOUNT) < 0 THEN SET ADDRESS OF LINKAREA-QMARK TO WORKINDPTR(INDCOUNT) MOVE QMARK TO INDREC. ADD ONE TO INDCOUNT.
Figure 278. Called program that does pointer manipulation (Part 8 of 10)
***************************************************** * BLANK OUT RECORD TEXT FIRST * ***************************************************** BLANK-REC. MOVE ONE TO J. PERFORM BLANK-MORE UNTIL J > REC1-LEN. BLANK-MORE. MOVE TO REC1-CHAR(J). ADD ONE TO J. * COLADDR. SET SQLDATA(I) TO RECPTR. **************************************************************** * * DETERMINE THE LENGTH OF THIS COLUMN (COLUMN-LEN) * THIS DEPENDS ON THE DATA TYPE. MOST DATA TYPES HAVE * THE LENGTH SET, BUT VARCHAR, GRAPHIC, VARGRAPHIC, AND * DECIMAL DATA NEED TO HAVE THE BYTES CALCULATED. * THE NULL ATTRIBUTE MUST BE SEPARATED TO SIMPLIFY MATTERS. * **************************************************************** MOVE SQLLEN(I) TO COLUMN-LEN. * COLUMN-IND IS 0 FOR NO NULLS AND 1 FOR NULLS DIVIDE SQLTYPE(I) BY TWO GIVING DUMMY REMAINDER COLUMN-IND. * MYTYPE IS JUST THE SQLTYPE WITHOUT THE NULL BIT MOVE SQLTYPE(I) TO MYTYPE. SUBTRACT COLUMN-IND FROM MYTYPE. * SET THE COLUMN LENGTH, DEPENDENT ON DATA TYPE EVALUATE MYTYPE WHEN CHARTYPE CONTINUE, WHEN DATETYP CONTINUE, WHEN TIMETYP CONTINUE, WHEN TIMESTMP CONTINUE, WHEN FLOATYPE CONTINUE, WHEN VARCTYPE ADD TWO TO COLUMN-LEN, WHEN VARLTYPE ADD TWO TO COLUMN-LEN, WHEN GTYPE MULTIPLY COLUMN-LEN BY TWO GIVING COLUMN-LEN, WHEN VARGTYPE PERFORM CALC-VARG-LEN, WHEN LVARGTYP PERFORM CALC-VARG-LEN, WHEN HWTYPE MOVE TWO TO COLUMN-LEN, WHEN INTTYPE MOVE FOUR TO COLUMN-LEN, WHEN DECTYPE PERFORM CALC-DECIMAL-LEN, WHEN OTHER PERFORM UNRECOGNIZED-ERROR, END-EVALUATE. ADD COLUMN-LEN TO RECNUM. ADD COLUMN-LEN TO REC1-LEN.
Figure 278. Called program that does pointer manipulation (Part 9 of 10)
**************************************************************** * * * IF THIS COLUMN CAN BE NULL, AN INDICATOR VARIABLE IS * * NEEDED. WE ALSO RESERVE SPACE IN THE OUTPUT RECORD TO * * NOTE THAT THE VALUE IS NULL. * * * **************************************************************** MOVE ZERO TO IND(I). IF COLUMN-IND = ONE THEN SET SQLIND(I) TO ADDRESS OF IND(I) SET WORKINDPTR(I) TO RECPTR ADD ONE TO RECNUM ADD ONE TO REC1-LEN. * ADD ONE TO I. * PERFORMED PARAGRAPH TO CALCULATE COLUMN LENGTH * FOR A DECIMAL DATA TYPE COLUMN CALC-DECIMAL-LEN. DIVIDE COLUMN-LEN BY 256 GIVING COLUMN-PREC REMAINDER COLUMN-SCALE. MOVE COLUMN-PREC TO COLUMN-LEN. ADD ONE TO COLUMN-LEN. DIVIDE COLUMN-LEN BY TWO GIVING COLUMN-LEN. * PERFORMED PARAGRAPH TO CALCULATE COLUMN LENGTH * FOR A VARGRAPHIC DATA TYPE COLUMN CALC-VARG-LEN. MULTIPLY COLUMN-LEN BY TWO GIVING COLUMN-LEN. ADD TWO TO COLUMN-LEN. * PERFORMED PARAGRAPH TO NOTE AN UNRECOGNIZED * DATA TYPE COLUMN UNRECOGNIZED-ERROR. * * ERROR MESSAGE FOR UNRECOGNIZED DATA TYPE * MOVE SQLTYPE(I) TO TYPCOD. WRITE MSGREC FROM BADTYPE AFTER ADVANCING 2 LINES. GO TO IND-RESULT. * ***************************************************** * SQL ERROR OCCURRED - GET MESSAGE * ***************************************************** DBERROR. * **SQL ERROR MOVE SQLCODE TO MSG-PRINT-CODE. IF SQLCODE < 0 THEN MOVE - TO MSG-MINUS. WRITE MSGREC FROM MSG-SQLERR AFTER ADVANCING 2 LINES. CALL DSNTIAR USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN. IF RETURN-CODE = ZERO PERFORM ERROR-PRINT VARYING ERROR-INDEX FROM 1 BY 1 UNTIL ERROR-INDEX GREATER THAN 8 ELSE * **ERROR FOUND IN DSNTIAR * **PRINT ERROR MESSAGE MOVE RETURN-CODE TO RETCODE WRITE MSGREC FROM MSGRETCD AFTER ADVANCING 2 LINES. GO TO PROG-END. * ***************************************************** * PRINT MESSAGE TEXT * ***************************************************** ERROR-PRINT. WRITE MSGREC FROM ERROR-TEXT (ERROR-INDEX) AFTER ADVANCING 1 LINE.
Figure 278. Called program that does pointer manipulation (Part 10 of 10)
/**********************************************************************/ /* Descriptive name = Dynamic SQL sample using C language */ /* */ /* Function = To show examples of the use of dynamic and static */ /* SQL. */ /* */ /* Notes = This example assumes that the EMP and DEPT tables are */ /* defined. They need not be the same as the DB2 Sample */ /* tables. */ /* */ /* Module type = C program */ /* Processor = DB2 precompiler, C compiler */ /* Module size = see link edit */ /* Attributes = not reentrant or reusable */ /* */ /* Input = */ /* */ /* symbolic label/name = DEPT */ /* description = arbitrary table */ /* symbolic label/name = EMP */ /* description = arbitrary table */ /* */ /* Output = */ /* */ /* symbolic label/name = SYSPRINT */ /* description = print results via printf */ /* */ /* Exit-normal = return code 0 normal completion */ /* */ /* Exit-error = */ /* */ /* Return code = SQLCA */ /* */ /* Abend codes = none */ /* */ /* External references = none */ /* */ /* Control-blocks = */ /* SQLCA - sql communication area */ /* */
/* Logic specification: */ /* */ /* There are four SQL sections. */ /* */ /* 1) STATIC SQL 1: using static cursor with a SELECT statement. */ /* Two output host variables. */ /* 2) Dynamic SQL 2: Fixed-list SELECT, using same SELECT statement */ /* used in SQL 1 to show the difference. The prepared string */ /* :iptstr can be assigned with other dynamic-able SQL statements. */ /* 3) Dynamic SQL 3: Insert with parameter markers. */ /* Using four parameter markers which represent four input host */ /* variables within a host structure. */ /* 4) Dynamic SQL 4: EXECUTE IMMEDIATE */ /* A GRANT statement is executed immediately by passing it to DB2 */ /* via a varying string host variable. The example shows how to */ /* set up the host variable before passing it. */ /* */ /**********************************************************************/ #include "stdio.h" #include "stdefs.h" EXEC SQL INCLUDE SQLCA; EXEC SQL INCLUDE SQLDA; EXEC SQL BEGIN DECLARE SECTION; short edlevel; struct { short len; char x1[56]; } stmtbf1, stmtbf2, inpstr; struct { short len; char x1[15]; } lname; short hv1; struct { char deptno[4]; struct { short len; char x[36]; } deptname; char mgrno[7]; char admrdept[4]; } hv2; short ind[4]; EXEC SQL END DECLARE SECTION; EXEC SQL DECLARE EMP TABLE (EMPNO CHAR(6) FIRSTNAME VARCHAR(12) MIDINIT CHAR(1) LASTNAME VARCHAR(15) WORKDEPT CHAR(3) PHONENO CHAR(4) HIREDATE DECIMAL(6) JOBCODE DECIMAL(3) EDLEVEL SMALLINT SEX CHAR(1) BIRTHDATE DECIMAL(6) SALARY DECIMAL(8,2) FORFNAME VARGRAPHIC(12) FORMNAME GRAPHIC(1) FORLNAME VARGRAPHIC(15) FORADDR VARGRAPHIC(256) )
EXEC SQL DECLARE DEPT TABLE ( DEPTNO CHAR(3) , DEPTNAME VARCHAR(36) , MGRNO CHAR(6) , ADMRDEPT CHAR(3) ); main () { printf("??/n*** begin of program ***"); EXEC SQL WHENEVER SQLERROR GO TO HANDLERR; EXEC SQL WHENEVER SQLWARNING GO TO HANDWARN; EXEC SQL WHENEVER NOT FOUND GO TO NOTFOUND; /******************************************************************/ /* Assign values to host variables which will be input to DB2 */ /******************************************************************/ strcpy(hv2.deptno,"M92"); strcpy(hv2.deptname.x,"DDL"); hv2.deptname.len = strlen(hv2.deptname.x); strcpy(hv2.mgrno,"123456"); strcpy(hv2.admrdept,"abc"); /******************************************************************/ /* Static SQL 1: DECLARE CURSOR, OPEN, FETCH, CLOSE */ /* Select into :edlevel, :lname */ /******************************************************************/ printf("??/n*** begin declare ***"); EXEC SQL DECLARE C1 CURSOR FOR SELECT EDLEVEL, LASTNAME FROM EMP WHERE EMPNO = 000010; printf("??/n*** begin open ***"); EXEC SQL OPEN C1; printf("??/n*** begin fetch EXEC SQL FETCH C1 INTO :edlevel, :lname; printf("??/n*** returned values printf("??/n??/nedlevel = %d",edlevel); printf("??/nlname = %s\n",lname.x1); ***"); ***");
printf("??/n*** begin close ***"); EXEC SQL CLOSE C1; /******************************************************************/ /* Dynamic SQL 2: PREPARE, DECLARE CURSOR, OPEN, FETCH, CLOSE */ /* Select into :edlevel, :lname */ /******************************************************************/ sprintf (inpstr.x1, "SELECT EDLEVEL, LASTNAME FROM EMP WHERE EMPNO = 000010"); inpstr.len = strlen(inpstr.x1); printf("??/n*** begin prepare ***"); EXEC SQL PREPARE STAT1 FROM :inpstr; printf("??/n*** begin declare ***"); EXEC SQL DECLARE C2 CURSOR FOR STAT1; printf("??/n*** begin open ***"); EXEC SQL OPEN C2; printf("??/n*** begin fetch EXEC SQL FETCH C2 INTO :edlevel, :lname; printf("??/n*** returned values printf("??/n??/nedlevel = %d",edlevel); printf("??/nlname = %s\n",lname.x1); printf("??/n*** EXEC SQL CLOSE C2; begin close ***"); ***");
***");
/******************************************************************/ /* Dynamic SQL 3: PREPARE with parameter markers */ /* Insert into with four values. */ /******************************************************************/ sprintf (stmtbf1.x1, "INSERT INTO DEPT VALUES (?,?,?,?)"); stmtbf1.len = strlen(stmtbf1.x1); printf("??/n*** begin prepare ***"); EXEC SQL PREPARE s1 FROM :stmtbf1; printf("??/n*** begin execute ***"); EXEC SQL EXECUTE s1 USING :hv2:ind; printf("??/n*** following are expected insert results ***"); printf("??/n hv2.deptno = %s",hv2.deptno); printf("??/n hv2.deptname.len = %d",hv2.deptname.len); printf("??/n hv2.deptname.x = %s",hv2.deptname.x); printf("??/n hv2.mgrno = %s",hv2.mgrno); printf("??/n hv2.admrdept = %s",hv2.admrdept); EXEC SQL COMMIT; /******************************************************************/ /* Dynamic SQL 4: EXECUTE IMMEDIATE */ /* Grant select */ /******************************************************************/ sprintf (stmtbf2.x1, "GRANT SELECT ON EMP TO USERX"); stmtbf2.len = strlen(stmtbf2.x1); printf("??/n*** begin execute immediate ***"); EXEC SQL EXECUTE IMMEDIATE :stmtbf2; printf("??/n*** end of program ***"); goto progend; HANDWARN: HANDLERR: NOTFOUND: ; printf("??/n SQLCODE = %d",SQLCODE); printf("??/n SQLWARN0 = %c",SQLWARN0); printf("??/n SQLWARN1 = %c",SQLWARN1); printf("??/n SQLWARN2 = %c",SQLWARN2); printf("??/n SQLWARN3 = %c",SQLWARN3); printf("??/n SQLWARN4 = %c",SQLWARN4); printf("??/n SQLWARN5 = %c",SQLWARN5); printf("??/n SQLWARN6 = %c",SQLWARN6); printf("??/n SQLWARN7 = %c",SQLWARN7); progend: ; }
DRAW syntax:
%DRAW object-name (SSID=ssid TYPE=SELECT|INSERT|UPDATE|LOAD
DRAW parameters:
object-name
   The name of the table or view for which DRAW builds an SQL statement or utility control statement. The name can be a one-, two-, or three-part name. The table or view to which object-name refers must exist before DRAW can run. object-name is a required parameter.
SSID=ssid
   Specifies the name of the local DB2 subsystem. S can be used as an abbreviation for SSID.
   If you invoke DRAW from the command line of the edit session in SPUFI, SSID=ssid is an optional parameter. DRAW uses the subsystem ID from the DB2I Defaults panel.
TYPE=operation-type
   The type of statement that DRAW builds. T can be used as an abbreviation for TYPE. operation-type has one of the following values:
   SELECT  Builds a SELECT statement in which the result table contains all columns of object-name. S can be used as an abbreviation for SELECT.
   INSERT  Builds a template for an INSERT statement that inserts values into all columns of object-name. The template contains comments that indicate where the user can place column values. I can be used as an abbreviation for INSERT.
   UPDATE  Builds a template for an UPDATE statement that updates columns of object-name. The template contains comments that indicate where the user can place column values and qualify the update operation for selected rows. U can be used as an abbreviation for UPDATE.
   LOAD    Builds a template for a LOAD utility control statement for object-name. L can be used as an abbreviation for LOAD.
   TYPE=operation-type is an optional parameter. The default is TYPE=SELECT.

DRAW data sets:

Edit data set
   The data set from which you issue the DRAW command when you are in an ISPF edit session. If you issue the DRAW command from a SPUFI session, this data set is the data set that you specify in field 1 of the main SPUFI panel (DSNESP01). The output from the DRAW command goes into this data set.

DRAW return codes:

Return code   Meaning
0             Successful completion.
12            An error occurred when DRAW edited the input file.
20            One of the following errors occurred:
              v No input parameters were specified.
              v One of the input parameters was not valid.
              v An SQL error occurred when the output statement was generated.

Examples of DRAW invocation: Generate a SELECT statement for table DSN8810.EMP at the local subsystem. Use the default DB2I subsystem ID. The DRAW invocation is:
DRAW DSN8810.EMP (TYPE=SELECT
Generate a template for an INSERT statement that inserts values into table DSN8810.EMP at location SAN_JOSE. The local subsystem ID is DSN. The DRAW invocation is:
DRAW SAN_JOSE.DSN8810.EMP (TYPE=INSERT SSID=DSN
Generate a template for an UPDATE statement that updates values of table DSN8810.EMP. The local subsystem ID is DSN. The DRAW invocation is:
DRAW DSN8810.EMP (TYPE=UPDATE SSID=DSN
(The generated UPDATE template lists each column of DSN8810.EMP, with a comment that shows its data type, followed by a WHERE clause for you to complete; the Update example in Figure 280 shows the full template.)
Generate a LOAD control statement to load values into table DSN8810.EMP. The local subsystem ID is DSN. The DRAW invocation is:
DRAW DSN8810.EMP (TYPE=LOAD SSID=DSN
/* REXX ***************************************************************/ L1 = WHEREAMI() /* DRAW creates basic SQL queries by retrieving the description of a table. You must specify the name of the table or view to be queried. You can specify the type of query you want to compose. You might need to specify the name of the DB2 subsystem. >>--DRAW-----tablename-----|---------------------------|------->< |-(-|-Ssid=subsystem-name-|-| | +-Select-+ | |-Type=-|-Insert-|----| |-Update-| +--Load--+ Ssid=subsystem-name subsystem-name specified the name of a DB2 subsystem. Select Composes a basic query for selecting data from the columns of a table or view. If TYPE is not specified, SELECT is assumed. Using SELECT with the DRAW command produces a query that would retrieve all rows and all columns from the specified table. You can then modify the query as needed. A SELECT query of EMP composed by DRAW looks like this: SELECT "EMPNO" , "FIRSTNME" , "MIDINIT" , "LASTNAME" , "WORKDEPT" , "PHONENO" , "HIREDATE" , "JOB" , "EDLEVEL" , "SEX" , "BIRTHDATE" , "SALARY" , "BONUS" , "COMM" FROM DSN8810.EMP If you include a location qualifier, the query looks like this: SELECT "EMPNO" , "FIRSTNME" , "MIDINIT" , "LASTNAME" , "WORKDEPT" , "PHONENO" , "HIREDATE" , "JOB" , "EDLEVEL" , "SEX" , "BIRTHDATE" , "SALARY" , "BONUS" , "COMM" FROM STLEC1.DSN8810.EMP Figure 280. REXX sample program DRAW (Part 1 of 10)
To use this SELECT query, type the other clauses you need. If you are selecting from more than one table, use a DRAW command for each table name you want represented. Insert Composes a basic query to insert data into the columns of a table or view. The following example shows an INSERT query of EMP that DRAW composed: INSERT INTO DSN8810.EMP ( "EMPNO" , "FIRSTNME" , "MIDINIT" , "LASTNAME" , "WORKDEPT" , "PHONENO" , "HIREDATE" , "JOB" , "EDLEVEL" , "SEX" , "BIRTHDATE" , "SALARY" , "BONUS" , "COMM" ) VALUES ( -- ENTER VALUES BELOW COLUMN NAME DATA TYPE , -- EMPNO CHAR(6) NOT NULL , -- FIRSTNME VARCHAR(12) NOT NULL , -- MIDINIT CHAR(1) NOT NULL , -- LASTNAME VARCHAR(15) NOT NULL , -- WORKDEPT CHAR(3) , -- PHONENO CHAR(4) , -- HIREDATE DATE , -- JOB CHAR(8) , -- EDLEVEL SMALLINT , -- SEX CHAR(1) , -- BIRTHDATE DATE , -- SALARY DECIMAL(9,2) , -- BONUS DECIMAL(9,2) ) -- COMM DECIMAL(9,2) To insert values into EMP, type values to the left of the column names. See DB2 SQL Reference for more information on INSERT queries. Update Composes a basic query to change the data in a table or view. The following example shows an UPDATE query of EMP composed by DRAW: Figure 280. REXX sample program DRAW (Part 2 of 10)
UPDATE DSN8810.EMP SET -- COLUMN NAME ENTER VALUES BELOW DATA TYPE "EMPNO"= -- CHAR(6) NOT NULL , "FIRSTNME"= -- VARCHAR(12) NOT NULL , "MIDINIT"= -- CHAR(1) NOT NULL , "LASTNAME"= -- VARCHAR(15) NOT NULL , "WORKDEPT"= -- CHAR(3) , "PHONENO"= -- CHAR(4) , "HIREDATE"= -- DATE , "JOB"= -- CHAR(8) , "EDLEVEL"= -- SMALLINT , "SEX"= -- CHAR(1) , "BIRTHDATE"= -- DATE , "SALARY"= -- DECIMAL(9,2) , "BONUS"= -- DECIMAL(9,2) , "COMM"= -- DECIMAL(9,2) WHERE To use this UPDATE query, type the changes you want to make to the right of the column names, and delete the lines you dont need. Be sure to complete the WHERE clause. For information on writing queries to update data, refer to DB2 SQL Reference. Load Composes a load statement to load the data in a table. The following example shows a LOAD statement of EMP composed by DRAW: LOAD DATA INDDN SYSREC INTO TABLE DSN8810 .EMP ( "EMPNO" POSITION( 1) CHAR(6) , "FIRSTNME" POSITION( 8) VARCHAR , "MIDINIT" POSITION( 21) CHAR(1) , "LASTNAME" POSITION( 23) VARCHAR , "WORKDEPT" POSITION( 39) CHAR(3) NULLIF( 39)=? , "PHONENO" POSITION( 43) CHAR(4) NULLIF( 43)=? , "HIREDATE" POSITION( 48) DATE EXTERNAL NULLIF( 48)=? , "JOB" POSITION( 59) CHAR(8) NULLIF( 59)=? , "EDLEVEL" POSITION( 68) SMALLINT NULLIF( 68)=? , "SEX" POSITION( 71) CHAR(1) NULLIF( 71)=? , "BIRTHDATE" POSITION( 73) DATE EXTERNAL NULLIF( 73)=? , "SALARY" POSITION( 84) DECIMAL EXTERNAL(9,2) NULLIF( 84)=? , "BONUS" POSITION( 90) DECIMAL EXTERNAL(9,2) NULLIF( 90)=? , "COMM" POSITION( 96) DECIMAL EXTERNAL(9,2) NULLIF( 96)=? ) Figure 280. REXX sample program DRAW (Part 3 of 10)
To use this LOAD statement, type the changes you want to make, and delete the lines you dont need. For information on writing queries to update data, refer to DB2 Utility Guide and Reference. */ L2 = WHEREAMI() /**********************************************************************/ /* TRACE ?R */ /**********************************************************************/ Address ISPEXEC "ISREDIT MACRO (ARGS) NOPROCESS" If ARGS = "" Then Do Do I = L1+2 To L2-2;Say SourceLine(I);End Exit (20) End Parse Upper Var Args Table "(" Parms Parms = Translate(Parms," ",",") Type = "SELECT" /* Default */ SSID = "" /* Default */ "VGET (DSNEOV01)" If RC = 0 Then SSID = DSNEOV01 If (Parms <> "") Then Do Until(Parms = "") Parse Var Parms Var "=" Value Parms If Var = "T" | Var = "TYPE" Then Type = Value Else If Var = "S" | Var = "SSID" Then SSID = Value Else Exit (20) End "CONTROL ERRORS RETURN" "ISREDIT (LEFTBND,RIGHTBND) = BOUNDS" "ISREDIT (LRECL) = DATA_WIDTH" /*LRECL*/ BndSize = RightBnd - LeftBnd + 1 If BndSize > 72 Then BndSize = 72 "ISREDIT PROCESS DEST" Select When rc = 0 Then ISREDIT (ZDEST) = LINENUM .ZDEST When rc <= 8 Then /* No A or B entered */ Do zedsmsg = Enter "A"/"B" line cmd zedlmsg = DRAW requires an "A" or "B" line command SETMSG MSG(ISRZ001) Exit 12 End When rc < 20 Then /* Conflicting line commands - edit sets message */ Exit 12 When rc = 20 Then zdest = 0 Otherwise Exit 12 End Figure 280. REXX sample program DRAW (Part 4 of 10)
SQLTYPE. = "UNKNOWN TYPE" VCHTYPE = 448; SQLTYPES.VCHTYPE = VARCHAR CHTYPE = 452; SQLTYPES.CHTYPE = CHAR LVCHTYPE = 456; SQLTYPES.LVCHTYPE = VARCHAR VGRTYP = 464; SQLTYPES.VGRTYP = VARGRAPHIC GRTYP = 468; SQLTYPES.GRTYP = GRAPHIC LVGRTYP = 472; SQLTYPES.LVGRTYP = VARGRAPHIC FLOTYPE = 480; SQLTYPES.FLOTYPE = FLOAT DCTYPE = 484; SQLTYPES.DCTYPE = DECIMAL INTYPE = 496; SQLTYPES.INTYPE = INTEGER SMTYPE = 500; SQLTYPES.SMTYPE = SMALLINT DATYPE = 384; SQLTYPES.DATYPE = DATE TITYPE = 388; SQLTYPES.TITYPE = TIME TSTYPE = 392; SQLTYPES.TSTYPE = TIMESTAMP Address TSO "SUBCOM DSNREXX" /* HOST CMD ENV AVAILABLE? */ IF RC THEN /* NO, LETS MAKE ONE */ S_RC = RXSUBCOM(ADD,DSNREXX,DSNREXX) /* ADD HOST CMD ENV */ Address DSNREXX "CONNECT" SSID If SQLCODE ^= 0 Then Call SQLCA Address DSNREXX "EXECSQL DESCRIBE TABLE :TABLE INTO :SQLDA" If SQLCODE ^= 0 Then Call SQLCA Address DSNREXX "EXECSQL COMMIT" Address DSNREXX "DISCONNECT" If SQLCODE ^= 0 Then Call SQLCA Select When (Left(Type,1) = "S") Then Call DrawSelect When (Left(Type,1) = "I") Then Call DrawInsert When (Left(Type,1) = "U") Then Call DrawUpdate When (Left(Type,1) = "L") Then Call DrawLoad Otherwise EXIT (20) End Do I = LINE.0 To 1 By -1 LINE = COPIES(" ",LEFTBND-1)||LINE.I ISREDIT LINE_AFTER zdest = DATALINE (Line) End line1 = zdest + 1 ISREDIT CURSOR = line1 0 Exit Figure 280. REXX sample program DRAW (Part 5 of 10)
/**********************************************************************/ WHEREAMI:; RETURN SIGL /**********************************************************************/ /* Draw SELECT */ /**********************************************************************/ DrawSelect: Line.0 = 0 Line = "SELECT" Do I = 1 To SQLDA.SQLD If I > 1 Then Line = Line , ColName = "SQLDA.I.SQLNAME" Null = SQLDA.I.SQLTYPE//2 If Length(Line)+Length(ColName)+LENGTH(" ,") > BndSize THEN Do L = Line.0 + 1; Line.0 = L Line.L = Line Line = " " End Line = Line ColName End I If Line ^= "" Then Do L = Line.0 + 1; Line.0 = L Line.L = Line Line = " " End L = Line.0 + 1; Line.0 = L Line.L = "FROM" TABLE Return /**********************************************************************/ /* Draw INSERT */ /**********************************************************************/ DrawInsert: Line.0 = 0 Line = "INSERT INTO" TABLE "(" Do I = 1 To SQLDA.SQLD If I > 1 Then Line = Line , ColName = "SQLDA.I.SQLNAME" If Length(Line)+Length(ColName) > BndSize THEN Do L = Line.0 + 1; Line.0 = L Line.L = Line Line = " " End Line = Line ColName If I = SQLDA.SQLD Then Line = Line ) End I If Line ^= "" Then Do L = Line.0 + 1; Line.0 = L Line.L = Line Line = " " End Figure 280. REXX sample program DRAW (Part 6 of 10)
L = Line.0 + 1; Line.0 = L Line.L = " VALUES (" L = Line.0 + 1; Line.0 = L Line.L = , "-- ENTER VALUES BELOW COLUMN NAME DATA TYPE" Do I = 1 To SQLDA.SQLD If SQLDA.SQLD > 1 & I < SQLDA.SQLD Then Line = " , --" Else Line = " ) --" Line = Line Left(SQLDA.I.SQLNAME,18) Type = SQLDA.I.SQLTYPE Null = Type//2 If Null Then Type = Type - 1 Len = SQLDA.I.SQLLEN Prcsn = SQLDA.I.SQLLEN.SQLPRECISION Scale = SQLDA.I.SQLLEN.SQLSCALE Select When (Type = CHTYPE , |Type = VCHTYPE , |Type = LVCHTYPE , |Type = GRTYP , |Type = VGRTYP , |Type = LVGRTYP ) THEN Type = SQLTYPES.Type"("STRIP(LEN)")" When (Type = FLOTYPE ) THEN Type = SQLTYPES.Type"("STRIP((LEN*4)-11) ")" When (Type = DCTYPE ) THEN Type = SQLTYPES.Type"("STRIP(PRCSN)","STRIP(SCALE)")" Otherwise Type = SQLTYPES.Type End Line = Line Type If Null = 0 Then Line = Line "NOT NULL" L = Line.0 + 1; Line.0 = L Line.L = Line End I Return Figure 280. REXX sample program DRAW (Part 7 of 10)
/**********************************************************************/ /* Draw UPDATE */ /**********************************************************************/ DrawUpdate: Line.0 = 1 Line.1 = "UPDATE" TABLE "SET" L = Line.0 + 1; Line.0 = L Line.L = , "-- COLUMN NAME ENTER VALUES BELOW DATA TYPE" Do I = 1 To SQLDA.SQLD If I = 1 Then Line = " " Else Line = " ," Line = Line Left("SQLDA.I.SQLNAME"=,21) Line = Line Left(" ",20) Type = SQLDA.I.SQLTYPE Null = Type//2 If Null Then Type = Type - 1 Len = SQLDA.I.SQLLEN Prcsn = SQLDA.I.SQLLEN.SQLPRECISION Scale = SQLDA.I.SQLLEN.SQLSCALE Select When (Type = CHTYPE , |Type = VCHTYPE , |Type = LVCHTYPE , |Type = GRTYP , |Type = VGRTYP , |Type = LVGRTYP ) THEN Type = SQLTYPES.Type"("STRIP(LEN)")" When (Type = FLOTYPE ) THEN Type = SQLTYPES.Type"("STRIP((LEN*4)-11) ")" When (Type = DCTYPE ) THEN Type = SQLTYPES.Type"("STRIP(PRCSN)","STRIP(SCALE)")" Otherwise Type = SQLTYPES.Type End Line = Line "--" Type If Null = 0 Then Line = Line "NOT NULL" L = Line.0 + 1; Line.0 = L Line.L = Line End I L = Line.0 + 1; Line.0 = L Line.L = "WHERE" Return Figure 280. REXX sample program DRAW (Part 8 of 10)
/**********************************************************************/ /* Draw LOAD */ /**********************************************************************/ DrawLoad: Line.0 = 1 Line.1 = "LOAD DATA INDDN SYSREC INTO TABLE" TABLE Position = 1 Do I = 1 To SQLDA.SQLD If I = 1 Then Line = " (" Else Line = " ," Line = Line Left("SQLDA.I.SQLNAME",20) Line = Line "POSITION("RIGHT(POSITION,5)")" Type = SQLDA.I.SQLTYPE Null = Type//2 If Null Then Type = Type - 1 Len = SQLDA.I.SQLLEN Prcsn = SQLDA.I.SQLLEN.SQLPRECISION Scale = SQLDA.I.SQLLEN.SQLSCALE Select When (Type = CHTYPE , |Type = GRTYP ) THEN Type = SQLTYPES.Type"("STRIP(LEN)")" When (Type = FLOTYPE ) THEN Type = SQLTYPES.Type"("STRIP((LEN*4)-11) ")" When (Type = DCTYPE ) THEN Do Type = SQLTYPES.Type "EXTERNAL" Type = Type"("STRIP(PRCSN)","STRIP(SCALE)")" Len = (PRCSN+2)%2 End When (Type = DATYPE , |Type = TITYPE , |Type = TSTYPE ) THEN Type = SQLTYPES.Type "EXTERNAL" Otherwise Type = SQLTYPES.Type End If (Type = GRTYP , |Type = VGRTYP , |Type = LVGRTYP ) THEN Len = Len * 2 If (Type = VCHTYPE , |Type = LVCHTYPE , |Type = VGRTYP , |Type = LVGRTYP ) THEN Len = Len + 2 Line = Line Type L = Line.0 + 1; Line.0 = L Figure 280. REXX sample program DRAW (Part 9 of 10)
Line.L = Line If Null = 1 Then Do Line = " " Line = Line Left(,20) Line = Line " NULLIF("RIGHT(POSITION,5)")=?" L = Line.0 + 1; Line.0 = L Line.L = Line End Position = Position + Len + 1 End I L = Line.0 + 1; Line.0 = L Line.L = " )" Return /**********************************************************************/ /* Display SQLCA */ /**********************************************************************/ SQLCA: "ISREDIT LINE_AFTER "zdest" = MSGLINE SQLSTATE="SQLSTATE"" "ISREDIT LINE_AFTER "zdest" = MSGLINE SQLWARN ="SQLWARN.0",", || SQLWARN.1",", || SQLWARN.2",", || SQLWARN.3",", || SQLWARN.4",", || SQLWARN.5",", || SQLWARN.6",", || SQLWARN.7",", || SQLWARN.8",", || SQLWARN.9",", || SQLWARN.10"" "ISREDIT LINE_AFTER "zdest" = MSGLINE SQLERRD ="SQLERRD.1",", || SQLERRD.2",", || SQLERRD.3",", || SQLERRD.4",", || SQLERRD.5",", || SQLERRD.6"" "ISREDIT LINE_AFTER "zdest" = MSGLINE SQLERRP ="SQLERRP"" "ISREDIT LINE_AFTER "zdest" = MSGLINE SQLERRMC ="SQLERRMC"" "ISREDIT LINE_AFTER "zdest" = MSGLINE SQLCODE ="SQLCODE"" Exit 20 Figure 280. REXX sample program DRAW (Part 10 of 10)
Figure 281. Sample COBOL two-phase commit application for DRDA access (Part 1 of 8)
* PSEUDOCODE * * MAINLINE. * Perform CONNECT-TO-SITE-1 to establish * a connection to the local connection. * If the previous operation was successful Then * Do. * | Perform PROCESS-CURSOR-SITE-1 to obtain the * | information about an employee that is * | transferring to another location. * | If the information about the employee was obtained * | successfully Then * | Do. * | | Perform UPDATE-ADDRESS to update the information * | | to contain current information about the * | | employee. * | | Perform CONNECT-TO-SITE-2 to establish * | | a connection to the site where the employee is * | | transferring to. * | | If the connection is established successfully * | | Then * | | Do. * | | | Perform PROCESS-SITE-2 to insert the * | | | employee information at the location * | | | where the employee is transferring to. * | | End if the connection was established * | | successfully. * | End if the employee information was obtained * | successfully. * End if the previous operation was successful. * Perform COMMIT-WORK to COMMIT the changes made to STLEC1 * and STLEC2. * * PROG-END. * Close the printer. * Return. * * CONNECT-TO-SITE-1. * Provide a text description of the following step. * Establish a connection to the location where the * employee is transferring from. * Print the SQLCA out. * * PROCESS-CURSOR-SITE-1. * Provide a text description of the following step. * Open a cursor that will be used to retrieve information * about the transferring employee from this site. * Print the SQLCA out. * If the cursor was opened successfully Then * Do. * | Perform FETCH-DELETE-SITE-1 to retrieve and * | delete the information about the transferring * | employee from this site. * | Perform CLOSE-CURSOR-SITE-1 to close the cursor. * End if the cursor was opened successfully. *
Figure 281. Sample COBOL two-phase commit application for DRDA access (Part 2 of 8)
* FETCH-DELETE-SITE-1. * * Provide a text description of the following step. * * Fetch information about the transferring employee. * * Print the SQLCA out. * * If the information was retrieved successfully Then * * Do. * * | Perform DELETE-SITE-1 to delete the employee * * | at this site. * * End if the information was retrieved successfully. * * * * DELETE-SITE-1. * * Provide a text description of the following step. * * Delete the information about the transferring employee * * from this site. * * Print the SQLCA out. * * * * CLOSE-CURSOR-SITE-1. * * Provide a text description of the following step. * * Close the cursor used to retrieve information about * * the transferring employee. * * Print the SQLCA out. * * * * UPDATE-ADDRESS. * * Update the address of the employee. * * Update the city of the employee. * * Update the location of the employee. * * * * CONNECT-TO-SITE-2. * * Provide a text description of the following step. * * Establish a connection to the location where the * * employee is transferring to. * * Print the SQLCA out. * * * * PROCESS-SITE-2. * * Provide a text description of the following step. * * Insert the employee information at the location where * * the employee is being transferred to. * * Print the SQLCA out. * * * * COMMIT-WORK. * * COMMIT all the changes made to STLEC1 and STLEC2. * * * ***************************************************************** ENVIRONMENT DIVISION. INPUT-OUTPUT SECTION. FILE-CONTROL. SELECT PRINTER, ASSIGN TO S-OUT1. DATA DIVISION. FILE SECTION. FD PRINTER RECORD CONTAINS 120 CHARACTERS DATA RECORD IS PRT-TC-RESULTS LABEL RECORD IS OMITTED. 01 PRT-TC-RESULTS. 03 PRT-BLANK PIC X(120).
Figure 281. Sample COBOL two-phase commit application for DRDA access (Part 3 of 8)
WORKING-STORAGE SECTION. ***************************************************************** * Variable declarations * ***************************************************************** 01 H-EMPTBL. 05 H-EMPNO PIC X(6). 05 H-NAME. 49 H-NAME-LN PIC S9(4) COMP-4. 49 H-NAME-DA PIC X(32). 05 H-ADDRESS. 49 H-ADDRESS-LN PIC S9(4) COMP-4. 49 H-ADDRESS-DA PIC X(36). 05 H-CITY. 49 H-CITY-LN PIC S9(4) COMP-4. 49 H-CITY-DA PIC X(36). 05 H-EMPLOC PIC X(4). 05 H-SSNO PIC X(11). 05 H-BORN PIC X(10). 05 H-SEX PIC X(1). 05 H-HIRED PIC X(10). 05 H-DEPTNO PIC X(3). 05 H-JOBCODE PIC S9(3)V COMP-3. 05 H-SRATE PIC S9(5) COMP. 05 H-EDUC PIC S9(5) COMP. 05 H-SAL PIC S9(6)V9(2) COMP-3. 05 H-VALIDCHK PIC S9(6)V COMP-3. 01 H-EMPTBL-IND-TABLE. 02 H-EMPTBL-IND PIC S9(4) COMP OCCURS 15 TIMES.
***************************************************************** * Includes for the variables used in the COBOL standard * * language procedures and the SQLCA. * ***************************************************************** EXEC SQL INCLUDE COBSVAR END-EXEC. EXEC SQL INCLUDE SQLCA END-EXEC. ***************************************************************** * Declaration for the table that contains employee information * ***************************************************************** EXEC SQL DECLARE SYSADM.EMP TABLE (EMPNO CHAR(6) NOT NULL, NAME VARCHAR(32), ADDRESS VARCHAR(36) , CITY VARCHAR(36) , EMPLOC CHAR(4) NOT NULL, SSNO CHAR(11), BORN DATE, SEX CHAR(1), HIRED CHAR(10), DEPTNO CHAR(3) NOT NULL, JOBCODE DECIMAL(3), SRATE SMALLINT, EDUC SMALLINT,
Figure 281. Sample COBOL two-phase commit application for DRDA access (Part 4 of 8)
SAL DECIMAL(8,2) NOT NULL, VALCHK DECIMAL(6)) END-EXEC.
***************************************************************** * Constants * *****************************************************************
77 SITE-1 PIC X(16) VALUE STLEC1.
77 SITE-2 PIC X(16) VALUE STLEC2.
77 TEMP-EMPNO PIC X(6) VALUE 080000.
77 TEMP-ADDRESS-LN PIC 99 VALUE 15.
77 TEMP-CITY-LN PIC 99 VALUE 18.
***************************************************************** * Declaration of the cursor that will be used to retrieve * * information about a transferring employee * ***************************************************************** EXEC SQL DECLARE C1 CURSOR FOR SELECT EMPNO, NAME, ADDRESS, CITY, EMPLOC, SSNO, BORN, SEX, HIRED, DEPTNO, JOBCODE, SRATE, EDUC, SAL, VALCHK FROM SYSADM.EMP WHERE EMPNO = :TEMP-EMPNO END-EXEC. PROCEDURE DIVISION. A101-HOUSE-KEEPING. OPEN OUTPUT PRINTER. ***************************************************************** * An employee is transferring from location STLEC1 to STLEC2. * * Retrieve information about the employee from STLEC1, delete * * the employee from STLEC1 and insert the employee at STLEC2 * * using the information obtained from STLEC1. * ***************************************************************** MAINLINE. PERFORM CONNECT-TO-SITE-1 IF SQLCODE IS EQUAL TO 0 PERFORM PROCESS-CURSOR-SITE-1 IF SQLCODE IS EQUAL TO 0 PERFORM UPDATE-ADDRESS PERFORM CONNECT-TO-SITE-2 IF SQLCODE IS EQUAL TO 0 PERFORM PROCESS-SITE-2. PERFORM COMMIT-WORK.
Figure 281. Sample COBOL two-phase commit application for DRDA access (Part 5 of 8)
PROG-END. CLOSE PRINTER. GOBACK. ***************************************************************** * Establish a connection to STLEC1 * ***************************************************************** CONNECT-TO-SITE-1. MOVE CONNECT TO STLEC1 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL CONNECT TO :SITE-1 END-EXEC. PERFORM PTSQLCA. ***************************************************************** * When a connection has been established successfully at STLEC1,* * open the cursor that will be used to retrieve information * * about the transferring employee. * ***************************************************************** PROCESS-CURSOR-SITE-1. MOVE OPEN CURSOR C1 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL OPEN C1 END-EXEC. PERFORM PTSQLCA. IF SQLCODE IS EQUAL TO ZERO PERFORM FETCH-DELETE-SITE-1 PERFORM CLOSE-CURSOR-SITE-1. ***************************************************************** * Retrieve information about the transferring employee. * * Provided that the employee exists, perform DELETE-SITE-1 to * * delete the employee from STLEC1. * ***************************************************************** FETCH-DELETE-SITE-1. MOVE FETCH C1 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL FETCH C1 INTO :H-EMPTBL:H-EMPTBL-IND END-EXEC. PERFORM PTSQLCA. IF SQLCODE IS EQUAL TO ZERO PERFORM DELETE-SITE-1.
Figure 281. Sample COBOL two-phase commit application for DRDA access (Part 6 of 8)
***************************************************************** * Delete the employee from STLEC1. * ***************************************************************** DELETE-SITE-1. MOVE DELETE EMPLOYEE TO STNAME WRITE PRT-TC-RESULTS FROM STNAME MOVE DELETE EMPLOYEE TO STNAME EXEC SQL DELETE FROM SYSADM.EMP WHERE EMPNO = :TEMP-EMPNO END-EXEC. PERFORM PTSQLCA.
***************************************************************** * Close the cursor used to retrieve information about the * * transferring employee. * ***************************************************************** CLOSE-CURSOR-SITE-1. MOVE CLOSE CURSOR C1 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL CLOSE C1 END-EXEC. PERFORM PTSQLCA.
***************************************************************** * Update certain employee information in order to make it * * current. * ***************************************************************** UPDATE-ADDRESS.
MOVE TEMP-ADDRESS-LN TO H-ADDRESS-LN.
MOVE 1500 NEW STREET TO H-ADDRESS-DA.
MOVE TEMP-CITY-LN TO H-CITY-LN.
MOVE NEW CITY, CA 97804 TO H-CITY-DA.
MOVE SJCA TO H-EMPLOC.
***************************************************************** * Establish a connection to STLEC2 * ***************************************************************** CONNECT-TO-SITE-2. MOVE CONNECT TO STLEC2 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL CONNECT TO :SITE-2 END-EXEC. PERFORM PTSQLCA.
Figure 281. Sample COBOL two-phase commit application for DRDA access (Part 7 of 8)
***************************************************************** * Using the employee information that was retrieved from STLEC1 * * and updated previously, insert the employee at STLEC2. ***************************************************************** PROCESS-SITE-2. MOVE INSERT EMPLOYEE TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL INSERT INTO SYSADM.EMP VALUES (:H-EMPNO, :H-NAME, :H-ADDRESS, :H-CITY, :H-EMPLOC, :H-SSNO, :H-BORN, :H-SEX, :H-HIRED, :H-DEPTNO, :H-JOBCODE, :H-SRATE, :H-EDUC, :H-SAL, :H-VALIDCHK) END-EXEC. PERFORM PTSQLCA. ***************************************************************** * COMMIT any changes that were made at STLEC1 and STLEC2. * ***************************************************************** COMMIT-WORK. MOVE COMMIT WORK TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL COMMIT END-EXEC. PERFORM PTSQLCA. ***************************************************************** * Include COBOL standard language procedures * ***************************************************************** INCLUDE-SUBS. EXEC SQL INCLUDE COBSSUB END-EXEC.
Figure 281. Sample COBOL two-phase commit application for DRDA access (Part 8 of 8)
Figure 282. Sample COBOL two-phase commit application for DB2 private protocol access (Part 1 of 7)
* * PSEUDOCODE * * MAINLINE. * Perform PROCESS-CURSOR-SITE-1 to obtain the information * about an employee that is transferring to another * location. * If the information about the employee was obtained * successfully Then * Do. * | Perform UPDATE-ADDRESS to update the information to * | contain current information about the employee. * | Perform PROCESS-SITE-2 to insert the employee * | information at the location where the employee is * | transferring to. * End if the employee information was obtained * successfully. * Perform COMMIT-WORK to COMMIT the changes made to STLEC1 * and STLEC2. * * PROG-END. * Close the printer. * Return. * * PROCESS-CURSOR-SITE-1. * Provide a text description of the following step. * Open a cursor that will be used to retrieve information * about the transferring employee from this site. * Print the SQLCA out. * If the cursor was opened successfully Then * Do. * | Perform FETCH-DELETE-SITE-1 to retrieve and * | delete the information about the transferring * | employee from this site. * | Perform CLOSE-CURSOR-SITE-1 to close the cursor. * End if the cursor was opened successfully. * * FETCH-DELETE-SITE-1. * Provide a text description of the following step. * Fetch information about the transferring employee. * Print the SQLCA out. * If the information was retrieved successfully Then * Do. * | Perform DELETE-SITE-1 to delete the employee * | at this site. * End if the information was retrieved successfully. * * DELETE-SITE-1. * Provide a text description of the following step. * Delete the information about the transferring employee * from this site. * Print the SQLCA out. * * CLOSE-CURSOR-SITE-1. * Provide a text description of the following step. * Close the cursor used to retrieve information about * the transferring employee. * Print the SQLCA out. *
Figure 282. Sample COBOL two-phase commit application for DB2 private protocol access (Part 2 of 7)
* UPDATE-ADDRESS. * * Update the address of the employee. * * Update the city of the employee. * * Update the location of the employee. * * * * PROCESS-SITE-2. * * Provide a text description of the following step. * * Insert the employee information at the location where * * the employee is being transferred to. * * Print the SQLCA out. * * * * COMMIT-WORK. * * COMMIT all the changes made to STLEC1 and STLEC2. * * * ***************************************************************** ENVIRONMENT DIVISION. INPUT-OUTPUT SECTION. FILE-CONTROL. SELECT PRINTER, ASSIGN TO S-OUT1. DATA DIVISION. FILE SECTION. FD PRINTER RECORD CONTAINS 120 CHARACTERS DATA RECORD IS PRT-TC-RESULTS LABEL RECORD IS OMITTED. 01 PRT-TC-RESULTS. 03 PRT-BLANK PIC X(120). WORKING-STORAGE SECTION. ***************************************************************** * Variable declarations * ***************************************************************** 01 H-EMPTBL. 05 H-EMPNO PIC X(6). 05 H-NAME. 49 H-NAME-LN PIC S9(4) COMP-4. 49 H-NAME-DA PIC X(32). 05 H-ADDRESS. 49 H-ADDRESS-LN PIC S9(4) COMP-4. 49 H-ADDRESS-DA PIC X(36). 05 H-CITY. 49 H-CITY-LN PIC S9(4) COMP-4. 49 H-CITY-DA PIC X(36). 05 H-EMPLOC PIC X(4). 05 H-SSNO PIC X(11). 05 H-BORN PIC X(10). 05 H-SEX PIC X(1). 05 H-HIRED PIC X(10). 05 H-DEPTNO PIC X(3). 05 H-JOBCODE PIC S9(3)V COMP-3. 05 H-SRATE PIC S9(5) COMP. 05 H-EDUC PIC S9(5) COMP. 05 H-SAL PIC S9(6)V9(2) COMP-3. 05 H-VALIDCHK PIC S9(6)V COMP-3.
Figure 282. Sample COBOL two-phase commit application for DB2 private protocol access (Part 3 of 7)
01 H-EMPTBL-IND-TABLE. 02 H-EMPTBL-IND PIC S9(4) COMP OCCURS 15 TIMES.
***************************************************************** * Includes for the variables used in the COBOL standard * * language procedures and the SQLCA. * ***************************************************************** EXEC SQL INCLUDE COBSVAR END-EXEC. EXEC SQL INCLUDE SQLCA END-EXEC. ***************************************************************** * Declaration for the table that contains employee information * ***************************************************************** EXEC SQL DECLARE SYSADM.EMP TABLE (EMPNO CHAR(6) NOT NULL, NAME VARCHAR(32), ADDRESS VARCHAR(36) , CITY VARCHAR(36) , EMPLOC CHAR(4) NOT NULL, SSNO CHAR(11), BORN DATE, SEX CHAR(1), HIRED CHAR(10), DEPTNO CHAR(3) NOT NULL, JOBCODE DECIMAL(3), SRATE SMALLINT, EDUC SMALLINT, SAL DECIMAL(8,2) NOT NULL, VALCHK DECIMAL(6)) END-EXEC. ***************************************************************** * Constants * ***************************************************************** 77 TEMP-EMPNO 77 TEMP-ADDRESS-LN 77 TEMP-CITY-LN PIC X(6) VALUE 080000. PIC 99 VALUE 15. PIC 99 VALUE 18.
***************************************************************** * Declaration of the cursor that will be used to retrieve * * information about a transferring employee * ***************************************************************** EXEC SQL DECLARE C1 CURSOR FOR SELECT EMPNO, NAME, ADDRESS, CITY, EMPLOC, SSNO, BORN, SEX, HIRED, DEPTNO, JOBCODE, SRATE, EDUC, SAL, VALCHK FROM STLEC1.SYSADM.EMP WHERE EMPNO = :TEMP-EMPNO END-EXEC.
Figure 282. Sample COBOL two-phase commit application for DB2 private protocol access (Part 4 of 7)
PROCEDURE DIVISION. A101-HOUSE-KEEPING. OPEN OUTPUT PRINTER. ***************************************************************** * An employee is transferring from location STLEC1 to STLEC2. * * Retrieve information about the employee from STLEC1, delete * * the employee from STLEC1 and insert the employee at STLEC2 * * using the information obtained from STLEC1. * ***************************************************************** MAINLINE. PERFORM PROCESS-CURSOR-SITE-1 IF SQLCODE IS EQUAL TO 0 PERFORM UPDATE-ADDRESS PERFORM PROCESS-SITE-2. PERFORM COMMIT-WORK. PROG-END. CLOSE PRINTER. GOBACK. ***************************************************************** * Open the cursor that will be used to retrieve information * * about the transferring employee. * ***************************************************************** PROCESS-CURSOR-SITE-1. MOVE OPEN CURSOR C1 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL OPEN C1 END-EXEC. PERFORM PTSQLCA. IF SQLCODE IS EQUAL TO ZERO PERFORM FETCH-DELETE-SITE-1 PERFORM CLOSE-CURSOR-SITE-1. ***************************************************************** * Retrieve information about the transferring employee. * * Provided that the employee exists, perform DELETE-SITE-1 to * * delete the employee from STLEC1. * ***************************************************************** FETCH-DELETE-SITE-1. MOVE FETCH C1 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL FETCH C1 INTO :H-EMPTBL:H-EMPTBL-IND END-EXEC.
Figure 282. Sample COBOL two-phase commit application for DB2 private protocol access (Part 5 of 7)
PERFORM PTSQLCA. IF SQLCODE IS EQUAL TO ZERO PERFORM DELETE-SITE-1.
***************************************************************** * Delete the employee from STLEC1. * ***************************************************************** DELETE-SITE-1. MOVE DELETE EMPLOYEE TO STNAME WRITE PRT-TC-RESULTS FROM STNAME MOVE DELETE EMPLOYEE TO STNAME EXEC SQL DELETE FROM STLEC1.SYSADM.EMP WHERE EMPNO = :TEMP-EMPNO END-EXEC. PERFORM PTSQLCA.
***************************************************************** * Close the cursor used to retrieve information about the * * transferring employee. * ***************************************************************** CLOSE-CURSOR-SITE-1. MOVE CLOSE CURSOR C1 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL CLOSE C1 END-EXEC. PERFORM PTSQLCA.
***************************************************************** * Update certain employee information in order to make it * * current. * ***************************************************************** UPDATE-ADDRESS.
MOVE TEMP-ADDRESS-LN TO H-ADDRESS-LN.
MOVE 1500 NEW STREET TO H-ADDRESS-DA.
MOVE TEMP-CITY-LN TO H-CITY-LN.
MOVE NEW CITY, CA 97804 TO H-CITY-DA.
MOVE SJCA TO H-EMPLOC.
Figure 282. Sample COBOL two-phase commit application for DB2 private protocol access (Part 6 of 7)
***************************************************************** * Using the employee information that was retrieved from STLEC1 * * and updated previously, insert the employee at STLEC2. ***************************************************************** PROCESS-SITE-2. MOVE INSERT EMPLOYEE TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL INSERT INTO STLEC2.SYSADM.EMP VALUES (:H-EMPNO, :H-NAME, :H-ADDRESS, :H-CITY, :H-EMPLOC, :H-SSNO, :H-BORN, :H-SEX, :H-HIRED, :H-DEPTNO, :H-JOBCODE, :H-SRATE, :H-EDUC, :H-SAL, :H-VALIDCHK) END-EXEC. PERFORM PTSQLCA. ***************************************************************** * COMMIT any changes that were made at STLEC1 and STLEC2. * ***************************************************************** COMMIT-WORK. MOVE COMMIT WORK TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL COMMIT END-EXEC. PERFORM PTSQLCA. ***************************************************************** * Include COBOL standard language procedures * ***************************************************************** INCLUDE-SUBS. EXEC SQL INCLUDE COBSSUB END-EXEC.
Figure 282. Sample COBOL two-phase commit application for DB2 private protocol access (Part 7 of 7)
#include <stdio.h> #include <stdlib.h> #include <string.h> main() { /************************************************************/ /* Include the SQLCA and SQLDA */ /************************************************************/ EXEC SQL INCLUDE SQLCA; EXEC SQL INCLUDE SQLDA; /************************************************************/ /* Declare variables that are not SQL-related. */ /************************************************************/ short int i; /* Loop counter */ /************************************************************/ /* Declare the following: */ /* - Parameters used to call stored procedure GETPRML */ /* - An SQLDA for DESCRIBE PROCEDURE */ /* - An SQLDA for DESCRIBE CURSOR */ /* - Result set variable locators for up to three result */ /* sets */ /************************************************************/ EXEC SQL BEGIN DECLARE SECTION; char procnm[19]; /* INPUT parm -- PROCEDURE name */ char schema[9]; /* INPUT parm -- Users schema */ long int out_code; /* OUTPUT -- SQLCODE from the */ /* SELECT operation. */ struct { short int parmlen; char parmtxt[254]; } parmlst; /* OUTPUT -- RUNOPTS values */ /* for the matching row in */ /* catalog table SYSROUTINES */ struct indicators { short int procnm_ind; short int schema_ind; short int out_code_ind; short int parmlst_ind; } parmind; /* Indicator variable structure */ struct sqlda *proc_da; /* SQLDA for DESCRIBE PROCEDURE */ struct sqlda *res_da; /* SQLDA for DESCRIBE CURSOR static volatile SQL TYPE IS RESULT_SET_LOCATOR *loc1, *loc2, *loc3; /* Locator variables EXEC SQL END DECLARE SECTION; */ */
/*************************************************************/ /* Allocate the SQLDAs to be used for DESCRIBE */ /* PROCEDURE and DESCRIBE CURSOR. Assume that at most */ /* three cursors are returned and that each result set */ /* has no more than five columns. */ /*************************************************************/ proc_da = (struct sqlda *)malloc(SQLDASIZE(3)); res_da = (struct sqlda *)malloc(SQLDASIZE(5)); /************************************************************/ /* Call the GETPRML stored procedure to retrieve the */ /* RUNOPTS values for the stored procedure. In this */ /* example, we request the PARMLIST definition for the */ /* stored procedure named DSN8EP2. */ /* */ /* The call should complete with SQLCODE +466 because */ /* GETPRML returns result sets. */ /************************************************************/ strcpy(procnm,"dsn8ep2 "); /* Input parameter -- PROCEDURE to be found */ strcpy(schema," "); /* Input parameter -- Schema name for proc */ parmind.procnm_ind=0; parmind.schema_ind=0; parmind.out_code_ind=0; /* Indicate that none of the input parameters */ /* have null values */ parmind.parmlst_ind=-1; /* The parmlst parameter is an output parm. */ /* Mark PARMLST parameter as null, so the DB2 */ /* requester doesnt have to send the entire */ /* PARMLST variable to the server. This */ /* helps reduce network I/O time, because */ /* PARMLST is fairly large. */ EXEC SQL CALL GETPRML(:procnm INDICATOR :parmind.procnm_ind, :schema INDICATOR :parmind.schema_ind, :out_code INDICATOR :parmind.out_code_ind, :parmlst INDICATOR :parmind.parmlst_ind); if(SQLCODE!=+466) /* If SQL CALL failed, */ { /* print the SQLCODE and any */ /* message tokens */ printf("SQL CALL failed due to SQLCODE = %d\n",SQLCODE); printf("sqlca.sqlerrmc = "); for(i=0;i<sqlca.sqlerrml;i++) printf("%c",sqlca.sqlerrmc[i]); printf("\n"); }
else /* If the CALL worked, */ if(out_code!=0) /* Did GETPRML hit an error? */ printf("GETPRML failed due to RC = %d\n",out_code); /**********************************************************/ /* If everything worked, do the following: */ /* - Print out the parameters returned. */ /* - Retrieve the result sets returned. */ /**********************************************************/ else { printf("RUNOPTS = %s\n",parmlst.parmtxt); /* Print out the runopts list */ /********************************************************/ /* Use the statement DESCRIBE PROCEDURE to */ /* return information about the result sets in the */ /* SQLDA pointed to by proc_da: */ /* - SQLD contains the number of result sets that were */ /* returned by the stored procedure. */ /* - Each SQLVAR entry has the following information */ /* about a result set: */ /* - SQLNAME contains the name of the cursor that */ /* the stored procedure uses to return the result */ /* set. */ /* - SQLIND contains an estimate of the number of */ /* rows in the result set. */ /* - SQLDATA contains the result locator value for */ /* the result set. */ /********************************************************/ EXEC SQL DESCRIBE PROCEDURE INTO :*proc_da; /********************************************************/ /* Assume that you have examined SQLD and determined */ /* that there is one result set. Use the statement */ /* ASSOCIATE LOCATORS to establish a result set locator */ /* for the result set. */ /********************************************************/ EXEC SQL ASSOCIATE LOCATORS (:loc1) WITH PROCEDURE GETPRML; /********************************************************/ /* Use the statement ALLOCATE CURSOR to associate a */ /* cursor for the result set. */ /********************************************************/ EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1; /********************************************************/ /* Use the statement DESRIBE CURSOR to determine the */ /* columns in the result set. */ /********************************************************/ EXEC SQL DESCRIBE CURSOR C1 INTO :*res_da;
/********************************************************/ /* Call a routine (not shown here) to do the following: */ /* - Allocate a buffer for data and indicator values */ /* fetched from the result table. */ /* - Update the SQLDATA and SQLIND fields in each */ /* SQLVAR of *res_da with the addresses at which to */ /* to put the fetched data and values of indicator */ /* variables. */ /********************************************************/ alloc_outbuff(res_da); /********************************************************/ /* Fetch the data from the result table. */ /********************************************************/ while(SQLCODE==0) EXEC SQL FETCH C1 USING DESCRIPTOR :*res_da; } return; }
IDENTIFICATION DIVISION. PROGRAM-ID. CALPRML. ENVIRONMENT DIVISION. CONFIGURATION SECTION. INPUT-OUTPUT SECTION. FILE-CONTROL. SELECT REPOUT ASSIGN TO UT-S-SYSPRINT. DATA DIVISION. FILE SECTION. FD REPOUT RECORD CONTAINS 127 CHARACTERS LABEL RECORDS ARE OMITTED DATA RECORD IS REPREC. 01 REPREC PIC X(127). WORKING-STORAGE SECTION. ***************************************************** * MESSAGES FOR SQL CALL * ***************************************************** 01 SQLREC. 02 BADMSG PIC X(34) VALUE SQL CALL FAILED DUE TO SQLCODE = . 02 BADCODE PIC +9(5) USAGE DISPLAY. 02 FILLER PIC X(80) VALUE SPACES. 01 ERRMREC. 02 ERRMMSG PIC X(12) VALUE SQLERRMC = . 02 ERRMCODE PIC X(70). 02 FILLER PIC X(38) VALUE SPACES. 01 CALLREC. 02 CALLMSG PIC X(28) VALUE GETPRML FAILED DUE TO RC = . 02 CALLCODE PIC +9(5) USAGE DISPLAY. 02 FILLER PIC X(42) VALUE SPACES. 01 RSLTREC. 02 RSLTMSG PIC X(15) VALUE TABLE NAME IS . 02 TBLNAME PIC X(18) VALUE SPACES. 02 FILLER PIC X(87) VALUE SPACES.
***************************************************** * WORK AREAS * ***************************************************** 01 PROCNM PIC X(18). 01 SCHEMA PIC X(8). 01 OUT-CODE PIC S9(9) USAGE COMP. 01 PARMLST. 49 PARMLEN PIC S9(4) USAGE COMP. 49 PARMTXT PIC X(254). 01 PARMBUF REDEFINES PARMLST. 49 PARBLEN PIC S9(4) USAGE COMP. 49 PARMARRY PIC X(127) OCCURS 2 TIMES. 01 NAME. 49 NAMELEN PIC S9(4) USAGE COMP. 49 NAMETXT PIC X(18). 77 PARMIND PIC S9(4) COMP. 77 I PIC S9(4) COMP. 77 NUMLINES PIC S9(4) COMP. ***************************************************** * DECLARE A RESULT SET LOCATOR FOR THE RESULT SET * * THAT IS RETURNED. * ***************************************************** 01 LOC USAGE SQL TYPE IS RESULT-SET-LOCATOR VARYING. ***************************************************** * SQL INCLUDE FOR SQLCA * ***************************************************** EXEC SQL INCLUDE SQLCA END-EXEC. PROCEDURE DIVISION. *-----------------PROG-START. OPEN OUTPUT REPOUT. * OPEN OUTPUT FILE MOVE DSN8EP2 TO PROCNM. * INPUT PARAMETER -- PROCEDURE TO BE FOUND MOVE SPACES TO SCHEMA. * INPUT PARAMETER -- SCHEMA IN SYSROUTINES MOVE -1 TO PARMIND. * THE PARMLST PARAMETER IS AN OUTPUT PARM. * MARK PARMLST PARAMETER AS NULL, SO THE DB2 * REQUESTER DOESNT HAVE TO SEND THE ENTIRE * PARMLST VARIABLE TO THE SERVER. THIS * HELPS REDUCE NETWORK I/O TIME, BECAUSE * PARMLST IS FAIRLY LARGE. EXEC SQL CALL GETPRML(:PROCNM, :SCHEMA, :OUT-CODE, :PARMLST INDICATOR :PARMIND) END-EXEC.
MAKE THE CALL IF SQLCODE NOT EQUAL TO +466 THEN * IF CALL RETURNED BAD SQLCODE MOVE SQLCODE TO BADCODE WRITE REPREC FROM SQLREC MOVE SQLERRMC TO ERRMCODE WRITE REPREC FROM ERRMREC ELSE PERFORM GET-PARMS PERFORM GET-RESULT-SET. PROG-END. CLOSE REPOUT. * CLOSE OUTPUT FILE GOBACK. PARMPRT. MOVE SPACES TO REPREC. WRITE REPREC FROM PARMARRY(I) AFTER ADVANCING 1 LINE. GET-PARMS. * IF THE CALL WORKED, IF OUT-CODE NOT EQUAL TO 0 THEN * DID GETPRML HIT AN ERROR? MOVE OUT-CODE TO CALLCODE WRITE REPREC FROM CALLREC ELSE * EVERYTHING WORKED DIVIDE 127 INTO PARMLEN GIVING NUMLINES ROUNDED * FIND OUT HOW MANY LINES TO PRINT PERFORM PARMPRT VARYING I FROM 1 BY 1 UNTIL I GREATER THAN NUMLINES. GET-RESULT-SET. ***************************************************** * ASSUME YOU KNOW THAT ONE RESULT SET IS RETURNED, * * AND YOU KNOW THE FORMAT OF THAT RESULT SET. * * ALLOCATE A CURSOR FOR THE RESULT SET, AND FETCH * * THE CONTENTS OF THE RESULT SET. * ***************************************************** EXEC SQL ASSOCIATE LOCATORS (:LOC) WITH PROCEDURE GETPRML END-EXEC. * LINK THE RESULT SET TO THE LOCATOR EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :LOC END-EXEC. * LINK THE CURSOR TO THE RESULT SET PERFORM GET-ROWS VARYING I FROM 1 BY 1 UNTIL SQLCODE EQUAL TO +100. GET-ROWS. EXEC SQL FETCH C1 INTO :NAME END-EXEC. MOVE NAME TO TBLNAME. WRITE REPREC FROM RSLTREC AFTER ADVANCING 1 LINE.
*PROCESS SYSTEM(MVS); CALPRML: PROC OPTIONS(MAIN); /************************************************************/ /* Declare the parameters used to call the GETPRML */ /* stored procedure. */ /************************************************************/ DECLARE PROCNM CHAR(18), /* INPUT parm -- PROCEDURE name */ SCHEMA CHAR(8), /* INPUT parm -- Users schema */ OUT_CODE FIXED BIN(31), /* OUTPUT -- SQLCODE from the */ /* SELECT operation. */ PARMLST CHAR(254) /* OUTPUT -- RUNOPTS for */ VARYING, /* the matching row in the */ /* catalog table SYSROUTINES */ PARMIND FIXED BIN(15); /* PARMLST indicator variable */ /************************************************************/ /* Include the SQLCA */ /************************************************************/ EXEC SQL INCLUDE SQLCA; /************************************************************/ /* Call the GETPRML stored procedure to retrieve the */ /* RUNOPTS values for the stored procedure. In this */ /* example, we request the RUNOPTS values for the */ /* stored procedure named DSN8EP2. */ /************************************************************/ PROCNM = DSN8EP2; /* Input parameter -- PROCEDURE to be found */ SCHEMA = ; /* Input parameter -- SCHEMA in SYSROUTINES */ PARMIND = -1; /* The PARMLST parameter is an output parm. */ /* Mark PARMLST parameter as null, so the DB2 */ /* requester doesnt have to send the entire */ /* PARMLST variable to the server. This */ /* helps reduce network I/O time, because */ /* PARMLST is fairly large. */ EXEC SQL CALL GETPRML(:PROCNM, :SCHEMA, :OUT_CODE, :PARMLST INDICATOR :PARMIND);
IF SQLCODE¬=0 THEN /* If SQL CALL failed, */ DO; PUT SKIP EDIT(SQL CALL failed due to SQLCODE = , SQLCODE) (A(34),A(14)); PUT SKIP EDIT(SQLERRM = , SQLERRM) (A(10),A(70)); END;
ELSE /* If the CALL worked, */ IF OUT_CODE¬=0 THEN /* Did GETPRML hit an error? */ PUT SKIP EDIT(GETPRML failed due to RC = , OUT_CODE) (A(33),A(14)); ELSE /* Everything worked. */ PUT SKIP EDIT(RUNOPTS = , PARMLST) (A(11),A(200));
RETURN; END CALPRML;
v Searches the DB2 catalog table SYSTABLES for all tables in which the value of CREATOR matches the value of input parameter SCHEMA. The stored procedure uses a cursor to return the table names. The linkage convention used for this stored procedure is GENERAL. The output parameters from this stored procedure contain the SQLCODE from the SELECT statement and the value of the RUNOPTS column from SYSROUTINES. The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN, OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT) LANGUAGE C DETERMINISTIC READS SQL DATA EXTERNAL NAME "GETPRML" COLLID GETPRML ASUTIME NO LIMIT PARAMETER STYLE GENERAL STAY RESIDENT NO RUN OPTIONS "MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)" WLM ENVIRONMENT SAMPPROG PROGRAM TYPE MAIN SECURITY DB2 RESULT SETS 2 COMMIT ON RETURN NO;
#pragma runopts(plist(os)) #include <stdlib.h> EXEC SQL INCLUDE SQLCA; /***************************************************************/ /* Declare C variables for SQL operations on the parameters. */ /* These are local variables to the C program, which you must */ /* copy to and from the parameter list provided to the stored */ /* procedure. */ /***************************************************************/ EXEC SQL BEGIN DECLARE SECTION; char PROCNM[19]; char SCHEMA[9]; char PARMLST[255]; EXEC SQL END DECLARE SECTION; /***************************************************************/ /* Declare cursors for returning result sets to the caller. */ /***************************************************************/ EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:SCHEMA; main(argc,argv) int argc; char *argv[]; { /********************************************************/ /* Copy the input parameters into the area reserved in */ /* the program for SQL processing. */ /********************************************************/ strcpy(PROCNM, argv[1]); strcpy(SCHEMA, argv[2]); /********************************************************/ /* Issue the SQL SELECT against the SYSROUTINES */ /* DB2 catalog table. */ /********************************************************/ strcpy(PARMLST, ""); /* Clear PARMLST */ EXEC SQL SELECT RUNOPTS INTO :PARMLST FROM SYSIBM.ROUTINES WHERE NAME=:PROCNM AND SCHEMA=:SCHEMA;
/********************************************************/ /* Copy SQLCODE to the output parameter list. */ /********************************************************/ *(int *) argv[3] = SQLCODE; /********************************************************/ /* Copy the PARMLST value returned by the SELECT back to*/ /* the parameter list provided to this stored procedure.*/ /********************************************************/ strcpy(argv[4], PARMLST); /********************************************************/ /* Open cursor C1 to cause DB2 to return a result set */ /* to the caller. */ /********************************************************/ EXEC SQL OPEN C1; }
v Searches the DB2 catalog table SYSROUTINES for a row that matches the input parameters from the client program. The two input parameters contain values for NAME and SCHEMA. v Searches the DB2 catalog table SYSTABLES for all tables in which the value of CREATOR matches the value of input parameter SCHEMA. The stored procedure uses a cursor to return the table names. The linkage convention for this stored procedure is GENERAL WITH NULLS. The output parameters from this stored procedure contain the SQLCODE from the SELECT operation, and the value of the RUNOPTS column retrieved from the SYSROUTINES table. The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN, OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT) LANGUAGE C DETERMINISTIC READS SQL DATA EXTERNAL NAME "GETPRML" COLLID GETPRML ASUTIME NO LIMIT PARAMETER STYLE GENERAL WITH NULLS STAY RESIDENT NO RUN OPTIONS "MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)" WLM ENVIRONMENT SAMPPROG PROGRAM TYPE MAIN SECURITY DB2 RESULT SETS 2 COMMIT ON RETURN NO;
#pragma runopts(plist(os)) #include <stdlib.h> EXEC SQL INCLUDE SQLCA; /***************************************************************/ /* Declare C variables used for SQL operations on the */ /* parameters. These are local variables to the C program, */ /* which you must copy to and from the parameter list provided */ /* to the stored procedure. */ /***************************************************************/ EXEC SQL BEGIN DECLARE SECTION; char PROCNM[19]; char SCHEMA[9]; char PARMLST[255]; struct INDICATORS { short int PROCNM_IND; short int SCHEMA_IND; short int OUT_CODE_IND; short int PARMLST_IND; } PARM_IND; EXEC SQL END DECLARE SECTION; /***************************************************************/ /* Declare cursors for returning result sets to the caller. */ /***************************************************************/ EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:SCHEMA; main(argc,argv) int argc; char *argv[]; { /********************************************************/ /* Copy the input parameters into the area reserved in */ /* the local program for SQL processing. */ /********************************************************/ strcpy(PROCNM, argv[1]); strcpy(SCHEMA, argv[2]); /********************************************************/ /* Copy null indicator values for the parameter list. */ /********************************************************/ memcpy(&PARM_IND,(struct INDICATORS *) argv[5], sizeof(PARM_IND));
Figure 287. A C stored procedure with linkage convention GENERAL WITH NULLS (Part 1 of 2)
/********************************************************/ /* If any input parameter is NULL, return an error */ /* return code and assign a NULL value to PARMLST. */ /********************************************************/ if (PARM_IND.PROCNM_IND<0 || PARM_IND.SCHEMA_IND<0 || { *(int *) argv[3] = 9999; /* set output return code */ PARM_IND.OUT_CODE_IND = 0; /* value is not NULL */ PARM_IND.PARMLST_IND = -1; /* PARMLST is NULL */ } else { /********************************************************/ /* If the input parameters are not NULL, issue the SQL */ /* SELECT against the SYSIBM.SYSROUTINES catalog */ /* table. */ /********************************************************/ strcpy(PARMLST, ""); /* Clear PARMLST */ EXEC SQL SELECT RUNOPTS INTO :PARMLST FROM SYSIBM.SYSROUTINES WHERE NAME=:PROCNM AND SCHEMA=:SCHEMA; /********************************************************/ /* Copy SQLCODE to the output parameter list. */ /********************************************************/ *(int *) argv[3] = SQLCODE; PARM_IND.OUT_CODE_IND = 0; /* OUT_CODE is not NULL */ } /********************************************************/ /* Copy the RUNOPTS value back to the output parameter */ /* area. */ /********************************************************/ strcpy(argv[4], PARMLST); /********************************************************/ /* Copy the null indicators back to the output parameter*/ /* area. */ /********************************************************/ memcpy((struct INDICATORS *) argv[5],&PARM_IND, sizeof(PARM_IND)); /********************************************************/ /* Open cursor C1 to cause DB2 to return a result set */ /* to the caller. */ /********************************************************/ EXEC SQL OPEN C1; }
Figure 287. A C stored procedure with linkage convention GENERAL WITH NULLS (Part 2 of 2)
The output parameters from this stored procedure contain the SQLCODE from the SELECT operation, and the value of the RUNOPTS column retrieved from the SYSROUTINES table. The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN, OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT) LANGUAGE COBOL DETERMINISTIC READS SQL DATA EXTERNAL NAME "GETPRML" COLLID GETPRML ASUTIME NO LIMIT PARAMETER STYLE GENERAL STAY RESIDENT NO RUN OPTIONS "MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)" WLM ENVIRONMENT SAMPPROG PROGRAM TYPE MAIN SECURITY DB2 RESULT SETS 2 COMMIT ON RETURN NO;
CBL RENT IDENTIFICATION DIVISION. PROGRAM-ID. GETPRML. AUTHOR. EXAMPLE. DATE-WRITTEN. 03/25/98. ENVIRONMENT DIVISION. INPUT-OUTPUT SECTION. FILE-CONTROL. DATA DIVISION. FILE SECTION. WORKING-STORAGE SECTION. EXEC SQL INCLUDE SQLCA END-EXEC. *************************************************** * DECLARE A HOST VARIABLE TO HOLD INPUT SCHEMA *************************************************** 01 INSCHEMA PIC X(8). *************************************************** * DECLARE CURSOR FOR RETURNING RESULT SETS *************************************************** * EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:INSCHEMA END-EXEC. * LINKAGE SECTION. *************************************************** * DECLARE THE INPUT PARAMETERS FOR THE PROCEDURE *************************************************** 01 PROCNM PIC X(18). 01 SCHEMA PIC X(8). ******************************************************* * DECLARE THE OUTPUT PARAMETERS FOR THE PROCEDURE ******************************************************* 01 OUT-CODE PIC S9(9) USAGE BINARY. 01 PARMLST. 49 PARMLST-LEN PIC S9(4) USAGE BINARY. 49 PARMLST-TEXT PIC X(254). PROCEDURE DIVISION USING PROCNM, SCHEMA, OUT-CODE, PARMLST.
Figure 288. A COBOL stored procedure with linkage convention GENERAL (Part 1 of 2)
******************************************************* * Issue the SQL SELECT against the SYSIBM.SYSROUTINES * DB2 catalog table. ******************************************************* EXEC SQL SELECT RUNOPTS INTO :PARMLST FROM SYSIBM.SYSROUTINES WHERE NAME=:PROCNM AND SCHEMA=:SCHEMA END-EXEC. ******************************************************* * COPY SQLCODE INTO THE OUTPUT PARAMETER AREA ******************************************************* MOVE SQLCODE TO OUT-CODE. ******************************************************* * OPEN CURSOR C1 TO CAUSE DB2 TO RETURN A RESULT SET * TO THE CALLER. ******************************************************* EXEC SQL OPEN C1 END-EXEC. PROG-END. GOBACK.
Figure 288. A COBOL stored procedure with linkage convention GENERAL (Part 2 of 2)
CBL RENT IDENTIFICATION DIVISION. PROGRAM-ID. GETPRML. AUTHOR. EXAMPLE. DATE-WRITTEN. 03/25/98. ENVIRONMENT DIVISION. INPUT-OUTPUT SECTION. FILE-CONTROL. DATA DIVISION. FILE SECTION. * WORKING-STORAGE SECTION. * EXEC SQL INCLUDE SQLCA END-EXEC. * *************************************************** * DECLARE A HOST VARIABLE TO HOLD INPUT SCHEMA *************************************************** 01 INSCHEMA PIC X(8). *************************************************** * DECLARE CURSOR FOR RETURNING RESULT SETS *************************************************** * EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:INSCHEMA END-EXEC. * LINKAGE SECTION. *************************************************** * DECLARE THE INPUT PARAMETERS FOR THE PROCEDURE *************************************************** 01 PROCNM PIC X(18). 01 SCHEMA PIC X(8). *************************************************** * DECLARE THE OUTPUT PARAMETERS FOR THE PROCEDURE *************************************************** 01 OUT-CODE PIC S9(9) USAGE BINARY. 01 PARMLST. 49 PARMLST-LEN PIC S9(4) USAGE BINARY. 49 PARMLST-TEXT PIC X(254). *************************************************** * DECLARE THE STRUCTURE CONTAINING THE NULL * INDICATORS FOR THE INPUT AND OUTPUT PARAMETERS. *************************************************** 01 IND-PARM. 03 PROCNM-IND PIC S9(4) USAGE BINARY. 03 SCHEMA-IND PIC S9(4) USAGE BINARY. 03 OUT-CODE-IND PIC S9(4) USAGE BINARY. 03 PARMLST-IND PIC S9(4) USAGE BINARY.
Figure 289. A COBOL stored procedure with linkage convention GENERAL WITH NULLS (Part 1 of 2)
PROCEDURE DIVISION USING PROCNM, SCHEMA, OUT-CODE, PARMLST, IND-PARM. ******************************************************* * If any input parameter is null, return a null value * for PARMLST and set the output return code to 9999. ******************************************************* IF PROCNM-IND < 0 OR SCHEMA-IND < 0 MOVE 9999 TO OUT-CODE MOVE 0 TO OUT-CODE-IND MOVE -1 TO PARMLST-IND ELSE ******************************************************* * Issue the SQL SELECT against the SYSIBM.SYSROUTINES * DB2 catalog table. ******************************************************* EXEC SQL SELECT RUNOPTS INTO :PARMLST FROM SYSIBM.SYSROUTINES WHERE NAME=:PROCNM AND SCHEMA=:SCHEMA END-EXEC MOVE 0 TO PARMLST-IND ******************************************************* * COPY SQLCODE INTO THE OUTPUT PARAMETER AREA ******************************************************* MOVE SQLCODE TO OUT-CODE MOVE 0 TO OUT-CODE-IND. * ******************************************************* * OPEN CURSOR C1 TO CAUSE DB2 TO RETURN A RESULT SET * TO THE CALLER. ******************************************************* EXEC SQL OPEN C1 END-EXEC. PROG-END. GOBACK.
Figure 289. A COBOL stored procedure with linkage convention GENERAL WITH NULLS (Part 2 of 2)
WLM ENVIRONMENT SAMPPROG PROGRAM TYPE MAIN SECURITY DB2 RESULT SETS 0 COMMIT ON RETURN NO;
*PROCESS SYSTEM(MVS); GETPRML: PROC(PROCNM, SCHEMA, OUT_CODE, PARMLST) OPTIONS(MAIN NOEXECOPS REENTRANT);
DECLARE PROCNM CHAR(18), /* INPUT parm -- PROCEDURE name */ SCHEMA CHAR(8), /* INPUT parm -- Users SCHEMA */ OUT_CODE FIXED BIN(31), /* OUTPUT -- SQLCODE from the SELECT operation. */ PARMLST CHAR(254) VARYING; /* OUTPUT -- RUNOPTS for the matching row in SYSIBM.SYSROUTINES */
EXEC SQL INCLUDE SQLCA;
/************************************************************/ /* Execute SELECT from SYSIBM.SYSROUTINES in the catalog. */ /************************************************************/ EXEC SQL SELECT RUNOPTS INTO :PARMLST FROM SYSIBM.SYSROUTINES WHERE NAME=:PROCNM AND SCHEMA=:SCHEMA;
OUT_CODE = SQLCODE; /* return SQLCODE to caller */ RETURN; END GETPRML;
PROGRAM TYPE MAIN SECURITY DB2 RESULT SETS 0 COMMIT ON RETURN NO;
*PROCESS SYSTEM(MVS); GETPRML: PROC(PROCNM, SCHEMA, OUT_CODE, PARMLST, INDICATORS) OPTIONS(MAIN NOEXECOPS REENTRANT);
DECLARE PROCNM CHAR(18), /* INPUT parm -- PROCEDURE name */ SCHEMA CHAR(8), /* INPUT parm -- Users schema */ OUT_CODE FIXED BIN(31), /* OUTPUT -- SQLCODE from the SELECT operation. */ PARMLST CHAR(254) VARYING; /* OUTPUT -- PARMLIST for the matching row in SYSIBM.SYSROUTINES */
DECLARE 1 INDICATORS, /* Declare null indicators for input and output parameters. */ 3 PROCNM_IND FIXED BIN(15), 3 SCHEMA_IND FIXED BIN(15), 3 OUT_CODE_IND FIXED BIN(15), 3 PARMLST_IND FIXED BIN(15);
EXEC SQL INCLUDE SQLCA;
IF PROCNM_IND<0 | SCHEMA_IND<0 THEN DO; OUT_CODE = 9999; OUT_CODE_IND = 0; /* Output return code is not NULL.*/ PARMLST_IND = -1; /* Assign NULL value to PARMLST. */ END;
ELSE /* If input parms are not NULL, */ DO; /************************************************************/ /* Issue the SQL SELECT against the SYSIBM.SYSROUTINES */ /* DB2 catalog table. */ /************************************************************/ EXEC SQL SELECT RUNOPTS INTO :PARMLST FROM SYSIBM.SYSROUTINES WHERE NAME=:PROCNM AND SCHEMA=:SCHEMA; PARMLST_IND = 0; /* Mark PARMLST as not NULL. */ OUT_CODE = SQLCODE; /* return SQLCODE to caller */ OUT_CODE_IND = 0; END;
RETURN; END GETPRML;
Figure 291. A PL/I stored procedure with linkage convention GENERAL WITH NULLS
Assume that the PARTLIST table is populated with the values that are in Table 199:
Table 199. PARTLIST table

PART   SUBPART   QUANTITY
00     01        5
00     05        3
01     02        2
01     03        3
01     04        4
01     06        3
02     05        7
02     06        6
03     07        6
04     08        10
04     09        11
05     10        10
05     11        10
06     12        10
06     13        10
07     14        8
07     12        8
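To try these queries yourself, you need a table that contains this data. The DDL for PARTLIST is not shown in this appendix, so the following statements are only a sketch; the CHAR(2) part columns and the INTEGER quantity column are assumptions that are consistent with the quoted part numbers and the quantity arithmetic in the examples that follow.

CREATE TABLE PARTLIST
  (PART     CHAR(2) NOT NULL,
   SUBPART  CHAR(2) NOT NULL,
   QUANTITY INTEGER NOT NULL);

INSERT INTO PARTLIST VALUES ('00', '01', 5);
INSERT INTO PARTLIST VALUES ('00', '05', 3);
INSERT INTO PARTLIST VALUES ('01', '02', 2);
INSERT INTO PARTLIST VALUES ('01', '03', 3);
INSERT INTO PARTLIST VALUES ('01', '04', 4);
INSERT INTO PARTLIST VALUES ('01', '06', 3);
INSERT INTO PARTLIST VALUES ('02', '05', 7);
INSERT INTO PARTLIST VALUES ('02', '06', 6);
INSERT INTO PARTLIST VALUES ('03', '07', 6);
INSERT INTO PARTLIST VALUES ('04', '08', 10);
INSERT INTO PARTLIST VALUES ('04', '09', 11);
INSERT INTO PARTLIST VALUES ('05', '10', 10);
INSERT INTO PARTLIST VALUES ('05', '11', 10);
INSERT INTO PARTLIST VALUES ('06', '12', 10);
INSERT INTO PARTLIST VALUES ('06', '13', 10);
INSERT INTO PARTLIST VALUES ('07', '14', 8);
INSERT INTO PARTLIST VALUES ('07', '12', 8);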
Example 1: Single level explosion: A single level explosion answers the question, "What parts are needed to build the part identified by '01'?" The list includes the direct subparts, subparts of the subparts, and so on. However, if a part is used multiple times, its subparts are listed only once.
WITH RPL (PART, SUBPART, QUANTITY) AS
  (SELECT ROOT.PART, ROOT.SUBPART, ROOT.QUANTITY
     FROM PARTLIST ROOT
     WHERE ROOT.PART = '01'
   UNION ALL
   SELECT CHILD.PART, CHILD.SUBPART, CHILD.QUANTITY
     FROM RPL PARENT, PARTLIST CHILD
     WHERE PARENT.SUBPART = CHILD.PART)
SELECT DISTINCT PART, SUBPART, QUANTITY
  FROM RPL
  ORDER BY PART, SUBPART, QUANTITY;
The preceding query includes a common table expression, identified by the name RPL, that expresses the recursive part of this query. It illustrates the basic elements of a recursive common table expression. The first operand (fullselect) of the UNION, referred to as the initialization fullselect, gets the direct subparts of part '01'. The FROM clause of this fullselect refers to the source table and never refers to itself (RPL in this case). The result of this first fullselect goes into the common table expression RPL. As in this example, the UNION must always be a UNION ALL. The second operand (fullselect) of the UNION uses RPL to compute subparts of subparts: its FROM clause refers to the common table expression RPL and the source table PARTLIST, joining a part from the source table (child) to a subpart of the current result in RPL (parent). The result then goes back into RPL. The second operand of the UNION is applied repeatedly until no more subparts exist. The SELECT DISTINCT in the main fullselect of this query ensures that the same part and subpart combination is not listed more than once. The result of the query is shown in Table 200:
Table 200. Result table for example 1

PART   SUBPART   QUANTITY
01     02        2
01     03        3
01     04        4
01     06        3
02     05        7
02     06        6
03     07        6
04     08        10
04     09        11
05     10        10
05     11        10
06     12        10
06     13        10
07     12        8
07     14        8
Observe in the result that part 01 contains subpart 02 which contains subpart 06 and so on. Further, notice that part 06 is reached twice, once through part 01 directly and another time through part 02. In the output, however, the subparts of part 06 are listed only once (this is the result of using a SELECT DISTINCT).
Remember that with recursive common table expressions it is possible to introduce an infinite loop. In this example, an infinite loop would be created if the search condition of the second operand that joins the parent and child tables was coded as follows:
WHERE PARENT.SUBPART = CHILD.SUBPART
This infinite loop is created by not coding what is intended. You should carefully determine what to code so that there is a definite end to the recursion cycle. The result produced by this example could be produced in an application program without using a recursive common table expression. However, such an application would require coding a different query for every level of recursion. Furthermore, the application would need to put all of the results back in the database to order the final result. This approach complicates the application logic and does not perform well. The application logic becomes even more difficult and inefficient for other bill of material queries, such as summarized and indented explosion queries.

Example 2: Summarized explosion: A summarized explosion answers the question, "What is the total quantity of each part that is required to build part '01'?" The main difference from a single level explosion is the need to aggregate the quantities. A single level explosion indicates the quantity of subparts required for the part whenever it is required. It does not indicate how many of each subpart is needed to build part '01'.
WITH RPL (PART, SUBPART, QUANTITY) AS
  (  SELECT ROOT.PART, ROOT.SUBPART, ROOT.QUANTITY
       FROM PARTLIST ROOT
       WHERE ROOT.PART = '01'
   UNION ALL
     SELECT PARENT.PART, CHILD.SUBPART,
            PARENT.QUANTITY*CHILD.QUANTITY
       FROM RPL PARENT, PARTLIST CHILD
       WHERE PARENT.SUBPART = CHILD.PART
  )
SELECT PART, SUBPART, SUM(QUANTITY) AS "Total QTY Used"
  FROM RPL
  GROUP BY PART, SUBPART
  ORDER BY PART, SUBPART;
In the preceding query, the select list of the second operand of the UNION in the recursive common table expression, identified by the name RPL, shows the aggregation of the quantity. To determine how many of each subpart are used, the quantity of the parent is multiplied by the quantity per parent of the child. If a part is used multiple times in different places, it requires another, final aggregation. This is done by grouping the parts and subparts in the common table expression RPL and using the SUM column function in the select list of the main fullselect. The result of the query is shown in Table 201:
Table 201. Result table for example 2

PART   SUBPART   Total QTY Used
01     02        2
01     03        3
01     04        4
01     05        14
01     06        15
01     07        18
01     08        40
01     09        44
01     10        140
01     11        140
01     12        294
01     13        150
01     14        144
Consider the total quantity for subpart 06. The value of 15 is derived from a quantity of 3 directly for part '01' and a quantity of 6 for part 02, which is needed two times by part '01' (3 + 2 * 6 = 15).

Example 3: Controlling depth: You can control the depth of a recursive query to answer the question, "What are the first two levels of parts that are needed to build part '01'?" For the sake of clarity in this example, the level of each part is included in the result table.
WITH RPL (LEVEL, PART, SUBPART, QUANTITY) AS
  (  SELECT 1, ROOT.PART, ROOT.SUBPART, ROOT.QUANTITY
       FROM PARTLIST ROOT
       WHERE ROOT.PART = '01'
   UNION ALL
     SELECT PARENT.LEVEL+1, CHILD.PART, CHILD.SUBPART, CHILD.QUANTITY
       FROM RPL PARENT, PARTLIST CHILD
       WHERE PARENT.SUBPART = CHILD.PART
         AND PARENT.LEVEL < 2
  )
SELECT PART, LEVEL, SUBPART, QUANTITY
  FROM RPL;
This query is similar to the query in example 1. The column LEVEL is introduced to count the level each subpart is from the original part. In the initialization fullselect, the value for the LEVEL column is initialized to 1. In the subsequent fullselect, the level from the parent table increments by 1. To control the number of levels in the result, the second fullselect includes the condition that the level of the parent must be less than 2. This ensures that the second fullselect only processes children to the second level. The result of the query is shown in Table 202:
Table 202. Result table for example 3

PART   LEVEL   SUBPART   QUANTITY
01     1       02        2
01     1       03        3
01     1       04        4
01     1       06        3
02     2       05        7
02     2       06        6
03     2       07        6
04     2       08        10
04     2       09        11
06     2       12        10
06     2       13        10
v Using the SUBSTR function to convert a varying-length string to a fixed-length string v Appending additional blanks to the REBIND PLAN and REBIND PACKAGE subcommands, so that the DSN command processor can accept the record length as valid input If the SELECT statement returns rows, then DSNTIAUL generates REBIND subcommands for the plans or packages identified in the returned rows. Put those subcommands in a sequential data set, where you can then edit them. For REBIND PACKAGE subcommands, delete any extraneous blanks in the package name, using either TSO edit commands or the DB2 CLIST DSNTEDIT. For both REBIND PLAN and REBIND PACKAGE subcommands, add the DSN command that the statement needs as the first line in the sequential data set, and add END as the last line, using TSO edit commands. When you have edited the sequential data set, you can run it to rebind the selected plans or packages. If the SELECT statement returns no qualifying rows, then DSNTIAUL does not generate REBIND subcommands. The examples in this section generate REBIND subcommands that work in DB2 UDB for z/OS Version 8. You might need to modify the examples for prior releases of DB2 that do not allow all of the same syntax. Example 1: REBIND all plans without terminating because of unavailable resources.
SELECT SUBSTR('REBIND PLAN(' CONCAT NAME CONCAT ')',1,45)
  FROM SYSIBM.SYSPLAN;
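For illustration only, after the editing steps that are described above, the sequential data set for Example 1 might contain lines like the following (the plan names are hypothetical):

DSN SYSTEM(DSN)
REBIND PLAN(PLANA)
REBIND PLAN(PLANB)
END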
Example 2: REBIND all versions of all packages without terminating because of unavailable resources.
SELECT SUBSTR('REBIND PACKAGE(' CONCAT COLLID CONCAT '.'
       CONCAT NAME CONCAT '.(*))',1,55)
  FROM SYSIBM.SYSPACKAGE;
Example 3: REBIND all plans bound before a given date and time.
SELECT SUBSTR('REBIND PLAN(' CONCAT NAME CONCAT ')',1,45)
  FROM SYSIBM.SYSPLAN
  WHERE BINDDATE <= 'yyyymmdd' AND
        BINDTIME <= 'hhmmssth';
where yyyymmdd represents the date portion and hhmmssth represents the time portion of the timestamp string. Example 4: REBIND all versions of all packages bound before a given date and time.
SELECT SUBSTR('REBIND PACKAGE(' CONCAT COLLID CONCAT '.'
       CONCAT NAME CONCAT '.(*))',1,55)
  FROM SYSIBM.SYSPACKAGE
  WHERE BINDTIME <= 'timestamp';
Example 5: REBIND all plans bound since a given date and time.
SELECT SUBSTR('REBIND PLAN(' CONCAT NAME CONCAT ')',1,45)
  FROM SYSIBM.SYSPLAN
  WHERE BINDDATE >= 'yyyymmdd' AND
        BINDTIME >= 'hhmmssth';
where yyyymmdd represents the date portion and hhmmssth represents the time portion of the timestamp string. Example 6: REBIND all versions of all packages bound since a given date and time.
SELECT SUBSTR('REBIND PACKAGE(' CONCAT COLLID CONCAT '.'
       CONCAT NAME CONCAT '.(*))',1,55)
  FROM SYSIBM.SYSPACKAGE
  WHERE BINDTIME >= 'timestamp';
where timestamp is an ISO timestamp string. Example 7: REBIND all plans bound within a given date and time range.
SELECT SUBSTR('REBIND PLAN(' CONCAT NAME CONCAT ')',1,45)
  FROM SYSIBM.SYSPLAN
  WHERE (BINDDATE >= 'yyyymmdd' AND
         BINDTIME >= 'hhmmssth') AND
        (BINDDATE <= 'yyyymmdd' AND
         BINDTIME <= 'hhmmssth');
where yyyymmdd represents the date portion and hhmmssth represents the time portion of the timestamp string. Example 8: REBIND all versions of all packages bound within a given date and time range.
SELECT SUBSTR('REBIND PACKAGE(' CONCAT COLLID CONCAT '.'
       CONCAT NAME CONCAT '.(*))',1,55)
  FROM SYSIBM.SYSPACKAGE
  WHERE BINDTIME >= 'timestamp1' AND
        BINDTIME <= 'timestamp2';
where timestamp1 and timestamp2 are ISO timestamp strings. Example 9: REBIND all invalid plans.
SELECT SUBSTR('REBIND PLAN(' CONCAT NAME CONCAT ')',1,45)
  FROM SYSIBM.SYSPLAN
  WHERE VALID = 'N';
Example 11: REBIND all plans bound with ISOLATION level of cursor stability.
SELECT SUBSTR('REBIND PLAN(' CONCAT NAME CONCAT ')',1,45)
  FROM SYSIBM.SYSPLAN
  WHERE ISOLATION = 'S';
Example 12: REBIND all versions of all packages that allow CPU and/or I/O parallelism.
SELECT SUBSTR('REBIND PACKAGE(' CONCAT COLLID CONCAT '.'
       CONCAT NAME CONCAT '.(*))',1,55)
  FROM SYSIBM.SYSPACKAGE
  WHERE DEGREE = 'ANY';
The date and time period has the following format:
YYYY  The four-digit year. For example: 2003.
MM    The two-digit month, which can be a value between 01 and 12.
DD    The two-digit day, which can be a value between 01 and 31.
hh    The two-digit hour, which can be a value between 01 and 24.
mm    The two-digit minute, which can be a value between 00 and 59.
ss    The two-digit second, which can be a value between 00 and 59.
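As an illustration only (the timestamp values are arbitrary), the query in Example 8 with concrete timestamp strings substituted for timestamp1 and timestamp2 might look like this:

SELECT SUBSTR('REBIND PACKAGE(' CONCAT COLLID CONCAT '.'
       CONCAT NAME CONCAT '.(*))',1,55)
  FROM SYSIBM.SYSPACKAGE
  WHERE BINDTIME >= '2003-01-01-00.00.00' AND
        BINDTIME <= '2003-12-31-23.59.59';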
//REBINDS JOB MSGLEVEL=(1,1),CLASS=A,MSGCLASS=A,USER=SYSADM,
//             REGION=1024K
//*********************************************************************/
//SETUP    EXEC PGM=IKJEFT01
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 DSN SYSTEM(DSN)
 RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB81) PARMS('SQL') LIB('DSN810.RUNLIB.LOAD')
 END
//SYSPRINT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSPUNCH DD  SYSOUT=*
//SYSREC00 DD  DSN=SYSADM.SYSTSIN.DATA,
//             UNIT=SYSDA,DISP=SHR
Figure 292. Example JCL: Rebind all packages that were bound within a specified date and time period (Part 1 of 2)
//*********************************************************************/
//*
//*  GENERATE SUBCOMMANDS TO REBIND ALL PACKAGES BOUND IN 1994
//*
//*********************************************************************/
//SYSIN    DD  *
  SELECT SUBSTR('REBIND PACKAGE(' CONCAT COLLID CONCAT '.'
         CONCAT NAME CONCAT '.(*))',1,55)
    FROM SYSIBM.SYSPACKAGE
    WHERE BINDTIME >= 'YYYY-MM-DD-hh.mm.ss' AND
          BINDTIME <= 'YYYY-MM-DD-hh.mm.ss';
/*
//*********************************************************************/
//*
//*  STRIP THE BLANKS OUT OF THE REBIND SUBCOMMANDS
//*
//*********************************************************************/
//STRIP    EXEC PGM=IKJEFT01
//SYSPROC  DD  DSN=SYSADM.DSNCLIST,DISP=SHR
//SYSTSPRT DD  SYSOUT=*
//SYSPRINT DD  SYSOUT=*
//SYSOUT   DD  SYSOUT=*
//SYSTSIN  DD  *
  DSNTEDIT SYSADM.SYSTSIN.DATA
//SYSIN    DD  DUMMY
/*
//*********************************************************************/
//*
//*  PUT IN THE DSN COMMAND STATEMENTS
//*
//*********************************************************************/
//EDIT     EXEC PGM=IKJEFT01
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
  EDIT 'SYSADM.SYSTSIN.DATA' DATA NONUM
  TOP
  INSERT DSN SYSTEM(DSN)
  BOTTOM
  INSERT END
  TOP
  LIST * 99999
  END SAVE
/*
//*********************************************************************/
//*
//*  EXECUTE THE REBIND PACKAGE SUBCOMMANDS THROUGH DSN
//*
//*********************************************************************/
//LOCAL    EXEC PGM=IKJEFT01
//DBRMLIB  DD  DSN=DSN810.DBRMLIB.DATA,
//             DISP=SHR
//SYSTSPRT DD  SYSOUT=*
//SYSPRINT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSTSIN  DD  DSN=SYSADM.SYSTSIN.DATA,
//             UNIT=SYSDA,DISP=SHR
/*
Figure 292. Example JCL: Rebind all packages that were bound within a specified date and time period (Part 2 of 2)
Figure 293 on page 1108 shows sample JCL that rebinds, with the DEGREE(ANY) option, all plans that were bound without specifying the DEGREE keyword on BIND.
//REBINDS JOB MSGLEVEL=(1,1),CLASS=A,MSGCLASS=A,USER=SYSADM,
//             REGION=1024K
//*********************************************************************/
//SETUP    EXEC TSOBATCH
//SYSPRINT DD  SYSOUT=*
//SYSPUNCH DD  SYSOUT=*
//SYSREC00 DD  DSN=SYSADM.SYSTSIN.DATA,
//             UNIT=SYSDA,DISP=SHR
//*********************************************************************/
//*
//*  REBIND ALL PLANS THAT WERE BOUND WITHOUT SPECIFYING THE DEGREE
//*  KEYWORD ON BIND WITH DEGREE(ANY)
//*
//*********************************************************************/
//SYSTSIN  DD  *
 DSN S(DSN)
 RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB81) PARMS('SQL')
 END
//SYSIN    DD  *
  SELECT SUBSTR('REBIND PLAN(' CONCAT NAME CONCAT ') DEGREE(ANY)',1,45)
    FROM SYSIBM.SYSPLAN
    WHERE DEGREE = ' ';
/*
//*********************************************************************/
//*
//*  PUT IN THE DSN COMMAND STATEMENTS
//*
//*********************************************************************/
//EDIT     EXEC PGM=IKJEFT01
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
  EDIT 'SYSADM.SYSTSIN.DATA' DATA NONUM
  TOP
  INSERT DSN S(DSN)
  BOTTOM
  INSERT END
  TOP
  LIST * 99999
  END SAVE
/*
//*********************************************************************/
//*
//*  EXECUTE THE REBIND SUBCOMMANDS THROUGH DSN
//*
//*********************************************************************/
//REBIND   EXEC PGM=IKJEFT01
//STEPLIB  DD  DSN=SYSADM.TESTLIB,DISP=SHR
//         DD  DSN=DSN810.SDSNLOAD,DISP=SHR
//DBRMLIB  DD  DSN=SYSADM.DBRMLIB.DATA,DISP=SHR
//SYSTSPRT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSPRINT DD  SYSOUT=*
//SYSOUT   DD  SYSOUT=*
//SYSTSIN  DD  DSN=SYSADM.SYSTSIN.DATA,DISP=SHR
//SYSIN    DD  DUMMY
/*
Figure 293. Example JCL: Rebind selected plans with a different bind option
Reserved words
Table 203 on page 1110 lists the words that cannot be used as ordinary identifiers in some contexts because they might be interpreted as SQL keywords. For example, ALL cannot be a column name in a SELECT statement. Each word, however, can be used as a delimited identifier in contexts where it otherwise cannot be used as an ordinary identifier. For example, if the quotation mark (") is the escape character that begins and ends delimited identifiers, ALL can appear as a column name in a SELECT statement. In addition, some sections of this book might indicate words that cannot be used in the specific context that is being described.
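As a brief illustration (the table name is hypothetical, and it assumes that the quotation mark is the escape character for delimited identifiers), the reserved word ALL can be used as a column name when it is delimited:

CREATE TABLE MYTABLE
  ("ALL" INTEGER,
   NAME  CHAR(8));

SELECT "ALL"
  FROM MYTABLE;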
Table 203. SQL reserved words

ADD, AFTER, ALL, ALLOCATE, ALLOW, ALTER, AND, ANY, AS, ASENSITIVE (2),
ASSOCIATE, ASUTIME, AUDIT, AUX, AUXILIARY, BEFORE, BEGIN, BETWEEN, BUFFERPOOL, BY,
CALL, CAPTURE, CASCADED, CASE, CAST, CCSID, CHAR, CHARACTER, CHECK, CLOSE,
CLUSTER, COLLECTION, COLLID, COLUMN, COMMENT, COMMIT, CONCAT, CONDITION, CONNECT, CONNECTION,
CONSTRAINT, CONTAINS, CONTINUE, CREATE, CURRENT, CURRENT_DATE, CURRENT_LC_CTYPE, CURRENT_PATH,
CURRENT_TIME, CURRENT_TIMESTAMP, CURSOR, DATA, DATABASE, DAY, DAYS, DBINFO, DECLARE, DEFAULT,
DELETE, DESCRIPTOR, DETERMINISTIC, DISALLOW, DISTINCT, DO, DOUBLE, DROP, DSSIZE, DYNAMIC,
EDITPROC, ELSE, ELSEIF, ENCODING, ENCRYPTION (2), END, ENDING (2), END-EXEC (1), ERASE, ESCAPE,
EXCEPT, EXCEPTION (2), EXECUTE, EXISTS, EXIT, EXPLAIN, EXTERNAL, FENCED, FETCH, FIELDPROC,
FINAL, FOR, FREE, FROM, FULL, FUNCTION, GENERATED, GET, GLOBAL, GO,
GOTO, GRANT, GROUP, HANDLER, HAVING, HOLD (2), HOUR, HOURS, IF, IMMEDIATE,
IN, INCLUSIVE (2), INDEX, INHERIT, INNER, INOUT, INSENSITIVE, INSERT, INTO, IS,
ISOBID, ITERATE (2), JAR, JOIN, KEY, LABEL, LANGUAGE, LC_CTYPE, LEAVE, LEFT,
LIKE, LOCAL, LOCALE, LOCATOR, LOCATORS, LOCK, LOCKMAX, LOCKSIZE, LONG, LOOP,
MAINTAINED (2), MATERIALIZED (2), MICROSECOND, MICROSECONDS, MINUTE, MINUTES, MODIFIES, MONTH, MONTHS, NEXTVAL (2),
NO, NONE (2), NOT, NULL, NULLS, NUMPARTS, OBID, OF, ON, OPEN,
OPTIMIZATION, OPTIMIZE, OR, ORDER, OUT, OUTER, PACKAGE, PARAMETER, PART, PADDED (2),
PARTITION (2), PARTITIONED (2), PARTITIONING (2), PATH, PIECESIZE, PLAN, PRECISION, PREPARE, PREVVAL (2), PRIQTY,
PRIVILEGES, PROCEDURE, PROGRAM, PSID, QUERY (2), QUERYNO, READS, REFERENCES, REFRESH (2), RESIGNAL (2),
RELEASE, RENAME, REPEAT, RESTRICT, RESULT, RESULT_SET_LOCATOR, RETURN, RETURNS, REVOKE, RIGHT,
ROLLBACK, ROWSET (2), RUN, SAVEPOINT, SCHEMA, SCRATCHPAD, SECOND, SECONDS, SECQTY (2), SECURITY (2),
SEQUENCE (2), SELECT, SENSITIVE, SET, SIGNAL (2), SIMPLE, SOME, SOURCE, SPECIFIC, STANDARD,
STATIC, STAY, STOGROUP, STORES, STYLE, SUMMARY (2), SYNONYM, SYSFUN, SYSIBM, SYSPROC,
SYSTEM, TABLE, TABLESPACE, THEN, TO, TRIGGER, UNDO, UNION, UNIQUE, UNTIL,
UPDATE, USER, USING, VALIDPROC, VALUE (2), VALUES, VARIABLE (2), VARIANT, VCAT, VIEW,
VOLATILE (2), VOLUMES, WHEN, WHENEVER, WHERE, WHILE, WITH, WLM, XMLELEMENT (2), YEAR, YEARS

Notes:
1. COBOL only.
2. New reserved word for Version 8.
IBM SQL has additional reserved words that DB2 UDB for z/OS does not enforce. Therefore, we suggest that you do not use these additional reserved words as ordinary identifiers in names that have a continuing use. See IBM DB2 Universal Database SQL Reference for Cross-Platform Development for a list of the words.
Table 204. Actions allowed on SQL statements in DB2 UDB for z/OS (continued)

For each SQL statement, the table indicates whether the statement is executable, whether it can be interactively or dynamically prepared, whether it is processed by the requesting system or by the server, and whether it is recognized by the precompiler. The statements that are covered in this portion of the table are: ASSOCIATE LOCATORS, CONNECT, CREATE, DECLARE CURSOR, DECLARE GLOBAL TEMPORARY TABLE, DECLARE STATEMENT, DECLARE TABLE, DELETE, DESCRIBE prepared statement or table, DESCRIBE CURSOR, DESCRIBE INPUT, DESCRIBE PROCEDURE, DROP, END DECLARE SECTION, EXECUTE, EXECUTE IMMEDIATE, EXPLAIN, FETCH, FREE LOCATOR, HOLD LOCATOR, INCLUDE, INSERT, LABEL, LOCK TABLE, OPEN, PREPARE, REFRESH TABLE, RELEASE connection, RELEASE SAVEPOINT, RENAME, REVOKE, ROLLBACK, SAVEPOINT, SELECT INTO, SET CONNECTION, SET CURRENT APPLICATION ENCODING SCHEME, SET CURRENT DEGREE, SET CURRENT LC_CTYPE, SET host-variable assignments from the CURRENT APPLICATION ENCODING SCHEME, CURRENT DATE, CURRENT DEGREE, CURRENT MEMBER, CURRENT PACKAGESET, CURRENT PATH, CURRENT QUERY OPTIMIZATION LEVEL, CURRENT SERVER, CURRENT SQLID, CURRENT TIME, CURRENT TIMESTAMP, and CURRENT TIMEZONE special registers, SET PATH, SET SCHEMA, SET transition-variable assignments from the CURRENT DATE, CURRENT DEGREE, CURRENT PATH, CURRENT QUERY OPTIMIZATION LEVEL, CURRENT SQLID, CURRENT TIME, CURRENT TIMESTAMP, and CURRENT TIMEZONE special registers, SIGNAL SQLSTATE, UPDATE, and VALUES.
Notes:
1. The statement can be dynamically prepared. It cannot be issued dynamically.
2. The statement can be dynamically prepared only if DYNAMICRULES run behavior is implicitly or explicitly specified.
3. The statement can be dynamically prepared, but only from an ODBC or CLI driver that supports dynamic CALL statements.
4. The requesting system processes the PREPARE statement when the statement being prepared is ALLOCATE CURSOR or ASSOCIATE LOCATORS.
5. The value to which special register CURRENT SQLID is set is used as the SQL authorization ID and the implicit qualifier for dynamic SQL statements only when DYNAMICRULES run behavior is in effect. The CURRENT SQLID value is ignored for the other DYNAMICRULES behaviors.
6. This statement can be used only in the triggered action of a trigger.
7. Local special registers can be referenced in a VALUES INTO statement if it results in the assignment of a single host-variable, not if it results in setting more than one value.
8. Some processing also occurs at the requester.
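As a brief illustration of note 7 (the host variable name is hypothetical), a VALUES INTO statement that assigns the value of a single special register to one host variable is allowed:

EXEC SQL VALUES (CURRENT SQLID) INTO :HVSQLID;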
Table 205. SQL statements in external user-defined functions and stored procedures

For each SQL statement, the table indicates whether the statement can be executed under each level of SQL data access: NO SQL, CONTAINS SQL, READS SQL DATA, and MODIFIES SQL DATA. The statements that are covered in this portion of the table include: CONNECT, CREATE, DECLARE CURSOR, DECLARE GLOBAL TEMPORARY TABLE, DECLARE STATEMENT, DECLARE TABLE, DELETE, DESCRIBE, DESCRIBE CURSOR, DESCRIBE INPUT, DESCRIBE PROCEDURE, DROP, END DECLARE SECTION, EXECUTE, EXECUTE IMMEDIATE, EXPLAIN, FETCH, FREE LOCATOR, GET DIAGNOSTICS, GRANT, HOLD LOCATOR, INCLUDE, INSERT, LABEL, LOCK TABLE, OPEN, and PREPARE.
Table 205. SQL statements in external user-defined functions and stored procedures (continued)

This portion of the table covers the statements SET special register, SET transition-variable assignment, SIGNAL SQLSTATE, UPDATE, VALUES, VALUES INTO, and WHENEVER.

Notes:
1. Although the NO SQL option implies that no SQL statements can be specified, non-executable statements are not restricted.
2. The stored procedure that is called must have the same or more restrictive level of SQL data access than the current level in effect. For example, a routine defined as MODIFIES SQL DATA can call a stored procedure defined as MODIFIES SQL DATA, READS SQL DATA, or CONTAINS SQL. A routine defined as CONTAINS SQL can only call a procedure defined as CONTAINS SQL.
3. The COMMIT statement cannot be executed in a user-defined function. The COMMIT statement cannot be executed in a stored procedure if the procedure is in the calling chain of a user-defined function or trigger.
4. The statement specified for the EXECUTE statement must be a statement that is allowed for the particular level of SQL data access in effect. For example, if the level in effect is READS SQL DATA, the statement must not be an INSERT, UPDATE, or DELETE.
5. The statement is supported only if it does not contain a subquery or query-expression.
6. RELEASE SAVEPOINT, SAVEPOINT, and ROLLBACK (with the TO SAVEPOINT clause) cannot be executed from a user-defined function.
7. If the ROLLBACK statement (without the TO SAVEPOINT clause) is executed in a user-defined function, an error is returned to the calling program, and the application is placed in a must-rollback state.
8. The ROLLBACK statement (without the TO SAVEPOINT clause) cannot be executed in a stored procedure if the procedure is in the calling chain of a user-defined function or trigger.
9. If the SELECT statement contains an INSERT in its FROM clause (INSERT within SELECT), the SQL level of access must be MODIFIES SQL DATA.
Table 206. Valid SQL statements in an SQL procedure body

For each SQL statement, the table indicates whether the statement can be the only statement in the procedure body and whether it can be nested in a compound statement. The statements that are covered in this portion of the table include: ALLOCATE CURSOR, ALTER DATABASE, ALTER FUNCTION, ALTER INDEX, ALTER PROCEDURE, ALTER VIEW, ASSOCIATE LOCATORS, BEGIN DECLARE SECTION, CALL, CLOSE, COMMENT, COMMIT, CONNECT, CREATE ALIAS, CREATE DATABASE, CREATE DISTINCT TYPE, CREATE FUNCTION, CREATE SEQUENCE, CREATE STOGROUP, CREATE SYNONYM, CREATE TABLE, CREATE TABLESPACE, CREATE TRIGGER, CREATE VIEW, DECLARE CURSOR, DECLARE GLOBAL TEMPORARY TABLE, DECLARE STATEMENT, DECLARE TABLE, DELETE, DESCRIBE prepared statement or table, DESCRIBE CURSOR, DESCRIBE INPUT, DESCRIBE PROCEDURE, DROP, END DECLARE SECTION, EXECUTE, EXECUTE IMMEDIATE, EXPLAIN, FETCH, FREE LOCATOR, GET DIAGNOSTICS, GRANT, HOLD LOCATOR, INCLUDE, INSERT, LABEL, LOCK TABLE, OPEN, PREPARE FROM, SET transition-variable assignment, SIGNAL SQLSTATE, UPDATE, VALUES, VALUES INTO, and WHENEVER.
Notes:
1. The COMMIT statement and the ROLLBACK statement (without the TO SAVEPOINT clause) cannot be executed in a stored procedure if the procedure is in the calling chain of a user-defined function or trigger.
2. CREATE FUNCTION with LANGUAGE SQL (specified either implicitly or explicitly) and CREATE PROCEDURE with LANGUAGE SQL are not allowed within the body of an SQL procedure.
3. SET host-variable assignment, SET transition-variable assignment, and SET special register are the SQL SET statements, not the SQL procedure assignment statement.
4. The SET SCHEMA statement cannot be executed within an SQL procedure.
Table 207. Program preparation options for packages

For each generic program preparation option, the table shows whether it is a bind (B) or precompile (P) option, the equivalent option for a requesting DB2, and whether the DB2 server supports it. The options that are covered in this portion of the table include: package replacement (ACTION(ADD), ACTION(REPLACE), and ACTION(REPLACE REPLVER(version-id))), statement string delimiter (APOSTSQL/QUOTESQL), DRDA access with SQL CONNECT Type 1 and Type 2 (CONNECT(1) and CONNECT(2)), block protocol (CURRENTDATA(YES) and CURRENTDATA(NO)), name of remote database (CURRENTSERVER(location name)), date format of statement (DATE), protocol for remote access (DBPROTOCOL), maximum decimal precision (DEC(15) and DEC(31)), deferred preparation of dynamic SQL (DEFER(PREPARE) and NODEFER(PREPARE)), dynamic SQL authorization (DYNAMICRULES), encoding scheme for static SQL statements (ENCODING), the explain option (EXPLAIN), immediate write of group bufferpool-dependent page sets or partitions in a data sharing environment (IMMEDWRITE), package isolation level (ISOLATION(CS), ISOLATION(RR), ISOLATION(RS), and ISOLATION(UR)), keeping prepared statements after commit points (KEEPDYNAMIC), consistency token (LEVEL), package name (MEMBER), package owner (OWNER), schema name list for user-defined functions, distinct types, and stored procedures (PATH), statement decimal delimiter (PERIOD/COMMA), default qualifier (QUALIFIER), access path hints (OPTHINT), lock release option (RELEASE), reoptimization at run time or at only the first run or open time (REOPT(ALWAYS), REOPT(NONE), and REOPT(ONCE)), creation control (SQLERROR(CONTINUE) and SQLERROR(NOPACKAGE)), time format of statement (TIME), existence checking (VALIDATE(BIND) and VALIDATE(RUN)), package version (VERSION), default character subtype, default character CCSID, package label, and privilege inheritance.
v The MQ XML stored procedures

All of the MQ XML stored procedures have been deprecated. These stored procedures perform the following functions:
Table 208. MQ XML stored procedures

DXXMQINSERT
  Returns a message that contains an XML document from an MQ message queue, decomposes the document, and stores the data in DB2 tables that are specified by an enabled XML collection. For information, see "Deprecated: Store an XML document from an MQ message queue in DB2 tables (DXXMQINSERT)" on page 1146.

DXXMQSHRED
  Returns a message that contains an XML document from an MQ message queue, decomposes the document, and stores the data in DB2 tables that are specified in a document access definition (DAD) file. DXXMQSHRED does not require an enabled XML collection. For information, see "Deprecated: Store an XML document from an MQ message queue in DB2 tables (DXXMQSHRED)" on page 1148.

DXXMQINSERTCLOB
  Returns a message that contains an XML document from an MQ message queue, decomposes the document, and stores the data in DB2 tables that are specified by an enabled XML collection. DXXMQINSERTCLOB is intended for an XML document with a length of up to 1MB. For information, see "Deprecated: Store a large XML document from an MQ message queue in DB2 tables (DXXMQINSERTCLOB)" on page 1151.

DXXMQSHREDCLOB
  Returns a message that contains an XML document from an MQ message queue, decomposes the document, and stores the data in DB2 tables that are specified in a document access definition (DAD) file. DXXMQSHREDCLOB does not require an enabled XML collection. DXXMQSHREDCLOB is intended for an XML document with a length of up to 1MB. For information, see "Deprecated: Store a large XML document from an MQ message queue in DB2 tables (DXXMQSHREDCLOB)" on page 1153.

DXXMQINSERTALL
  Returns messages that contain XML documents from an MQ message queue, decomposes the documents, and stores the data in DB2 tables that are specified by an enabled XML collection. DXXMQINSERTALL is intended for XML documents with a length of up to 3KB. For information, see "Deprecated: Store XML documents from an MQ message queue in DB2 tables (DXXMQINSERTALL)" on page 1156.

DXXMQSHREDALL
  Returns messages that contain XML documents from an MQ message queue, decomposes the documents, and stores the data in DB2 tables that are specified in a document access definition (DAD) file. DXXMQSHREDALL does not require an enabled XML collection. DXXMQSHREDALL is intended for XML documents with a length of up to 3KB. For information, see "Deprecated: Store XML documents from an MQ message queue in DB2 tables (DXXMQSHREDALL)" on page 1158.

DXXMQSHREDALLCLOB
  Returns messages that contain XML documents from an MQ message queue, decomposes the documents, and stores the data in DB2 tables that are specified in a document access definition (DAD) file. DXXMQSHREDALLCLOB does not require an enabled XML collection. DXXMQSHREDALLCLOB is intended for XML documents with a length of up to 1MB. For information, see "Deprecated: Store large XML documents from an MQ message queue in DB2 tables (DXXMQSHREDALLCLOB)" on page 1161.
Table 208. MQ XML stored procedures (continued)

DXXMQINSERTALLCLOB
  Returns messages that contain XML documents from an MQ message queue, decomposes the documents, and stores the data in DB2 tables that are specified by an enabled XML collection. DXXMQINSERTALLCLOB is intended for XML documents with a length of up to 1MB. For information, see "Deprecated: Store large XML documents from an MQ message queue in DB2 tables (DXXMQINSERTALLCLOB)" on page 1163.

DXXMQGEN
  Constructs XML documents from data that is stored in DB2 tables that are specified in a document access definition (DAD) file, and sends the XML documents to an MQ message queue. DXXMQGEN is intended for XML documents with a length of up to 3KB. For information, see "Deprecated: Send XML documents to an MQ message queue (DXXMQGEN)" on page 1165.

DXXMQRETRIEVE
  Constructs XML documents from data that is stored in DB2 tables that are specified in an enabled XML collection, and sends the XML documents to an MQ message queue. DXXMQRETRIEVE is intended for XML documents with a length of up to 3KB.

DXXMQGENCLOB
  Constructs XML documents from data that is stored in DB2 tables that are specified in a document access definition (DAD) file, and sends the XML documents to an MQ message queue. DXXMQGENCLOB is intended for XML documents with a length of up to 32KB. For information, see "Deprecated: Send large XML documents to an MQ message queue (DXXMQGENCLOB)" on page 1173.

DXXMQRETRIEVECLOB
  Constructs XML documents from data that is stored in DB2 tables that are specified in an enabled XML collection, and sends the XML documents to an MQ message queue. DXXMQRETRIEVECLOB is intended for XML documents with a length of up to 32KB.
WLM_REFRESH uses an extended MCS console to monitor the operating system response to a WLM environment refresh request. The privilege to create an extended MCS console is controlled by the resource profile MVS.MCSOPER.* in the OPERCMDS class. If the MVS.MCSOPER.* profile exists, or if the specific profile MVS.MCSOPER.DSNTWR exists, the task ID that is associated with the WLM environment in which WLM_REFRESH runs must have READ access to it. If the MVS.VARY.* profile exists, or if the specific profile MVS.VARY.WLM exists, the task ID that is associated with the WLM environment in which WLM_REFRESH runs must have CONTROL access to it. See Part 3 (Volume 1) DB2 Administration Guide for information about authorizing access to SAF resource profiles. See z/OS MVS Planning: Operations for more information about permitting access to the extended MCS console.
WLM_REFRESH is called with the following parameters:

CALL WLM_REFRESH ( WLM-environment, ssid, status-message, return-code )
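A minimal sketch of a call from an application program follows; it assumes host variables WLMENV and SSID that contain the WLM environment name and the DB2 subsystem ID, and host variables STATMSG and RC for the output parameters:

EXEC SQL CALL SYSPROC.WLM_REFRESH(:WLMENV, :SSID, :STATMSG, :RC);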
Possible return codes include 8, 990, and 993. The conditions that the return code and the status message can indicate include the following:
v One of the following conditions exists:
  - The SAF resource profile ssid.WLM_REFRESH.wlm-environment is not defined in resource class DSNR.
  - The SQL authorization ID of the process (CURRENT SQLID) is not defined to SAF.
  - The wait time to obtain a response from z/OS was exceeded.
v The SQL authorization ID of the process (CURRENT SQLID) is not authorized to refresh the WLM environment.
v DSNTWR received an unexpected SQLCODE while determining the current SQLID.
v One of the following conditions exists:
  - The WLM-environment parameter value is null, blank, or contains invalid characters.
  - The ssid value contains invalid characters.
v The extended MCS console was not activated within the number of seconds indicated by message DSNT546I.
v DSNTWR is not running as an authorized program.
v DSNTWR could not activate an extended MCS console. See message DSNT533I for more information.
v DSNTWR made an unsuccessful request for a message from its extended MCS console. See message DSNT533I for more information.
v The extended MCS console for DSNTWR posted an alert. See message DSNT534I for more information.
v The operating system denied an authorized WLM_REFRESH request. See message DSNT545I for more information.
For a complete example of setting up access to an SAF profile and calling WLM_REFRESH, see job DSNTEJ6W, which is in data set DSN810.SDSNSAMP.
The DSNACICS input parameters require knowledge of various CICS resource definitions with which the workstation programmer might not be familiar. For this reason, DSNACICS invokes the DSNACICX user exit routine. The system programmer can write a version of DSNACICX that checks and overrides the parameters that the DSNACICS caller passes. If no user version of DSNACICX is provided, DSNACICS invokes the default version of DSNACICX, which does not modify any parameters.
DSNACICS is called with the following parameters:

CALL DSNACICS ( parm-level, pgm-name, CICS-applid, CICS-level, connect-type, netname, mirror-trans, COMMAREA, COMMAREA-total-len, sync-opts, return-code, msg-area )

NULL can be specified in place of each input parameter.
This is an input parameter of type INTEGER.

connect-type
  Specifies whether the CICS connection is generic or specific. Possible values are GENERIC or SPECIFIC. This is an input parameter of type CHAR(8).

netname
  If the value of connect-type is SPECIFIC, specifies the name of the specific connection that is to be used. This value is ignored if the value of connect-type is GENERIC. This is an input parameter of type CHAR(8).

mirror-trans
  Specifies the name of the CICS mirror transaction to invoke. This mirror transaction calls the CICS server program that is specified in the pgm-name parameter. mirror-trans must be defined to the CICS server region, and the CICS resource definition for mirror-trans must specify DFHMIRS as the program that is associated with the transaction. If this parameter contains blanks, DSNACICS passes a mirror transaction parameter value of null to the CICS EXCI interface. This allows an installation to override the transaction name in various CICS user-replaceable modules. If a CICS user exit routine does not specify a value for the mirror transaction name, CICS invokes CICS-supplied default mirror transaction CSMI.
This is an input parameter of type CHAR(4).

COMMAREA
  Specifies the communication area (COMMAREA) that is used to pass data between the DSNACICS caller and the CICS server program that DSNACICS calls. This is an input/output parameter of type VARCHAR(32704). In the length field of this parameter, specify the number of bytes that DSNACICS sends to the CICS server program.

commarea-total-len
  Specifies the total length of the COMMAREA that the server program needs. This is an input parameter of type INTEGER. This length must be greater than or equal to the value that you specify in the length field of the COMMAREA parameter and less than or equal to 32704. When the CICS server program completes, DSNACICS passes the server program's entire COMMAREA, which is commarea-total-len bytes in length, to the stored procedure caller.

sync-opts
  Specifies whether the calling program controls resource recovery, using two-phase commit protocols that are supported by RRS. Possible values are:
  1  The client program controls commit processing. The CICS server region does not perform a syncpoint when the server program returns control to CICS. Also, the server program cannot take any explicit syncpoints. Doing so causes the server program to abnormally terminate.
  2  The target CICS server region takes a syncpoint on successful completion of the server program. If this value is specified, the server program can take explicit syncpoints.
  When CICS has been set up to be an RRS resource manager, the client application can control commit processing using SQL COMMIT requests. DB2 UDB for z/OS ensures that CICS is notified to commit any resources that the CICS server program modifies during two-phase commit processing. When CICS has not been set up to be an RRS resource manager, CICS forces syncpoint processing of all CICS resources at completion of the CICS server program. This commit processing is not coordinated with the commit processing of the client program. This option is ignored when CICS-level is 1. This is an input parameter of type INTEGER.

return-code
  Return code from the stored procedure. Possible values are:
  0   The call completed successfully.
  12  The request to run the CICS server program failed. The msg-area parameter contains messages that describe the error.
  This is an output parameter of type INTEGER.

msg-area
  Contains messages if an error occurs during stored procedure execution. The first messages in this area are generated by the stored procedure. Messages that are generated by CICS or the DSNACICX user exit routine might follow the first messages. The messages appear as a series of concatenated, viewable text strings. This is an output parameter of type VARCHAR(500).
Table 210 shows the contents of the DSNACICX exit parameter list, XPL. Member DSNDXPL in data set prefix.SDSNMACS contains an assembler language mapping macro for XPL. Sample exit routine DSNASCIO in data set prefix.SDSNSAMP includes a COBOL mapping macro for XPL.
Table 210. Contents of the XPL exit parameter list

XPL_EYEC (hex offset 0, Character, 4 bytes): Eye-catcher 'XPL '.
XPL_LEN (hex offset 4, Character, 4 bytes): Length of the exit parameter list.
XPL_LEVEL (hex offset 8, 4-byte integer): Level of the parameter list.
XPL_PGMNAME (hex offset C, Character, 8 bytes): Name of the CICS server program.
XPL_CICSAPPLID (hex offset 14, Character, 8 bytes): CICS VTAM applid.
XPL_CICSLEVEL (hex offset 1C, 4-byte integer): Level of CICS code.
XPL_CONNECTTYPE (hex offset 20, Character, 8 bytes): Specific or generic connection to CICS.
XPL_NETNAME (hex offset 28, Character, 8 bytes): Name of the specific connection to CICS. Corresponding DSNACICS parameter: netname.
XPL_MIRRORTRAN (hex offset 30, Character, 8 bytes): Name of the mirror transaction that invokes the CICS server program. Corresponding DSNACICS parameter: mirror-trans.
XPL_COMMAREAPTR (hex offset 38): Address of the COMMAREA. See note 1.
XPL_COMMINLEN (hex offset 3C): Length of the COMMAREA that is passed to the server program. See note 2.
XPL_COMMTOTLEN (hex offset 40, 4-byte integer): Total length of the COMMAREA that is returned to the caller. Corresponding DSNACICS parameter: commarea-total-len.
Fields at hex offsets 44, 48, 4C, and 50: Syncpoint control option, return code from the exit routine, length of the output message area, and output message area.
Notes:
1. The area that this field points to is specified by DSNACICS parameter COMMAREA. This area does not include the length bytes.
2. This is the same value that the DSNACICS caller specifies in the length bytes of the COMMAREA parameter.
3. Although the total length of msg-area is 500 bytes, DSNACICX can use only 256 bytes of that area.
 /**************************************************************/
 PARM_LEVEL = 1;
 IND_PARM_LEVEL = 0;
 PGM_NAME = 'CICSPGM1';
 IND_PGM_NAME = 0;
 MIRROR_TRANS = 'MIRT';
 IND_MIRROR_TRANS = 0;
 P1 = ADDR(COMMAREA_STG);
 COMMAREA_INPUT = 'THIS IS THE INPUT FOR CICSPGM1';
 COMMAREA_OUTPUT = ' ';
 COMMAREA_LEN = LENGTH(COMMAREA_INPUT);
 IND_COMMAREA = 0;
 COMMAREA_TOTAL_LEN = COMMAREA_LEN + LENGTH(COMMAREA_OUTPUT);
 IND_COMMAREA_TOTAL_LEN = 0;
 SYNC_OPTS = 1;
 IND_SYNC_OPTS = 0;
 IND_CICS_APPLID = -1;
 IND_CICS_LEVEL = -1;
 IND_CONNECT_TYPE = -1;
 IND_NETNAME = -1;
 /*****************************************/
 /* INITIALIZE OUTPUT PARAMETERS TO NULL. */
 /*****************************************/
 IND_RETCODE = -1;
 IND_MSG_AREA = -1;
 /*****************************************/
 /* CALL DSNACICS TO INVOKE CICSPGM1.     */
 /*****************************************/
 EXEC SQL CALL SYSPROC.DSNACICS(:PARM_LEVEL :IND_PARM_LEVEL,
                                :PGM_NAME :IND_PGM_NAME,
                                :CICS_APPLID :IND_CICS_APPLID,
                                :CICS_LEVEL :IND_CICS_LEVEL,
                                :CONNECT_TYPE :IND_CONNECT_TYPE,
                                :NETNAME :IND_NETNAME,
                                :MIRROR_TRANS :IND_MIRROR_TRANS,
                                :COMMAREA_STG :IND_COMMAREA,
                                :COMMAREA_TOTAL_LEN :IND_COMMAREA_TOTAL_LEN,
                                :SYNC_OPTS :IND_SYNC_OPTS,
                                :RET_CODE :IND_RETCODE,
                                :MSG_AREA :IND_MSG_AREA);
DSNACICS output
DSNACICS places the return code from DSNACICS execution in the return-code parameter. If the value of the return code is non-zero, DSNACICS puts its own error messages and any error messages that are generated by CICS and the DSNACICX user exit routine in the msg-area parameter. The COMMAREA parameter contains the COMMAREA for the CICS server program that DSNACICS calls. The COMMAREA parameter has a VARCHAR type. Therefore, if the server program puts data other than character data in the COMMAREA, that data can become corrupted by code page translation as it is passed to the caller. To avoid code page translation, you can change the COMMAREA parameter in the CREATE PROCEDURE statement for DSNACICS to VARCHAR(32704) FOR BIT DATA. However, if you do so, the client program might need to do code page translation on any character data in the COMMAREA to make it readable.
DSNACICS restrictions
Because DSNACICS uses the distributed program link (DPL) function to invoke CICS server programs, server programs that you invoke through DSNACICS can contain only the CICS API commands that the DPL function supports. The list of supported commands is documented in CICS Transaction Server for z/OS Application Programming Reference.
DSNACICS debugging
If you receive errors when you call DSNACICS, ask your system administrator to add a DSNDUMP DD statement in the startup procedure for the address space in which DSNACICS runs. The DSNDUMP DD statement causes DB2 to generate an SVC dump whenever DSNACICS issues an error message.
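For illustration, such a statement might look like the following line, added to the JCL of the address space startup procedure:

//DSNDUMP  DD  SYSOUT=*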
DSNAIMS is called with parameters that include the following (the names of some parameters were lost in this copy and are shown as ellipses):

CALL SYSPROC.DSNAIMS ( dsnaims-function, ..., xcf-group-name, xcf-ims-name, racf-userid, racf-groupid, ..., ims-modname, ..., ims-data-out, ..., otma-tpipe-name, ... )
XRF or RSR feature, you can obtain the XCF member name from the USERVAR parameter in IMS PROCLIB member DFSPBxxx.

racf-userid
  Specifies the RACF user ID that is used for IMS to perform the transaction or command authorization checking. This parameter is required if DSNAIMS is running APF-authorized. If DSNAIMS is running unauthorized, this parameter is ignored, and the EXTERNAL SECURITY setting for the DSNAIMS stored procedure definition determines the user ID that is used by IMS.

racf-groupid
  Specifies the RACF group ID that is used for IMS to perform the transaction or command authorization checking. racf-groupid is used for stored procedures that are APF-authorized. It is ignored for other stored procedures.

ims-lterm
  Specifies an IMS LTERM name that is used to override the LTERM name in the I/O program communication block of the IMS application program. This field is used as an input and an output field:
  v For SENDRECV, the value is sent to IMS on input and can be updated by IMS on output.
  v For SEND, the parameter is IN only.
  v For RECEIVE, the parameter is OUT only.
  An empty or NULL value tells IMS to ignore the parameter.

ims-modname
  Specifies the formatting map name that is used by the server to map output data streams, such as 3270 streams. Although this invocation does not have IMS MFS support, the input MODNAME can be used as the map name to define the output data stream. This name is an 8-byte message output descriptor name that is placed in the I/O program communication block. When the message is inserted, IMS places this name in the message prefix with the map name in the program communication block of the IMS application program. For SENDRECV, the value is sent to IMS on input, and can be updated on output. For SEND, the parameter is IN only. For RECEIVE, it is OUT only. IMS ignores the parameter when it is an empty or NULL value.

ims-tran-name
  Specifies the name of an IMS transaction or command that is sent to IMS. If the IMS command is longer than eight characters, specify the first eight characters (including the / of the command). Specify the remaining characters of the command in the ims-data-in parameter. If you use an empty or NULL value, you must specify the full transaction name or command in the ims-data-in parameter.

ims-data-in
  Specifies the data that is sent to IMS. This parameter is required in each of the following cases:
  v Input data is required for IMS
  v No transaction name or command is passed in ims-tran-name
  v The command is longer than eight characters
  This parameter is ignored for RECEIVE functions.
ims-data-out
  Data that is returned after successful completion of the transaction. This parameter is required for SENDRECV and RECEIVE functions. The parameter is ignored for SEND functions.

otma-tpipe-name
  Specifies an 8-byte user-defined communication session name that IMS uses for the input and output data for the transaction or the command in a SEND or a RECEIVE function. If the otma-tpipe-name parameter is used for a SEND function to generate an IMS output message, the same otma-tpipe-name must be used to retrieve output data for the subsequent RECEIVE function.

otma-dru-name
  Specifies the name of an IMS user-defined exit routine, the OTMA destination resolution user exit routine, if it is used. This IMS exit routine can format part of the output prefix and can determine the output destination for an IMS ALT_PCB output. If an empty or null value is passed, IMS ignores this parameter.

user-data-in
  This optional parameter contains any data that is to be included in the IMS message prefix, so that the data can be accessed by IMS OTMA user exit routines (DFSYIOE0 and DFSYDRU0) and can be tracked by IMS log records. IMS applications that run in dependent regions do not access this data. The specified user data is not included in the output message prefix. You can use this parameter to store input and output correlator tokens or other information. This parameter is ignored for RECEIVE functions.

user-data-out
  On output, this field contains the user-data-in data in the IMS output prefix. IMS user exit routines (DFSYIOE0 and DFSYDRU0) can also create user-data-out for SENDRECV and RECEIVE functions. The parameter is not updated for SEND functions.

status-message
  Indicates any error message that is returned from the transaction or command, OTMA, RRS, or DSNAIMS.

return-code
  Indicates the return code that is returned for the transaction or command, OTMA, RRS, or DSNAIMS.
CALL SYSPROC.DSNAIMS("SEND", "N", "IMS7GRP", "IMS7TMEM", "IMSCLNM", "", "", "", "", "", "IVTNO DISPLAY LAST1 "", ims_data_out, "DSNAPIPE", "", "", user_out, error_message, rc)
Environment
DSNAEXP must run in a WLM-established stored procedure address space. Before you can invoke DSNAEXP, table sqlid.PLAN_TABLE must exist. sqlid is the value that you specify for the sqlid input parameter when you call DSNAEXP. Job DSNTESC in DSN8810.SDSNSAMP contains a sample CREATE TABLE statement for the PLAN_TABLE.
Authorization required
To execute the CALL DSN8.DSNAEXP statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses:
v The EXECUTE privilege on the package for DSNAEXP
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority
In addition:
v The SQL authorization ID of the process in which DSNAEXP is called must have the authority to execute SET CURRENT SQLID=sqlid.
v The SQL authorization ID of the process must also have one of the following characteristics:
  - Be the owner of a plan table named PLAN_TABLE
  - Have an alias on a plan table named owner.PLAN_TABLE and have SELECT and INSERT privileges on the table
CALL DSNAEXP ( sqlid, queryno, sql-statement, parse, qualifier, sqlcode, sqlstate, error-message )
names in the input SQL statement. Valid values are 'Y' and 'N'. If the value of parse is 'Y', qualifier must contain a valid SQL qualifier name. If sql-statement contains an INSERT within a SELECT or a common table expression, you need to disable the parsing functionality and add the qualifiers manually. parse is an input parameter of type CHAR(1).

qualifier
  Specifies the qualifier that DSNAEXP adds to unqualified table or view names in the input SQL statement. If the value of parse is 'N', qualifier is ignored. If the statement on which EXPLAIN is run contains an INSERT within a SELECT or a common table expression, parse must be 'N', and table and view qualifiers must be explicitly specified. qualifier is an input parameter of type CHAR(8).

sqlcode
  Contains the SQLCODE from execution of the EXPLAIN statement. sqlcode is an output parameter of type INTEGER.

sqlstate
  Contains the SQLSTATE from execution of the EXPLAIN statement. sqlstate is an output parameter of type CHAR(5).

error-message
  Contains information about DSNAEXP execution. If the SQLCODE from execution of the EXPLAIN statement is not 0, error-message contains the error message for the SQLCODE. error-message is an output parameter of type VARCHAR(960).
 /* Initialize the output parameters */
 hvsqlcode = 0;
 for (i = 0; i < 5; i++) hvsqlstate[i] = '0';
 hvsqlstate[5] = '\0';
 hvmsg.hvmsg_len = 0;
 for (i = 0; i < 960; i++) hvmsg.hvmsg_text[i] = ' ';
 hvmsg.hvmsg_text[960] = '\0';
 /* Call DSNAEXP to do EXPLAIN and put output in ADMF001.PLAN_TABLE */
 EXEC SQL
   CALL DSN8.DSNAEXP(:hvsqlid,
                     :hvqueryno,
                     :hvsql_stmt,
                     :hvparse,
                     :hvqualifier,
                     :hvsqlcode,
                     :hvsqlstate,
                     :hvmsg);
DSNAEXP output
If DSNAEXP executes successfully, sqlid.PLAN_TABLE contains the EXPLAIN output. A user with SELECT authority on sqlid.PLAN_TABLE can obtain the results of the EXPLAIN that was executed by DSNAEXP by executing this query:
SELECT * FROM sqlid.PLAN_TABLE WHERE QUERYNO=queryno;
If DSNAEXP does not execute successfully, sqlcode, sqlstate, and error-message contain error information.
Deprecated: Store an XML document from an MQ message queue in DB2 tables (DXXMQINSERT)
Restriction: DXXMQINSERT has been deprecated. The DXXMQINSERT stored procedure returns a message that contains an XML document from an MQ message queue, decomposes the document, and stores the data in DB2 tables that are specified by an enabled XML collection. Use DXXMQINSERT for an XML document with a length of up to 3KB. There are two versions of DXXMQINSERT: v A single-phase commit version, with schema name DMQXML1C. v A two-phase commit version, with schema name DMQXML2C.
CALL DMQXML1C.DXXMQINSERT ( service-name, policy-name, XML-collection-name, status )

DMQXML2C can be specified in place of DMQXML1C for the two-phase commit version. NULL can be specified in place of service-name or policy-name.
DB2 tables that are specified by enabled collection sales_ord. For a complete example of DXXMQINSERT invocation, see DSN8QXSI in DSN810.SDSNSAMP.
#include "dxx.h"
#include "dxxrc.h"
EXEC SQL INCLUDE SQLCA;
EXEC SQL BEGIN DECLARE SECTION;
char serviceName[48];     /* WebSphere MQ service name */
char policyName[48];      /* WebSphere MQ policy name */
char collectionName[30];  /* XML collection name */
char status[20];          /* Status of DXXMQINSERT call */
/* DXXMQINSERT is GENERAL WITH NULLS, so parameters need indicators */
short serviceName_ind;    /* Indicator var for serviceName */
short policyName_ind;     /* Indicator var for policyName */
short collectionName_ind; /* Indicator var for collectionName */
short status_ind;         /* Indicator var for status */
EXEC SQL END DECLARE SECTION;
/* Initialize status fields */
int dxx_rc=0;
int dxx_sql=0;
int dxx_mq=0;
/* Get the service name and policy name for the MQ message queue */
strcpy(serviceName,"DB2.DEFAULT.SERVICE");
strcpy(policyName,"DB2.DEFAULT.POLICY");
/* Get the XML collection name */
strcpy(collectionName,"sales_ord");
/* Initialize the output variable */
status[0] = '\0';
/* Set the indicators to 0 for the parameters that have non-null values */
collectionName_ind = 0;
serviceName_ind = 0;
policyName_ind = 0;
/* Initialize the indicator for the output parameter */
status_ind = -1;
/* Call the stored procedure */
EXEC SQL CALL DMQXML1C.DXXMQINSERT(:serviceName:serviceName_ind,
                                   :policyName:policyName_ind,
                                   :collectionName:collectionName_ind,
                                   :status:status_ind);
printf("SQLCODE from CALL: %d\n",sqlca.sqlcode);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQINSERT output
If DXXMQINSERT executes successfully, the mq-num-msgs field of the status parameter is set to 1, to indicate that a message was retrieved from the MQ message queue and inserted into DB2 tables. If DXXMQINSERT does not execute successfully, the contents of the status parameter indicate the problem.
Deprecated: Store an XML document from an MQ message queue in DB2 tables (DXXMQSHRED)
Restriction: DXXMQSHRED has been deprecated.
The DXXMQSHRED stored procedure returns a message that contains an XML document from an MQ message queue, decomposes the document, and stores the data in DB2 tables that are specified in a document access definition (DAD) file. DXXMQSHRED does not require an enabled XML collection. Use DXXMQSHRED for an XML document with a length of up to 3KB. There are two versions of DXXMQSHRED:
v A single-phase commit version, with schema name DMQXML1C.
v A two-phase commit version, with schema name DMQXML2C.

CALL DMQXML1C.DXXMQSHRED ( service-name, policy-name, DAD-file-name, status )

DMQXML2C can be specified in place of DMQXML1C for the two-phase commit version. NULL can be specified in place of service-name or policy-name.
policy-name is not listed in the DSNAMT repository file, or policy-name is not specified, DB2.DEFAULT.POLICY is used. policy-name is an input parameter of type VARCHAR(48). policy-name cannot be blank, a null string, or have trailing blanks.

DAD-file-name
  Specifies the name of the document access definition (DAD) file that maps the XML document to DB2 tables. DAD-file-name must be specified, and must be the name of a valid DAD file that exists on the system on which DXXMQSHRED runs. DAD-file-name is an input parameter of type VARCHAR(80).

status
  Contains information that indicates whether DXXMQSHRED ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
  v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
  v sqlcode is 0 if DXXMQSHRED ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQSHRED ran unsuccessfully.
  v mq-num-msgs is the number of messages that were successfully sent to the MQ message queue.
  status is an output parameter of type CHAR(20).
/* Call the stored procedure */
EXEC SQL CALL DMQXML1C.DXXMQSHRED(:serviceName:serviceName_ind,
                                  :policyName:policyName_ind,
                                  :dadFileName:dadFileName_ind,
                                  :status:status_ind);
printf("SQLCODE from CALL: %d\n",sqlca.sqlcode);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQSHRED output
If DXXMQSHRED executes successfully, the mq-num-msgs field of the status parameter is set to 1, to indicate that a message was retrieved from the MQ message queue and inserted into DB2 tables. If DXXMQSHRED does not execute successfully, the contents of the status parameter indicate the problem.
Deprecated: Store a large XML document from an MQ message queue in DB2 tables (DXXMQINSERTCLOB)
Restriction: DXXMQINSERTCLOB has been deprecated. The DXXMQINSERTCLOB stored procedure returns a message that contains an XML document from an MQ message queue, decomposes the document, and stores the data in DB2 tables that are specified by an enabled XML collection. Use DXXMQINSERTCLOB for an XML document with a length of up to 1MB. There are two versions of DXXMQINSERTCLOB: v A single-phase commit version, with schema name DMQXML1C. v A two-phase commit version, with schema name DMQXML2C.
CALL DMQXML1C.DXXMQINSERTCLOB ( service-name, policy-name, XML-collection-name, status )

DMQXML2C can be specified in place of DMQXML1C for the two-phase commit version. NULL can be specified in place of service-name or policy-name.
char policyName[48];      /* WebSphere MQ policy name */
char collectionName[30];  /* XML collection name */
char status[20];          /* Status of DXXMQINSERTCLOB call */
/* DXXMQINSERTCLOB is GENERAL WITH NULLS, so parameters need indicators */
short serviceName_ind;    /* Indicator var for serviceName */
short policyName_ind;     /* Indicator var for policyName */
short collectionName_ind; /* Indicator var for collectionName */
short status_ind;         /* Indicator var for status */
EXEC SQL END DECLARE SECTION;
/* Initialize status fields */
int dxx_rc=0;
int dxx_sql=0;
int dxx_mq=0;
/* Get the service name and policy name for the MQ message queue */
strcpy(serviceName,"DB2.DEFAULT.SERVICE");
strcpy(policyName,"DB2.DEFAULT.POLICY");
/* Get the XML collection name */
strcpy(collectionName,"sales_ord");
/* Initialize the output variable */
status[0] = '\0';
/* Set the indicators to 0 for the parameters that have non-null values */
collectionName_ind = 0;
serviceName_ind = 0;
policyName_ind = 0;
/* Initialize the indicator for the output parameter */
status_ind = -1;
/* Call the stored procedure */
EXEC SQL CALL DMQXML1C.DXXMQINSERTCLOB(:serviceName:serviceName_ind,
                                       :policyName:policyName_ind,
                                       :collectionName:collectionName_ind,
                                       :status:status_ind);
printf("SQLCODE from CALL: %d\n",sqlca.sqlcode);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQINSERTCLOB output
If DXXMQINSERTCLOB executes successfully, the mq-num-msgs field of the status parameter is set to 1, to indicate that a message was retrieved from the MQ message queue and inserted into DB2 tables. If DXXMQINSERTCLOB does not execute successfully, the contents of the status parameter indicate the problem.
Deprecated: Store a large XML document from an MQ message queue in DB2 tables (DXXMQSHREDCLOB)
Restriction: DXXMQSHREDCLOB has been deprecated. The DXXMQSHREDCLOB stored procedure returns a message that contains an XML document from an MQ message queue, decomposes the document, and stores the data in DB2 tables that are specified in a document access definition (DAD) file. DXXMQSHREDCLOB does not require an enabled XML collection. Use DXXMQSHREDCLOB for an XML document with a length of up to 1MB. There are two versions of DXXMQSHREDCLOB: v A single-phase commit version, with schema name DMQXML1C. v A two-phase commit version, with schema name DMQXML2C.
CALL DMQXML1C.DXXMQSHREDCLOB ( service-name, policy-name, DAD-file-name, status )

DMQXML2C can be specified in place of DMQXML1C for the two-phase commit version. NULL can be specified in place of service-name or policy-name.
status
  Contains information that indicates whether DXXMQSHREDCLOB ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
  v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
  v sqlcode is 0 if DXXMQSHREDCLOB ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQSHREDCLOB ran unsuccessfully.
  v mq-num-msgs is the number of messages that were successfully sent to the MQ message queue.
  status is an output parameter of type CHAR(20).
DXXMQSHREDCLOB output
If DXXMQSHREDCLOB executes successfully, the mq-num-msgs field of the status parameter is set to 1, to indicate that a message was retrieved from the MQ message queue and inserted into DB2 tables. If DXXMQSHREDCLOB does not execute successfully, the contents of the status parameter indicate the problem.
Deprecated: Store XML documents from an MQ message queue in DB2 tables (DXXMQINSERTALL)
Restriction: DXXMQINSERTALL has been deprecated.

The DXXMQINSERTALL stored procedure returns messages that contain XML documents from an MQ message queue, decomposes the documents, and stores the data in DB2 tables that are specified by an enabled XML collection. Use DXXMQINSERTALL for XML documents with a length of up to 3KB.

There are two versions of DXXMQINSERTALL:
v A single-phase commit version, with schema name DMQXML1C.
v A two-phase commit version, with schema name DMQXML2C.
CALL {DMQXML1C | DMQXML2C}.DXXMQINSERTALL (service-name | NULL, policy-name | NULL, XML-collection-name, status)
/* Initialize status fields */
int dxx_rc=0;
int dxx_sql=0;
int dxx_mq=0;
/* Get the service name and policy name for the MQ message queue */
strcpy(serviceName,"DB2.DEFAULT.SERVICE");
strcpy(policyName,"DB2.DEFAULT.POLICY");
/* Get the XML collection name */
strcpy(collectionName,"sales_ord");
/* Initialize the output variable */
status[0] = '\0';
/* Set the indicators to 0 for the parameters that have non-null values */
collectionName_ind = 0;
serviceName_ind = 0;
policyName_ind = 0;
/* Initialize the indicator for the output parameter */
status_ind = -1;
/* Call the stored procedure */
EXEC SQL CALL DMQXML1C.DXXMQINSERTALL(:serviceName:serviceName_ind,
                                      :policyName:policyName_ind,
                                      :collectionName:collectionName_ind,
                                      :status:status_ind);
printf("SQLCODE from CALL: %d\n",sqlca.sqlcode);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQINSERTALL output
If DXXMQINSERTALL executes successfully, the mq-num-msgs field of the status parameter is set to the number of messages that were retrieved from the MQ message queue and decomposed. If DXXMQINSERTALL does not execute successfully, the contents of the status parameter indicate the problem.
Deprecated: Store XML documents from an MQ message queue in DB2 tables (DXXMQSHREDALL)
Restriction: DXXMQSHREDALL has been deprecated.

The DXXMQSHREDALL stored procedure returns messages that contain XML documents from an MQ message queue, decomposes the documents, and stores the data in DB2 tables that are specified in a document access definition (DAD) file. DXXMQSHREDALL does not require an enabled XML collection. Use DXXMQSHREDALL for XML documents with a length of up to 3KB.

There are two versions of DXXMQSHREDALL:
v A single-phase commit version, with schema name DMQXML1C.
v A two-phase commit version, with schema name DMQXML2C.
CALL {DMQXML1C | DMQXML2C}.DXXMQSHREDALL (service-name | NULL, policy-name | NULL, DAD-file-name, status)
v mq-num-msgs is the number of messages that were successfully retrieved from the MQ message queue and inserted into DB2 tables.
status is an output parameter of type CHAR(20).
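No sample invocation of DXXMQSHREDALL appears in this section. The following is a minimal sketch only, modeled on the DXXMQINSERTALL example; it assumes that DXXMQSHREDALL, like the other procedures in this appendix, is defined as GENERAL WITH NULLS, and the host variable names and the DAD file path are illustrative.

/* Sketch only: shred XML messages from an MQ queue into DB2 tables */
/* through the mapping in a DAD file                                */
EXEC SQL BEGIN DECLARE SECTION;
  char serviceName[48];  /* WebSphere MQ service name */
  char policyName[48];   /* WebSphere MQ policy name */
  char dadFileName[80];  /* DAD file name */
  char status[20];       /* Status of DXXMQSHREDALL call */
  short serviceName_ind, policyName_ind, dadFileName_ind, status_ind;
EXEC SQL END DECLARE SECTION;
int dxx_rc=0, dxx_sql=0, dxx_mq=0;

strcpy(serviceName,"DB2.DEFAULT.SERVICE");
strcpy(policyName,"DB2.DEFAULT.POLICY");
strcpy(dadFileName,"/tmp/neworder2.dad");   /* illustrative DAD file */
status[0] = '\0';
serviceName_ind = 0;                        /* input parameters are non-null */
policyName_ind = 0;
dadFileName_ind = 0;
status_ind = -1;                            /* output parameter */
EXEC SQL CALL DMQXML1C.DXXMQSHREDALL(:serviceName:serviceName_ind,
                                     :policyName:policyName_ind,
                                     :dadFileName:dadFileName_ind,
                                     :status:status_ind);
printf("SQLCODE from CALL: %d\n", sqlca.sqlcode);
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);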
DXXMQSHREDALL output
If DXXMQSHREDALL executes successfully, the mq-num-msgs field of the status parameter is set to the number of messages that were retrieved from the MQ message queue and inserted into DB2 tables. If DXXMQSHREDALL does not execute successfully, the contents of the status parameter indicate the problem.
Deprecated: Store large XML documents from an MQ message queue in DB2 tables (DXXMQSHREDALLCLOB)
Restriction: DXXMQSHREDALLCLOB has been deprecated.

The DXXMQSHREDALLCLOB stored procedure returns messages that contain XML documents from an MQ message queue, decomposes the documents, and stores the data in DB2 tables that are specified in a document access definition (DAD) file. DXXMQSHREDALLCLOB does not require an enabled XML collection. Use DXXMQSHREDALLCLOB for XML documents with a length of up to 1MB.

There are two versions of DXXMQSHREDALLCLOB:
v A single-phase commit version, with schema name DMQXML1C.
v A two-phase commit version, with schema name DMQXML2C.
CALL {DMQXML1C | DMQXML2C}.DXXMQSHREDALLCLOB (service-name | NULL, policy-name | NULL, DAD-file-name, status)
which messages are to be retrieved. The service point is defined in the DSNAMT repository file. If service-name is not listed in the DSNAMT repository file, or service-name is not specified, DB2.DEFAULT.SERVICE is used. service-name is an input parameter of type VARCHAR(48). service-name cannot be blank, a null string, or have trailing blanks.
policy-name
Specifies the WebSphere MQ AMI service policy that is used to handle the messages. The service policy is defined in the DSNAMT repository file. If policy-name is not listed in the DSNAMT repository file, or policy-name is not specified, DB2.DEFAULT.POLICY is used. policy-name is an input parameter of type VARCHAR(48). policy-name cannot be blank, a null string, or have trailing blanks.
DAD-file-name
Specifies the name of the document access definition (DAD) file that maps the XML document to DB2 tables. DAD-file-name must be specified, and must be the name of a valid DAD file that exists on the system on which DXXMQSHREDALLCLOB runs. DAD-file-name is an input parameter of type VARCHAR(80).
status
Contains information that indicates whether DXXMQSHREDALLCLOB ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
v sqlcode is 0 if DXXMQSHREDALLCLOB ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQSHREDALLCLOB ran unsuccessfully.
v mq-num-msgs is the number of messages that were successfully retrieved from the MQ message queue and inserted into DB2 tables.
status is an output parameter of type CHAR(20).
strcpy(policyName,"DB2.DEFAULT.POLICY");
/* Get the DAD file name */
strcpy(dadFileName,"/tmp/neworder2.dad");
/* Initialize the output variable */
status[0] = '\0';
/* Set the indicators to 0 for the parameters that have non-null values */
dadFileName_ind = 0;
serviceName_ind = 0;
policyName_ind = 0;
/* Initialize the indicator for the output parameter */
status_ind = -1;
/* Call the stored procedure */
EXEC SQL CALL DMQXML1C.DXXMQSHREDALLCLOB(:serviceName:serviceName_ind,
                                         :policyName:policyName_ind,
                                         :dadFileName:dadFileName_ind,
                                         :status:status_ind);
printf("SQLCODE from CALL: %d\n",sqlca.sqlcode);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQSHREDALLCLOB output
If DXXMQSHREDALLCLOB executes successfully, the mq-num-msgs field of the status parameter is set to the number of messages that were retrieved from the MQ message queue and inserted into DB2 tables. If DXXMQSHREDALLCLOB does not execute successfully, the contents of the status parameter indicate the problem.
Deprecated: Store large XML documents from an MQ message queue in DB2 tables (DXXMQINSERTALLCLOB)
Restriction: DXXMQINSERTALLCLOB has been deprecated.

The DXXMQINSERTALLCLOB stored procedure returns messages that contain XML documents from an MQ message queue, decomposes the documents, and stores the data in DB2 tables that are specified by an enabled XML collection. Use DXXMQINSERTALLCLOB for XML documents with a length of up to 1MB.

There are two versions of DXXMQINSERTALLCLOB:
v A single-phase commit version, with schema name DMQXML1C.
v A two-phase commit version, with schema name DMQXML2C.
CALL {DMQXML1C | DMQXML2C}.DXXMQINSERTALLCLOB (service-name | NULL, policy-name | NULL, XML-collection-name, status)
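The parameter descriptions and sample invocation for DXXMQINSERTALLCLOB are not reproduced here; as the syntax above shows, its parameters are the same as those of DXXMQINSERTALL. As an illustration only, assuming the same host variable and indicator setup as the DXXMQINSERTALL example, the CALL might look like this:

/* Sketch only: same setup as the DXXMQINSERTALL example, but each */
/* message can contain an XML document of up to 1MB                */
EXEC SQL CALL DMQXML1C.DXXMQINSERTALLCLOB(:serviceName:serviceName_ind,
                                          :policyName:policyName_ind,
                                          :collectionName:collectionName_ind,
                                          :status:status_ind);
printf("SQLCODE from CALL: %d\n", sqlca.sqlcode);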
DXXMQINSERTALLCLOB output
If DXXMQINSERTALLCLOB executes successfully, the mq-num-msgs field of the status parameter is set to the number of messages that were retrieved from the MQ message queue and decomposed. If DXXMQINSERTALLCLOB does not execute successfully, the contents of the status parameter indicate the problem.
The DXXMQGEN stored procedure constructs XML documents from data that is stored in DB2 tables that are specified in a document access definition (DAD) file, and sends the XML documents to an MQ message queue. Use DXXMQGEN for XML documents with a length of up to 3KB.

There are two versions of DXXMQGEN:
v A single-phase commit version, with schema name DMQXML1C.
v A two-phase commit version, with schema name DMQXML2C.

CALL {DMQXML1C | DMQXML2C}.DXXMQGEN (service-name | NULL, policy-name | NULL, DAD-file-name, override-type | NULL, override | NULL, max-rows | NULL, num-rows, status)
policy-name is not listed in the DSNAMT repository file, or policy-name is not specified, DB2.DEFAULT.POLICY is used. policy-name is an input parameter of type VARCHAR(48). policy-name cannot be blank, a null string, or have trailing blanks.
DAD-file-name
Specifies the name of the document access definition (DAD) file that maps the XML documents to DB2 tables. DAD-file-name must be specified, and must be the name of a valid DAD file that exists on the system on which DXXMQGEN runs. DAD-file-name is an input parameter of type VARCHAR(80).
override-type
Specifies what the override parameter does. Possible values are:
NO_OVERRIDE
The override parameter does not override the condition in the DAD file. This is the default.
SQL_OVERRIDE
The DAD file uses SQL mapping, and the override parameter contains an SQL statement that overrides the SQL statement in the DAD file.
XML_OVERRIDE
The DAD file uses RDB_node mapping, and the override parameter contains conditions that override the RDB_node mapping in the DAD file.
override-type is an input parameter of type INTEGER. The integer equivalents of the override-type values are defined in the dxx.h file.
override
Specifies a string that overrides the condition in the DAD file. The contents of the string depend on the value of the override-type parameter:
v If override-type is NO_OVERRIDE, override contains a null string. This is the default.
v If override-type is SQL_OVERRIDE, override contains a valid SQL statement that overrides the SQL statement in the DAD file.
v If override-type is XML_OVERRIDE, override contains one or more expressions that are separated by AND. Each expression must be enclosed in double quotation marks. This override value overrides the RDB_node mapping in the DAD file.
override is an input parameter of type VARCHAR(1024).
max-rows
Specifies the maximum number of XML documents that DXXMQGEN can send to the MQ message queue. The default is 1. max-rows is an input parameter of type INTEGER.
num-rows
The actual number of XML documents that DXXMQGEN sends to the MQ message queue. num-rows is an output parameter of type INTEGER.
status
Contains information that indicates whether DXXMQGEN ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
v sqlcode is 0 if DXXMQGEN ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQGEN ran unsuccessfully.
v mq-num-msgs is the number of messages that were successfully sent to the MQ message queue.
status is an output parameter of type CHAR(20).
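As an illustration of the override-type and override parameters that are described above, the following sketch prepares an SQL_OVERRIDE value before calling DXXMQGEN. SQL_OVERRIDE is defined in dxx.h, as noted above; the SELECT statement and its table and column names are hypothetical and must match the SQL mapping in your own DAD file, and the override host variable is assumed to be declared large enough for the statement (the parameter is VARCHAR(1024)).

/* Sketch only: override the SQL statement in a DAD file that uses SQL mapping */
overrideType = SQL_OVERRIDE;            /* defined in dxx.h */
strcpy(override,
       "SELECT o.order_key, p.part_key, p.price "
       "FROM order_tab o, part_tab p "
       "WHERE o.order_key = p.order_key AND p.price > 2500.00");
ovtype_ind = 0;                         /* both parameters are non-null */
ov_ind = 0;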
status_ind = -1;
/* Call the stored procedure */
EXEC SQL CALL DMQXML1C.DXXMQGEN(:serviceName:serviceName_ind,
                                :policyName:policyName_ind,
                                :dadFileName:dadFileName_ind,
                                :overrideType:ovtype_ind,
                                :override:ov_ind,
                                :max_row:maxrow_ind,
                                :num_row:numrow_ind,
                                :status:status_ind);
printf("SQLCODE from CALL: %d\n",sqlca.sqlcode);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQGEN output
If DXXMQGEN executes successfully, the number of documents indicated by the mq-num-msgs field of the status parameter are extracted from DB2 tables and inserted into the MQ message queue. If DXXMQGEN does not execute successfully, the contents of the status parameter indicate the problem.
include a null indicator with every host variable. Null indicators for input host variables must be initialized before you execute the CALL statement.
CALL {DMQXML1C | DMQXML2C}.DXXMQRETRIEVE (service-name | NULL, policy-name | NULL, XML-collection-name, override-type | NULL, override | NULL, max-rows | NULL, num-rows, status)
NO_OVERRIDE
The override parameter does not override the condition in the DAD file. This is the default.
SQL_OVERRIDE
The DAD file uses SQL mapping, and the override parameter contains an SQL statement that overrides the SQL statement in the DAD file.
XML_OVERRIDE
The DAD file uses RDB_node mapping, and the override parameter contains conditions that override the RDB_node mapping in the DAD file.
override-type is an input parameter of type INTEGER. The integer equivalents of the override-type values are defined in the dxx.h file.
override
Specifies a string that overrides the condition in the DAD file. The contents of the string depend on the value of the override-type parameter:
v If override-type is NO_OVERRIDE, override contains a null string. This is the default.
v If override-type is SQL_OVERRIDE, override contains a valid SQL statement that overrides the SQL statement in the DAD file.
v If override-type is XML_OVERRIDE, override contains one or more expressions that are separated by AND. Each expression must be enclosed in double quotation marks. This override value overrides the RDB_node mapping in the DAD file.
override is an input parameter of type VARCHAR(1024).
max-rows
Specifies the maximum number of XML documents that DXXMQRETRIEVE can send to the MQ message queue. The default is 1. max-rows is an input parameter of type INTEGER.
num-rows
The actual number of XML documents that DXXMQRETRIEVE sends to the MQ message queue. num-rows is an output parameter of type INTEGER.
status
Contains information that indicates whether DXXMQRETRIEVE ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
v sqlcode is 0 if DXXMQRETRIEVE ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQRETRIEVE ran unsuccessfully.
v mq-num-msgs is the number of messages that were successfully sent to the MQ message queue.
status is an output parameter of type CHAR(20).
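As an illustration of the XML_OVERRIDE form that is described above, the following sketch builds an override value for DXXMQRETRIEVE from two conditions, each enclosed in double quotation marks and joined by AND. XML_OVERRIDE is defined in dxx.h, as noted above; the location paths and comparison values are hypothetical and must correspond to the RDB_node mapping in your own DAD file, and override is assumed to be declared large enough for the string (the parameter is VARCHAR(1024)).

/* Sketch only: override the RDB_node mapping conditions in the DAD file */
overrideType = XML_OVERRIDE;            /* defined in dxx.h */
strcpy(override,
       "\"/Order/Customer/Name = 'Motorworks'\" AND "
       "\"/Order/Part/ExtendedPrice > 2500.00\"");
ovtype_ind = 0;                         /* both parameters are non-null */
ov_ind = 0;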
program does not override the definitions in the DAD file. For a complete example of DXXMQRETRIEVE invocation, see DSN8QXGR in DSN810.SDSNSAMP.
#include "dxx.h"
#include "dxxrc.h"
EXEC SQL INCLUDE SQLCA;
EXEC SQL BEGIN DECLARE SECTION;
char serviceName[48];     /* WebSphere MQ service name */
char policyName[48];      /* WebSphere MQ policy name */
char collectionName[80];  /* XML collection name */
short overrideType;       /* defined in dxx.h */
char override[2];         /* Override string for DAD */
short max_row;            /* Maximum number of documents*/
short num_row;            /* Actual number of documents */
char status[20];          /* Status of DXXMQRETRIEVE call */
/* DXXMQRETRIEVE is GENERAL WITH NULLS, so parameters need indicators */
short serviceName_ind;    /* Indicator var for serviceName */
short policyName_ind;     /* Indicator var for policyName */
short collectionName_ind; /* Indicator var for collectionName */
short ovtype_ind;         /* Indicator var for overrideType */
short ov_ind;             /* Indicator var for override */
short maxrow_ind;         /* Indicator var for maxrow */
short numrow_ind;         /* Indicator var for numrow */
short status_ind;         /* Indicator var for status */
EXEC SQL END DECLARE SECTION;
/* Status fields */
int dxx_rc=0;
int dxx_sql=0;
int dxx_mq=0;
/* Get the service name and policy name for the MQ message queue */
strcpy(serviceName,"DB2.DEFAULT.SERVICE");
strcpy(policyName,"DB2.DEFAULT.POLICY");
/* Get the XML collection name */
strcpy(collectionName,"sales_ord");
/* Put a null string in the override parameter because we are not */
/* going to override the values in the DAD file */
override[0] = '\0';
overrideType = NO_OVERRIDE;
/* Indicate that we do not want to transfer more than 500 documents */
max_row = 500;
/* Initialize the output variables */
num_row = 0;
status[0] = '\0';
/* Set the indicators to 0 for the parameters that have non-null values */
collectionName_ind = 0;
serviceName_ind = 0;
policyName_ind = 0;
ovtype_ind = 0;
ov_ind = 0;
maxrow_ind = 0;
/* Initialize the indicators for the output parameters */
numrow_ind = -1;
status_ind = -1;
/* Call the stored procedure */
EXEC SQL CALL DMQXML1C.DXXMQRETRIEVE(:serviceName:serviceName_ind,
                                     :policyName:policyName_ind,
                                     :collectionName:collectionName_ind,
                                     :overrideType:ovtype_ind,
                                     :override:ov_ind,
                                     :max_row:maxrow_ind,
                                     :num_row:numrow_ind,
                                     :status:status_ind);
printf("SQLCODE from CALL: %d\n",sqlca.sqlcode);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQRETRIEVE output
If DXXMQRETRIEVE executes successfully, the number of documents indicated by the mq-num-msgs field of the status parameter are extracted from DB2 tables and inserted into the MQ message queue. If DXXMQRETRIEVE does not execute successfully, the contents of the status parameter indicate the problem.
CALL {DMQXML1C | DMQXML2C}.DXXMQGENCLOB (service-name | NULL, policy-name | NULL, DAD-file-name, override-type | NULL, override | NULL, max-rows | NULL, num-rows, status)
max-rows
Specifies the maximum number of XML documents that DXXMQGENCLOB can send to the MQ message queue. The default is 1. max-rows is an input parameter of type INTEGER.
num-rows
The actual number of XML documents that DXXMQGENCLOB sends to the MQ message queue. num-rows is an output parameter of type INTEGER.
status
Contains information that indicates whether DXXMQGENCLOB ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
v sqlcode is 0 if DXXMQGENCLOB ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQGENCLOB ran unsuccessfully.
v mq-num-msgs is the number of messages that were successfully sent to the MQ message queue.
status is an output parameter of type CHAR(20).
/* Put a null string in the override parameter because we are not */
/* going to override the values in the DAD file */
override[0] = '\0';
overrideType = NO_OVERRIDE;
/* Indicate that we do not want to transfer more than 500 documents */
max_row = 500;
/* Initialize the output variables */
num_row = 0;
status[0] = '\0';
/* Set the indicators to 0 for the parameters that have non-null values */
dadFileName_ind = 0;
serviceName_ind = 0;
policyName_ind = 0;
ovtype_ind = 0;
ov_ind = 0;
maxrow_ind = 0;
/* Initialize the indicators for the output parameters */
numrow_ind = -1;
status_ind = -1;
/* Call the stored procedure */
EXEC SQL CALL DMQXML2C.DXXMQGENCLOB(:serviceName:serviceName_ind,
                                    :policyName:policyName_ind,
                                    :dadFileName:dadFileName_ind,
                                    :overrideType:ovtype_ind,
                                    :override:ov_ind,
                                    :max_row:maxrow_ind,
                                    :num_row:numrow_ind,
                                    :status:status_ind);
printf("SQLCODE from CALL: %d\n",sqlca.sqlcode);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQGENCLOB output
If DXXMQGENCLOB executes successfully, the number of documents indicated by the mq-num-msgs field of the status parameter are extracted from DB2 tables and inserted into the MQ message queue. If DXXMQGENCLOB does not execute successfully, the contents of the status parameter indicate the problem.
DXXMQRETRIEVECLOB requires that WebSphere MQ and XML Extender are installed. See DB2 Installation Guide for installation instructions.
CALL {DMQXML1C | DMQXML2C}.DXXMQRETRIEVECLOB (service-name | NULL, policy-name | NULL, XML-collection-name, override-type | NULL, override | NULL, max-rows | NULL, num-rows, status)
override-type
Specifies what the override parameter does. Possible values are:
NO_OVERRIDE
The override parameter does not override the condition in the DAD file. This is the default.
SQL_OVERRIDE
The DAD file uses SQL mapping, and the override parameter contains an SQL statement that overrides the SQL statement in the DAD file.
XML_OVERRIDE
The DAD file uses RDB_node mapping, and the override parameter contains conditions that override the RDB_node mapping in the DAD file.
override-type is an input parameter of type INTEGER. The integer equivalents of the override-type values are defined in the dxx.h file.
override
Specifies a string that overrides the condition in the DAD file. The contents of the string depend on the value of the override-type parameter:
v If override-type is NO_OVERRIDE, override contains a null string. This is the default.
v If override-type is SQL_OVERRIDE, override contains a valid SQL statement that overrides the SQL statement in the DAD file.
v If override-type is XML_OVERRIDE, override contains one or more expressions that are separated by AND. Each expression must be enclosed in double quotation marks. This override value overrides the RDB_node mapping in the DAD file.
override is an input parameter of type VARCHAR(1024).
max-rows
Specifies the maximum number of XML documents that DXXMQRETRIEVECLOB can send to the MQ message queue. The default is 1. max-rows is an input parameter of type INTEGER.
num-rows
The actual number of XML documents that DXXMQRETRIEVECLOB sends to the MQ message queue. num-rows is an output parameter of type INTEGER.
status
Contains information that indicates whether DXXMQRETRIEVECLOB ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
v sqlcode is 0 if DXXMQRETRIEVECLOB ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQRETRIEVECLOB ran unsuccessfully.
v mq-num-msgs is the number of messages that were successfully sent to the MQ message queue.
status is an output parameter of type CHAR(20).
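Only the tail of a sample invocation survives below. As an illustration only, assuming the same host variable and indicator setup as the DXXMQRETRIEVE example, the CALL itself might look like this:

/* Sketch only: DXXMQRETRIEVECLOB call, reusing the host variables and */
/* indicator variables from the DXXMQRETRIEVE example                  */
EXEC SQL CALL DMQXML1C.DXXMQRETRIEVECLOB(:serviceName:serviceName_ind,
                                         :policyName:policyName_ind,
                                         :collectionName:collectionName_ind,
                                         :overrideType:ovtype_ind,
                                         :override:ov_ind,
                                         :max_row:maxrow_ind,
                                         :num_row:numrow_ind,
                                         :status:status_ind);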
printf("SQLCODE from CALL: %d\n",sqlca.sqlcode);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQRETRIEVECLOB output
If DXXMQRETRIEVECLOB executes successfully, the number of documents indicated by the mq-num-msgs field of the status parameter are extracted from DB2 tables and inserted into the MQ message queue. If DXXMQRETRIEVECLOB does not execute successfully, the contents of the status parameter indicate the problem.
v How to write applications in the Java programming language to access DB2 servers

The material needed for writing a host program containing SQL is in DB2 Application Programming and SQL Guide and in DB2 Application Programming Guide and Reference for Java. The material needed for writing applications that use DB2 ODBC or ODBC to access DB2 servers is in DB2 ODBC Guide and Reference. For handling errors, see DB2 Codes. If you will be working in a distributed environment, you will need DB2 Reference for Remote DRDA Requesters and Servers. Information about writing applications across operating systems can be found in IBM DB2 Universal Database SQL Reference for Cross-Platform Development.

System and database administration: Administration covers almost everything else. DB2 Administration Guide divides those tasks among the following sections:
v Part 2 (Volume 1) of DB2 Administration Guide discusses the decisions that must be made when designing a database and tells how to implement the design by creating and altering DB2 objects, loading data, and adjusting to changes.
v Part 3 (Volume 1) of DB2 Administration Guide describes ways of controlling access to the DB2 system and to data within DB2, to audit aspects of DB2 usage, and to answer other security and auditing concerns.
v Part 4 (Volume 1) of DB2 Administration Guide describes the steps in normal day-to-day operation and discusses the steps one should take to prepare for recovery in the event of some failure.
v Part 5 (Volume 2) of DB2 Administration Guide explains how to monitor the performance of the DB2 system and its parts. It also lists things that can be done to make some parts run faster.

If you will be using the RACF access control module for DB2 authorization checking, you will need DB2 RACF Access Control Module Guide. If you are involved with DB2 only to design the database, or plan operational procedures, you need DB2 Administration Guide. If you also want to carry out your own plans by creating DB2 objects, granting privileges, running utility jobs, and so on, you also need:
v DB2 SQL Reference, which describes the SQL statements you use to create, alter, and drop objects and grant and revoke privileges
v DB2 Utility Guide and Reference, which explains how to run utilities
v DB2 Command Reference, which explains how to run commands

If you will be using data sharing, you need DB2 Data Sharing: Planning and Administration, which describes how to plan for and implement data sharing. Additional information about system and database administration can be found in DB2 Messages and DB2 Codes, which list messages and codes issued by DB2, with explanations and suggested responses.

Diagnosis: Diagnosticians detect and describe errors in the DB2 program. They might also recommend or apply a remedy. The documentation for this task is in DB2 Diagnosis Guide and Reference, DB2 Messages, and DB2 Codes.
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the users responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 U.S.A. For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to: IBM World Trade Asia Corporation Licensing 2-31 Roppongi 3-chome, Minato-ku Tokyo 106-0032, Japan The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION AS IS WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact: IBM Corporation J46A/G4 555 Bailey Avenue San Jose, CA 95141-1003 U.S.A. Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement, or any equivalent agreement between us. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
implementation, it is to be expected that programs written to such interfaces may need to be changed in order to run with new product releases or versions, or as a result of service. Product-sensitive Programming Interface and Associated Guidance Information is identified where it occurs, either by an introductory statement to a chapter or section or by the following marking: Product-sensitive Programming Interface Product-sensitive Programming Interface and Associated Guidance Information ... End of Product-sensitive Programming Interface
Trademarks
The following terms are trademarks of International Business Machines Corporation in the United States, other countries, or both:
BookManager CICS CICS Connection CT DataPropagator DB2 DB2 Connect DB2 Universal Database DFSMSdfp DFSMSdss DFSMShsm Distributed Relational Database Architecture DRDA Enterprise Storage Server ES/3090 eServer FlashCopy IBM IBM Registry IMS iSeries Language Environment MVS MVS/ESA Notes OS/390 Parallel Sysplex PR/SM QMF RACF Redbooks System/390 TotalStorage VTAM WebSphere z/OS
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries.
Glossary
The following terms and abbreviations are defined as they are used in the DB2 library.
after trigger. A trigger that is defined with the trigger activation time AFTER. agent. As used in DB2, the structure that associates all processes that are involved in a DB2 unit of work. An allied agent is generally synonymous with an allied thread. System agents are units of work that process tasks that are independent of the allied agent, such as prefetch processing, deferred writes, and service tasks.
A
abend. Abnormal end of task. abend reason code. A 4-byte hexadecimal code that uniquely identifies a problem with DB2. abnormal end of task (abend). Termination of a task, job, or subsystem because of an error condition that recovery facilities cannot resolve during execution. access method services. The facility that is used to define and reproduce VSAM key-sequenced data sets. access path. The path that is used to locate data that is specified in SQL statements. An access path can be indexed or sequential. active log. The portion of the DB2 log to which log records are written as they are generated. The active log always contains the most recent log records, whereas the archive log holds those records that are older and no longer fit on the active log. active member state. A state of a member of a data sharing group. The cross-system coupling facility identifies each active member with a group and associates the member with a particular task, address space, and z/OS system. A member that is not active has either a failed member state or a quiesced member state. address space. A range of virtual storage pages that is identified by a number (ASID) and a collection of segment and page tables that map the virtual pages to real pages of the computer's memory. address space connection. The result of connecting an allied address space to DB2. Each address space that contains a task that is connected to DB2 has exactly one address space connection, even though more than one task control block (TCB) can be present. See also allied address space and task control block.
aggregate function. An operation that derives its result by using values from one or more rows. Contrast with scalar function.
alias. An alternative name that can be used in SQL statements to refer to a table or view in the same or a remote DB2 subsystem. allied address space. An area of storage that is external to DB2 and that is connected to DB2. An allied address space is capable of requesting DB2 services. allied thread. A thread that originates at the local DB2 subsystem and that can access data at a remote DB2 subsystem. allocated cursor. A cursor that is defined for stored procedure result sets by using the SQL ALLOCATE CURSOR statement. already verified. An LU 6.2 security option that allows DB2 to provide the user's verified authorization ID when allocating a conversation. With this option, the user is not validated by the partner DB2 subsystem.
ambiguous cursor. A database cursor that is in a plan or package that contains either PREPARE or EXECUTE IMMEDIATE SQL statements, and for which the following statements are true: the cursor is not defined with the FOR READ ONLY clause or the FOR UPDATE OF clause; the cursor is not defined on a read-only result table; the cursor is not the target of a WHERE CURRENT clause on an SQL UPDATE or DELETE statement. American National Standards Institute (ANSI). An organization consisting of producers, consumers, and general interest groups, that establishes the procedures by which accredited organizations create and maintain voluntary industry standards in the United States. ANSI. American National Standards Institute. APAR. Authorized program analysis report. APAR fix corrective service. A temporary correction of an IBM software defect. The correction is temporary,
address space identifier (ASID). A unique system-assigned identifier for an address space. administrative authority. A set of related privileges that DB2 defines. When you grant one of the administrative authorities to a person's ID, the person has all of the privileges that are associated with that administrative authority.
because it is usually replaced at a later date by a more permanent correction, such as a program temporary fix (PTF). APF. Authorized program facility. API. Application programming interface. APPL. A VTAM network definition statement that is used to define DB2 to VTAM as an application program that uses SNA LU 6.2 protocols. application. A program or set of programs that performs a task; for example, a payroll application. application-directed connection. A connection that an application manages using the SQL CONNECT statement. application plan. The control structure that is produced during the bind process. DB2 uses the application plan to process SQL statements that it encounters during statement execution. application process. The unit to which resources and locks are allocated. An application process involves the execution of one or more programs. application programming interface (API). A functional interface that is supplied by the operating system or by a separately orderable licensed program that allows an application program that is written in a high-level language to use specific data or functions of the operating system or licensed program. application requester. The component on a remote system that generates DRDA requests for data on behalf of an application. An application requester accesses a DB2 database server using the DRDA application-directed protocol. application server. The target of a request from a remote application. In the DB2 environment, the application server function is provided by the distributed data facility and is used to access DB2 data from remote applications. archive log. The portion of the DB2 log that contains log records that have been copied from the active log. ASCII. An encoding scheme that is used to represent strings in many environments, typically on PCs and workstations. Contrast with EBCDIC and Unicode.
authorization ID. A string that can be verified for connection to DB2 and to which a set of privileges is allowed. It can represent an individual, an organizational group, or a function, but DB2 does not determine this representation. authorized program analysis report (APAR). A report of a problem that is caused by a suspected defect in a current release of an IBM supplied program. authorized program facility (APF). A facility that permits the identification of programs that are authorized to use restricted functions.
automatic query rewrite. A process that examines an SQL statement that refers to one or more base tables, and, if appropriate, rewrites the query so that it performs better. This process can also determine whether to rewrite a query so that it refers to one or more materialized query tables that are derived from the source tables. auxiliary index. An index on an auxiliary table in which each index entry refers to a LOB. auxiliary table. A table that stores columns outside the table in which they are defined. Contrast with base table.
B
backout. The process of undoing uncommitted changes that an application process made. This might be necessary in the event of a failure on the part of an application process, or as a result of a deadlock situation. backward log recovery. The fourth and final phase of restart processing during which DB2 scans the log in a backward direction to apply UNDO log records for all aborted changes. base table. (1) A table that is created by the SQL CREATE TABLE statement and that holds persistent data. Contrast with result table and temporary table. (2) A table containing a LOB column definition. The actual LOB column data is not stored with the base table. The base table contains a row identifier for each row and an indicator column for each of its LOB columns. Contrast with auxiliary table. base table space. A table space that contains base tables. basic predicate. A predicate that compares two values. basic sequential access method (BSAM). An access method for storing or retrieving data blocks in a continuous sequence, using either a sequential-access or a direct-access device.
batch message processing program. In IMS, an application program that can perform batch-type processing online and can access the IMS input and output message queues. before trigger. A trigger that is defined with the trigger activation time BEFORE. binary integer. A basic data type that can be further classified as small integer or large integer.
BSDS. Bootstrap data set. buffer pool. Main storage that is reserved to satisfy the buffering requirements for one or more table spaces or indexes. built-in data type. A data type that IBM supplies. Among the built-in data types for DB2 UDB for z/OS are string, numeric, ROWID, and datetime. Contrast with distinct type. built-in function. A function that DB2 supplies. Contrast with user-defined function. business dimension. A category of data, such as products or time periods, that an organization might want to analyze.
binary large object (BLOB). A sequence of bytes in which the size of the value ranges from 0 bytes to 2 GB - 1. Such a string has a CCSID value of 65535.
binary string. A sequence of bytes that is not associated with a CCSID. For example, the BLOB data type is a binary string. bind. The process by which the output from the SQL precompiler is converted to a usable control structure, often called an access plan, application plan, or package. During this process, access paths to the data are selected and some authorization checking is performed. The types of bind are: automatic bind. (More correctly, automatic rebind) A process by which SQL statements are bound automatically (without a user issuing a BIND command) when an application process begins execution and the bound application plan or package it requires is not valid. dynamic bind. A process by which SQL statements are bound as they are entered. incremental bind. A process by which SQL statements are bound during the execution of an application process. static bind. A process by which SQL statements are bound after they have been precompiled. All static SQL statements are prepared for execution at the same time.
C
cache structure. A coupling facility structure that stores data that can be available to all members of a Sysplex. A DB2 data sharing group uses cache structures as group buffer pools. CAF. Call attachment facility. call attachment facility (CAF). A DB2 attachment facility for application programs that run in TSO or z/OS batch. The CAF is an alternative to the DSN command processor and provides greater control over the execution environment. call-level interface (CLI). A callable application programming interface (API) for database access, which is an alternative to using embedded SQL. In contrast to embedded SQL, DB2 ODBC (which is based on the CLI architecture) does not require the user to precompile or bind applications, but instead provides a standard set of functions to process SQL statements and related services at run time. cascade delete. The way in which DB2 enforces referential constraints when it deletes all descendent rows of a deleted parent row. CASE expression. An expression that is selected based on the evaluation of one or more conditions. cast function. A function that is used to convert instances of a (source) data type into instances of a different (target) data type. In general, a cast function has the name of the target data type. It has one single argument whose type is the source data type; its return type is the target data type. castout. The DB2 process of writing changed pages from a group buffer pool to disk. castout owner. The DB2 member that is responsible for casting out a particular page set or partition.
bit data. Data that is character type CHAR or VARCHAR and has a CCSID value of 65535.
BLOB. Binary large object. block fetch. A capability in which DB2 can retrieve, or fetch, a large set of rows together. Using block fetch can significantly reduce the number of messages that are being sent across the network. Block fetch applies only to cursors that do not update data. BMP. Batch Message Processing (IMS). See batch message processing program. bootstrap data set (BSDS). A VSAM data set that contains name and status information for DB2, as well as RBA range specifications, for all active and archive log data sets. It also contains passwords for the DB2 directory and catalog, and lists of conditional restart and checkpoint records. BSAM. Basic sequential access method.
catalog. In DB2, a collection of tables that contains descriptions of objects such as tables, views, and indexes. catalog table. Any table in the DB2 catalog. CCSID. Coded character set identifier. CDB. Communications database. CDRA. Character Data Representation Architecture. CEC. Central electronic complex. See central processor complex. central electronic complex (CEC). See central processor complex. central processor (CP). The part of the computer that contains the sequencing and processing facilities for instruction execution, initial program load, and other machine operations. central processor complex (CPC). A physical collection of hardware (such as an ES/3090) that consists of main storage, one or more central processors, timers, and channels.
check pending. A state of a table space or partition that prevents its use by some utilities and by some SQL statements because of rows that violate referential constraints, check constraints, or both. checkpoint. A point at which DB2 records internal status information on the DB2 log; the recovery process uses this information if DB2 abnormally terminates.
child lock. For explicit hierarchical locking, a lock that is held on either a table, page, row, or a large object (LOB). Each child lock has a parent lock. See also parent lock. CI. Control interval.
CICS. Represents (in this publication): CICS Transaction Server for z/OS: Customer Information Control System Transaction Server for z/OS.
CICS attachment facility. A DB2 subcomponent that uses the z/OS subsystem interface (SSI) and cross-storage linkage to process requests from CICS to DB2 and to coordinate resource commitment. CIDF. Control interval definition field. claim. A notification to DB2 that an object is being accessed. Claims prevent drains from occurring until the claim is released, which usually occurs at a commit point. Contrast with drain. claim class. A specific type of object access that can be one of the following isolation levels: Cursor stability (CS), Repeatable read (RR), or Write. claim count. A count of the number of agents that are accessing an object. class of service. A VTAM term for a list of routes through a network, arranged in an order of preference for their use. class word. A single word that indicates the nature of a data attribute. For example, the class word PROJ indicates that the attribute identifies a project. clause. In SQL, a distinct part of a statement, such as a SELECT clause or a WHERE clause. CLI. Call-level interface. client. See requester. CLIST. Command list. A language for performing TSO tasks. CLOB. Character large object. closed application. An application that requires exclusive use of certain statements on certain DB2
objects, so that the objects are managed solely through the application's external interface. CLPA. Create link pack area.
command. A DB2 operator command or a DSN subcommand. A command is distinct from an SQL statement. command prefix. A one- to eight-character command identifier. The command prefix distinguishes the command as belonging to an application or subsystem rather than to MVS. command recognition character (CRC). A character that permits a z/OS console operator or an IMS subsystem user to route DB2 commands to specific DB2 subsystems. command scope. The scope of command operation in a data sharing group. If a command has member scope, the command displays information only from the one member or affects only non-shared resources that are owned locally by that member. If a command has group scope, the command displays information from all members, affects non-shared resources that are owned locally by all members, displays information on sharable resources, or affects sharable resources. commit. The operation that ends a unit of work by releasing locks so that the database changes that are made by that unit of work can be perceived by other processes. commit point. A point in time when data is considered consistent. committed phase. The second phase of the multisite update process that requests all participants to commit the effects of the logical unit of work. common service area (CSA). In z/OS, a part of the common area that contains data areas that are addressable by all address spaces. communications database (CDB). A set of tables in the DB2 catalog that are used to establish conversations with remote database management systems. comparison operator. A token (such as =, >, or <) that is used to specify a relationship between two values. composite key. An ordered set of key columns of the same table. compression dictionary. The dictionary that controls the process of compression and decompression. This dictionary is created from the data in the table space or table space partition. concurrency. The shared use of resources by more than one application process at the same time. conditional restart. A DB2 restart that is directed by a user-defined conditional restart control record (CRCR). connection. In SNA, the existence of a communication path between two partner LUs that allows information
clustering index. An index that determines how rows are physically ordered (clustered) in a table space. If a clustering index on a partitioned table is not a partitioning index, the rows are ordered in cluster sequence within each data partition instead of spanning partitions. Prior to Version 8 of DB2 UDB for z/OS, the partitioning index was required to be the clustering index. coded character set. A set of unambiguous rules that establish a character set and the one-to-one relationships between the characters of the set and their coded representations. coded character set identifier (CCSID). A 16-bit number that uniquely identifies a coded representation of graphic characters. It designates an encoding scheme identifier and one or more pairs consisting of a character set identifier and an associated code page identifier. code page. A set of assignments of characters to code points. In EBCDIC, for example, the character 'A' is assigned code point X'C1', and character 'B' is assigned code point X'C2'. Within a code page, each code point has only one specific meaning. code point. In CDRA, a unique bit pattern that represents a character in a code page.
code unit. The fundamental binary width in a computer architecture that is used for representing character data, such as 7 bits, 8 bits, 16 bits, or 32 bits. Depending on the character encoding form that is used, each code point in a coded character set can be represented internally by one or more code units. coexistence. During migration, the period of time in which two releases exist in the same data sharing group. cold start. A process by which DB2 restarts without processing any log records. Contrast with warm start. collection. A group of packages that have the same qualifier. column. The vertical component of a table. A column has a name and a particular data type (for example, character, decimal, or integer).
to be exchanged (for example, two DB2 subsystems that are connected and communicating by way of a conversation). connection context. In SQLJ, a Java object that represents a connection to a data source. connection declaration clause. In SQLJ, a statement that declares a connection to a data source. connection handle. The data object containing information that is associated with a connection that DB2 ODBC manages. This includes general status information, transaction status, and diagnostic information. connection ID. An identifier that is supplied by the attachment facility and that is associated with a specific address space connection. consistency token. A timestamp that is used to generate the version identifier for an application. See also version. constant. A language element that specifies an unchanging value. Constants are classified as string constants or numeric constants. Contrast with variable. constraint. A rule that limits the values that can be inserted, deleted, or updated in a table. See referential constraint, check constraint, and unique constraint. context. The applications logical connection to the data source and associated internal DB2 ODBC connection information that allows the application to direct its operations to a data source. A DB2 ODBC context represents a DB2 thread. contracting conversion. A process that occurs when the length of a converted string is smaller than that of the source string. For example, this process occurs when an EBCDIC mixed-data string that contains DBCS characters is converted to ASCII mixed data; the converted string is shorter because of the removal of the shift codes. control interval (CI). A fixed-length area or disk in which VSAM stores records and creates distributed free space. Also, in a key-sequenced data set or file, the set of records that an entry in the sequence-set index record points to. The control interval is the unit of information that VSAM transmits to or from disk. A control interval always includes an integral number of physical records. control interval definition field (CIDF). In VSAM, a field that is located in the 4 bytes at the end of each control interval; it describes the free space, if any, in the control interval. conversation. Communication, which is based on LU 6.2 or Advanced Program-to-Program Communication (APPC), between an application and a remote
transaction program over an SNA logical unit-to-logical unit (LU-LU) session that allows communication while processing a transaction. coordinator. The system component that coordinates the commit or rollback of a unit of work that includes work that is done on one or more other systems.
copy pool. A named set of SMS storage groups that contains data that is to be copied collectively. A copy pool is an SMS construct that lets you define which storage groups are to be copied by using FlashCopy functions. HSM determines which volumes belong to a copy pool. copy target. A named set of SMS storage groups that are to be used as containers for copy pool volume copies. A copy target is an SMS construct that lets you define which storage groups are to be used as containers for volumes that are copied by using FlashCopy functions. copy version. A point-in-time FlashCopy copy that is managed by HSM. Each copy pool has a version parameter that specifies how many copy versions are maintained on disk. correlated columns. A relationship between the value of one column and the value of another column. correlated subquery. A subquery (part of a WHERE or HAVING clause) that is applied to a row or group of rows of a table or view that is named in an outer subselect statement. correlation ID. An identifier that is associated with a specific thread. In TSO, it is either an authorization ID or the job name. correlation name. An identifier that designates a table, a view, or individual rows of a table or view within a single SQL statement. It can be defined in any FROM clause or in the first clause of an UPDATE or DELETE statement. cost category. A category into which DB2 places cost estimates for SQL statements at the time the statement is bound. A cost estimate can be placed in either of the following cost categories: v A: Indicates that DB2 had enough information to make a cost estimate without using default values. v B: Indicates that some condition exists for which DB2 was forced to use default values for its estimate. The cost category is externalized in the COST_CATEGORY column of the DSN_STATEMNT_TABLE when a statement is explained. coupling facility. A special PR/SM LPAR logical partition that runs the coupling facility control program and provides high-speed caching, list processing, and locking functions in a Parallel Sysplex.
coupling facility resource management. A component of z/OS that provides the services to manage coupling facility resources in a Parallel Sysplex. This management includes the enforcement of CFRM policies to ensure that the coupling facility and structure requirements are satisfied. CP. Central processor. CPC. Central processor complex. C++ member. A data object or function in a structure, union, or class. C++ member function. An operator or function that is declared as a member of a class. A member function has access to the private and protected data members and to the member functions of objects in its class. Member functions are also called methods. C++ object. (1) A region of storage. An object is created when a variable is defined or a new function is invoked. (2) An instance of a class. CRC. Command recognition character. CRCR. Conditional restart control record. See also conditional restart. create link pack area (CLPA). An option that is used during IPL to initialize the link pack pageable area. created temporary table. A table that holds temporary data and is defined with the SQL statement CREATE GLOBAL TEMPORARY TABLE. Information about created temporary tables is stored in the DB2 catalog, so this kind of table is persistent and can be shared across application processes. Contrast with declared temporary table. See also temporary table. cross-memory linkage. A method for invoking a program in a different address space. The invocation is synchronous with respect to the caller. cross-system coupling facility (XCF). A component of z/OS that provides functions to support cooperation between authorized programs that run within a Sysplex. cross-system extended services (XES). A set of z/OS services that allow multiple instances of an application or subsystem, running on different systems in a Sysplex environment, to implement high-performance, high-availability data sharing by using a coupling facility. CS. Cursor stability. CSA. Common service area. CT. Cursor table.
current data. Data within a host structure that is current with (identical to) the data within the base table. current SQL ID. An ID that, at a single point in time, holds the privileges that are exercised when certain dynamic SQL statements run. The current SQL ID can be a primary authorization ID or a secondary authorization ID. current status rebuild. The second phase of restart processing during which the status of the subsystem is reconstructed from information on the log. cursor. A named control structure that an application program uses to point to a single row or multiple rows within some ordered set of rows of a result table. A cursor can be used to retrieve, update, or delete rows from a result table. cursor sensitivity. The degree to which database updates are visible to the subsequent FETCH statements in a cursor. A cursor can be sensitive to changes that are made with positioned update and delete statements specifying the name of that cursor. A cursor can also be sensitive to changes that are made with searched update or delete statements, or with cursors other than this cursor. These changes can be made by this application process or by another application process. cursor stability (CS). The isolation level that provides maximum concurrency without the ability to read uncommitted data. With cursor stability, a unit of work holds locks only on its uncommitted changes and on the current row of each of its cursors. cursor table (CT). The copy of the skeleton cursor table that is used by an executing application process. cycle. A set of tables that can be ordered so that each table is a descendent of the one before it, and the first table is a descendent of the last table. A self-referencing table is a cycle with a single member.
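As an illustration of the cursor entry above, the following embedded SQL sketch (host variable declarations omitted; table and column names are hypothetical) declares, opens, fetches from, and closes a cursor:

   EXEC SQL DECLARE C1 CURSOR FOR
     SELECT EMPNO, LASTNAME FROM EMP WHERE WORKDEPT = 'D11';
   EXEC SQL OPEN C1;
   EXEC SQL FETCH C1 INTO :EMPNO, :LASTNAME;
   EXEC SQL CLOSE C1;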
D
DAD. See Document access definition. disk. A direct-access storage device that records data magnetically.
database. A collection of tables, or a collection of table spaces and index spaces. database access thread. A thread that accesses data at the local subsystem on behalf of a remote subsystem. database administrator (DBA). An individual who is responsible for designing, developing, operating, safeguarding, maintaining, and using a database.
database alias. The name of the target server if different from the location name. The database alias name is used to provide the name of the database server as it is known to the network. When a database alias name is defined, the location name is used by the application to reference the server, but the database alias name is used to identify the database server to be accessed. Any fully qualified object names within any SQL statements are not modified and are sent unchanged to the database server. database descriptor (DBD). An internal representation of a DB2 database definition, which reflects the data definition that is in the DB2 catalog. The objects that are defined in a database descriptor are table spaces, tables, indexes, index spaces, relationships, check constraints, and triggers. A DBD also contains information about accessing tables in the database. database exception status. An indication that something is wrong with a database. All members of a data sharing group must know and share the exception status of databases.
Data Language/I (DL/I). The IMS data manipulation language; a common high-level interface between a user application and IMS. data mart. A small data warehouse that applies to a single department or team. See also data warehouse. data mining. The process of collecting critical business information from a data warehouse, correlating it, and uncovering associations, patterns, and trends. data partition. A VSAM data set that is contained within a partitioned table space. data-partitioned secondary index (DPSI). A secondary index that is partitioned. The index is partitioned according to the underlying data. data sharing. The ability of two or more DB2 subsystems to directly access and change a single set of data. data sharing group. A collection of one or more DB2 subsystems that directly access and change the same data while maintaining data integrity. data sharing member. A DB2 subsystem that is assigned by XCF services to a data sharing group. data source. A local or remote relational or non-relational data manager that is capable of supporting data access via an ODBC driver that supports the ODBC APIs. In the case of DB2 UDB for z/OS, the data sources are always relational database managers.
data space. In releases prior to DB2 UDB for z/OS, Version 8, a range of up to 2 GB of contiguous virtual storage addresses that a program can directly manipulate. Unlike an address space, a data space can hold only data; it does not contain common areas, system data, or programs. data type. An attribute of columns, literals, host variables, special registers, and the results of functions and expressions. data warehouse. A system that provides critical business information to an organization. The data warehouse system cleanses the data for accuracy and currency, and then presents the data to decision makers so that they can interpret and use it effectively and efficiently. date. A three-part value that designates a day, month, and year. date duration. A decimal integer that represents a number of years, months, and days. datetime value. A value of the data type DATE, TIME, or TIMESTAMP. DBA. Database administrator.
DBCLOB. Double-byte character large object. DBCS. Double-byte character set. DBD. Database descriptor. DBID. Database identifier. DBMS. Database management system. DBRM. Database request module. DB2 catalog. Tables that are maintained by DB2 and contain descriptions of DB2 objects, such as tables, views, and indexes. DB2 command. An instruction to the DB2 subsystem that a user enters to start or stop DB2, to display information on current users, to start or stop databases, to display information on the status of databases, and so on. DB2 for VSE & VM. The IBM DB2 relational database management system for the VSE and VM operating systems. DB2I. DB2 Interactive. DB2 Interactive (DB2I). The DB2 facility that provides for the execution of SQL statements, DB2 (operator) commands, programmer commands, and utility invocation. DB2I Kanji Feature. The tape that contains the panels and jobs that allow a site to display DB2I panels in Kanji. DB2 PM. DB2 Performance Monitor. DB2 thread. The DB2 structure that describes an application's connection, traces its progress, processes resource functions, and delimits its accessibility to DB2 resources and services. DCLGEN. Declarations generator. DDF. Distributed data facility. ddname. Data definition name. deadlock. Unresolvable contention for the use of a resource, such as a table or an index. declarations generator (DCLGEN). A subcomponent of DB2 that generates SQL table declarations and COBOL, C, or PL/I data structure declarations that conform to the table. The declarations are generated from DB2 system catalog information. DCLGEN is also a DSN subcommand. declared temporary table. A table that holds temporary data and is defined with the SQL statement DECLARE GLOBAL TEMPORARY TABLE. Information about declared temporary tables is not stored in the DB2 catalog, so this kind of table is not persistent and
can be used only by the application process that issued the DECLARE statement. Contrast with created temporary table. See also temporary table. default value. A predetermined value, attribute, or option that is assumed when no other is explicitly specified. deferred embedded SQL. SQL statements that are neither fully static nor fully dynamic. Like static statements, they are embedded within an application, but like dynamic statements, they are prepared during the execution of the application. deferred write. The process of asynchronously writing changed data pages to disk. degree of parallelism. The number of concurrently executed operations that are initiated to process a query. delete-connected. A table that is a dependent of table P or a dependent of a table to which delete operations from table P cascade. delete hole. The location on which a cursor is positioned when a row in a result table is refetched and the row no longer exists on the base table, because another cursor deleted the row between the time the cursor first included the row in the result table and the time the cursor tried to refetch it. delete rule. The rule that tells DB2 what to do to a dependent row when a parent row is deleted. For each relationship, the rule might be CASCADE, RESTRICT, SET NULL, or NO ACTION. delete trigger. A trigger that is defined with the triggering SQL operation DELETE. delimited identifier. A sequence of characters that are enclosed within double quotation marks ("). The sequence must consist of a letter followed by zero or more characters, each of which is a letter, digit, or the underscore character (_). delimiter token. A string constant, a delimited identifier, an operator symbol, or any of the special characters that are shown in DB2 syntax diagrams. denormalization. A key step in the task of building a physical relational database design. Denormalization is the intentional duplication of columns in multiple tables, and the consequence is increased data redundancy. Denormalization is sometimes necessary to minimize performance problems. Contrast with normalization. dependent. An object (row, table, or table space) that has at least one parent. The object is also said to be a dependent (row, table, or table space) of its parent. See also parent row, parent table, parent table space.
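To illustrate the contrast between the created temporary table and declared temporary table entries, compare the following sketch (table and column names are hypothetical):

   CREATE GLOBAL TEMPORARY TABLE TEMPPROJ
     (PROJNO CHAR(6), PROJNAME VARCHAR(24));
   -- the definition is recorded in the DB2 catalog and can be shared

   DECLARE GLOBAL TEMPORARY TABLE TEMPEMP
     (EMPNO CHAR(6), SALARY DECIMAL(9,2));
   -- no catalog entry; the table exists only for this application process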
dependent row. A row that contains a foreign key that matches the value of a primary key in the parent row. dependent table. A table that is a dependent in at least one referential constraint. DES-based authenticator. An authenticator that is generated using the DES algorithm. descendent. An object that is a dependent of an object or is the dependent of a descendent of an object. descendent row. A row that is dependent on another row, or a row that is a descendent of a dependent row. descendent table. A table that is a dependent of another table, or a table that is a descendent of a dependent table. deterministic function. A user-defined function whose result is dependent on the values of the input arguments. That is, successive invocations with the same input values produce the same answer. Sometimes referred to as a not-variant function. Contrast this with a nondeterministic function (sometimes called a variant function), which might not always produce the same result for the same inputs. DFP. Data Facility Product (in z/OS). DFSMS. Data Facility Storage Management Subsystem (in z/OS). Also called Storage Management Subsystem (SMS).
distributed data. Data that resides on a DBMS other than the local system. distributed data facility (DDF). A set of DB2 components through which DB2 communicates with another relational database management system. Distributed Relational Database Architecture (DRDA). A connection protocol for distributed relational database processing that is used by IBM's relational database products. DRDA includes protocols for communication between an application and a remote relational database management system, and for communication between relational database management systems. See also DRDA access. DL/I. Data Language/I. DNS. Domain name server.
document access definition (DAD). Used to define the indexing scheme for an XML column or the mapping scheme of an XML collection. It can be used to enable an XML Extender column of an XML collection, which is XML formatted. domain. The set of valid values for an attribute. domain name. The name by which TCP/IP applications refer to a TCP/IP host within a TCP/IP network. domain name server (DNS). A special TCP/IP network server that manages a distributed directory that is used to map TCP/IP host names to IP addresses. double-byte character large object (DBCLOB). A sequence of bytes representing double-byte characters where the size of the values can be up to 2 GB. In general, DBCLOB values are used whenever a double-byte character string might exceed the limits of the VARGRAPHIC type. double-byte character set (DBCS). A set of characters, which are used by national languages such as Japanese and Chinese, that have more symbols than can be represented by a single byte. Each character is 2 bytes in length. Contrast with single-byte character set and multibyte character set. double-precision floating point number. A 64-bit approximate representation of a real number. downstream. The set of nodes in the syncpoint tree that is connected to the local DBMS as a participant in the execution of a two-phase commit.
DFSMSdss. The data set services (dss) component of DFSMS (in z/OS). DFSMShsm. The hierarchical storage manager (hsm) component of DFSMS (in z/OS).
dimension. A data category such as time, products, or markets. The elements of a dimension are referred to as members. Dimensions offer a very concise, intuitive way of organizing and selecting data for retrieval, exploration, and analysis. See also dimension table. dimension table. The representation of a dimension in a star schema. Each row in a dimension table represents all of the attributes for a particular member of the dimension. See also dimension, star schema, and star join. directory. The DB2 system database that contains internal objects such as database descriptors and skeleton cursor tables.
distinct predicate. In SQL, a predicate that ensures that two row values are not equal, and that both row values are not null.
distinct type. A user-defined data type that is internally represented as an existing type (its source type), but is considered to be a separate and incompatible type for semantic purposes.
DRDA. Distributed Relational Database Architecture. DRDA access. An open method of accessing distributed data that you can use to connect to another database server to execute packages that were previously bound at the server location. You use the SQL CONNECT statement or an SQL statement with a three-part name to identify the server. Contrast with private protocol access. DSN. (1) The default DB2 subsystem name. (2) The name of the TSO command processor of DB2. (3) The first three characters of DB2 module and macro names. duration. A number that represents an interval of time. See also date duration, labeled duration, and time duration.
enclave. In Language Environment, an independent collection of routines, one of which is designated as the main routine. An enclave is similar to a program or run unit. encoding scheme. A set of rules to represent character data (ASCII, EBCDIC, or Unicode). entity. A significant object of interest to an organization. enumerated list. A set of DB2 objects that are defined with a LISTDEF utility control statement in which pattern-matching characters (*, %, _ or ?) are not used. environment. A collection of names of logical and physical resources that are used to support the performance of a function. environment handle. In DB2 ODBC, the data object that contains global information regarding the state of the application. An environment handle must be allocated before a connection handle can be allocated. Only one environment handle can be allocated per application. EOM. End of memory. EOT. End of task. equijoin. A join operation in which the join-condition has the form expression = expression. error page range. A range of pages that are considered to be physically damaged. DB2 does not allow users to access any pages that fall within this range. escape character. The symbol that is used to enclose an SQL delimited identifier. The escape character is the double quotation mark ("), except in COBOL applications, where the user assigns the symbol, which is either a double quotation mark or an apostrophe ('). ESDS. Entry sequenced data set. ESMT. External subsystem module table (in IMS). EUR. IBM European Standards.
dynamic cursor. A named control structure that an application program uses to change the size of the result table and the order of its rows after the cursor is opened. Contrast with static cursor. dynamic dump. A dump that is issued during the execution of a program, usually under the control of that program. dynamic SQL. SQL statements that are prepared and executed within an application program while the program is executing. In dynamic SQL, the SQL source is contained in host language variables rather than being coded into the application program. The SQL statement can change several times during the application program's execution.
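A minimal embedded SQL sketch of the dynamic SQL entry above, assuming a hypothetical host variable STMTBUF that already contains the statement text built at run time:

   EXEC SQL PREPARE S1 FROM :STMTBUF;
   EXEC SQL EXECUTE S1;

For a statement that is executed only once, EXECUTE IMMEDIATE :STMTBUF combines the two steps.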
dynamic statement cache pool. A cache, located above the 2-GB storage line, that holds dynamic statements.
E
EA-enabled table space. A table space or index space that is enabled for extended addressability and that contains individual partitions (or pieces, for LOB table spaces) that are greater than 4 GB.
EB. See exabyte. EBCDIC. Extended binary coded decimal interchange code. An encoding scheme that is used to represent character data in the z/OS, VM, VSE, and iSeries environments. Contrast with ASCII and Unicode. e-business. The transformation of key business processes through the use of Internet technologies.
exabyte. For processor, real and virtual storage capacities and channel volume: 1 152 921 504 606 846 976 bytes or 2^60 bytes.
exception table. A table that holds rows that violate referential constraints or check constraints that the CHECK DATA utility finds. exclusive lock. A lock that prevents concurrently executing application processes from reading or changing data. Contrast with share lock. executable statement. An SQL statement that can be embedded in an application program, dynamically prepared and executed, or issued interactively.
EDM pool. A pool of main storage that is used for database descriptors, application plans, authorization cache, and application packages. EID. Event identifier. embedded SQL. SQL statements that are coded within an application program. See static SQL.
execution context. In SQLJ, a Java object that can be used to control the execution of SQL statements. exit routine. A user-written (or IBM-provided default) program that receives control from DB2 to perform specific functions. Exit routines run as extensions of DB2. expanding conversion. A process that occurs when the length of a converted string is greater than that of the source string. For example, this process occurs when an ASCII mixed-data string that contains DBCS characters is converted to an EBCDIC mixed-data string; the converted string is longer because of the addition of shift codes. explicit hierarchical locking. Locking that is used to make the parent-child relationship between resources known to IRLM. This kind of locking avoids global locking overhead when no inter-DB2 interest exists on a resource. exposed name. A correlation name or a table or view name for which a correlation name is not specified. Names that are specified in a FROM clause are exposed or non-exposed. expression. An operand or a collection of operators and operands that yields a single value. extended recovery facility (XRF). A facility that minimizes the effect of failures in z/OS, VTAM, the host processor, or high-availability applications during sessions between high-availability applications and designated terminals. This facility provides an alternative subsystem to take over sessions from the failing subsystem. Extensible Markup Language (XML). A standard metalanguage for defining markup languages that is a subset of Standardized General Markup Language (SGML). The less complex nature of XML makes it easier to write applications that handle document types, to author and manage structured information, and to transmit and share structured information across diverse computing environments. external function. A function for which the body is written in a programming language that takes scalar argument values and produces a scalar result for each invocation. Contrast with sourced function, built-in function, and SQL function. external procedure. A user-written application program that can be invoked with the SQL CALL statement, which is written in a programming language. Contrast with SQL procedure. external routine. A user-defined function or stored procedure that is based on code that is written in an external programming language.
external subsystem module table (ESMT). In IMS, the table that specifies which attachment modules must be loaded.
F
failed member state. A state of a member of a data sharing group. When a member fails, the XCF permanently records the failed member state. This state usually means that the member's task, address space, or z/OS system terminated before the state changed from active to quiesced. fallback. The process of returning to a previous release of DB2 after attempting or completing migration to a current release. false global lock contention. A contention indication from the coupling facility when multiple lock names are hashed to the same indicator and when no real contention exists. fan set. A direct physical access path to data, which is provided by an index, hash, or link; a fan set is the means by which the data manager supports the ordering of data. federated database. The combination of a DB2 Universal Database server (in Linux, UNIX, and Windows environments) and multiple data sources to which the server sends queries. In a federated database system, a client application can use a single SQL statement to join data that is distributed across multiple database management systems and can view the data as if it were local. fetch orientation. The specification of the desired placement of the cursor as part of a FETCH statement (for example, BEFORE, AFTER, NEXT, PRIOR, CURRENT, FIRST, LAST, ABSOLUTE, and RELATIVE). field procedure. A user-written exit routine that is designed to receive a single value and transform (encode or decode) it in any way the user can specify. filter factor. A number between zero and one that estimates the proportion of rows in a table for which a predicate is true. fixed-length string. A character or graphic string whose length is specified and cannot be changed. Contrast with varying-length string. FlashCopy. A function on the IBM Enterprise Storage Server that can create a point-in-time copy of data while an application is running. foreign key. A column or set of columns in a dependent table of a constraint relationship. The key must have the same number of columns, with the same descriptions, as the primary key of the parent table.
Each foreign key value must either match a parent key value in the related parent table or be null.
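The following hypothetical DDL sketch illustrates the foreign key and delete rule entries; EMP is a dependent of DEPT, and the delete rule is SET NULL:

   CREATE TABLE DEPT
     (DEPTNO   CHAR(3) NOT NULL PRIMARY KEY,
      DEPTNAME VARCHAR(36));

   CREATE TABLE EMP
     (EMPNO    CHAR(6) NOT NULL PRIMARY KEY,
      WORKDEPT CHAR(3),
      FOREIGN KEY (WORKDEPT) REFERENCES DEPT (DEPTNO)
        ON DELETE SET NULL);

In DB2 UDB for z/OS, a unique index must also be created on each primary key before the table definition is complete.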
forest. An ordered set of subtrees of XML nodes. forget. In a two-phase commit operation, (1) the vote that is sent to the prepare phase when the participant has not modified any data. The forget vote allows a participant to release locks and forget about the logical unit of work. This is also referred to as the read-only vote. (2) The response to the committed request in the second phase of the operation. forward log recovery. The third phase of restart processing during which DB2 processes the log in a forward direction to apply all REDO log records. free space. The total amount of unused space in a page; that is, the space that is not used to store records or control information is free space. full outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves the unmatched rows of both tables. See also join. fullselect. A subselect, a values-clause, or a number of both that are combined by set operators. Fullselect specifies a result table. If UNION is not used, the result of the fullselect is the result of the specified subselect.
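For example (hypothetical tables DEPT and EMP), the first statement below is a fullselect that combines two subselects with UNION, and the second shows a full outer join that preserves unmatched rows from both tables:

   SELECT DEPTNO FROM DEPT
   UNION
   SELECT WORKDEPT FROM EMP;

   SELECT D.DEPTNO, E.EMPNO
     FROM DEPT D FULL OUTER JOIN EMP E
       ON D.DEPTNO = E.WORKDEPT;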
list of the applicable schema names (called the SQL path) to make the selection. This process is sometimes called function selection. function selection. See function resolution. function signature. The logical concatenation of a fully qualified function name with the data types of all of its parameters.
G
GB. Gigabyte (1 073 741 824 bytes). GBP. Group buffer pool. GBP-dependent. The status of a page set or page set partition that is dependent on the group buffer pool. Either read/write interest is active among DB2 subsystems for this page set, or the page set has changed pages in the group buffer pool that have not yet been cast out to disk. generalized trace facility (GTF). A z/OS service program that records significant system events such as I/O interrupts, SVC interrupts, program interrupts, or external interrupts. generic resource name. A name that VTAM uses to represent several application programs that provide the same function in order to handle session distribution and balancing in a Sysplex environment. getpage. An operation in which DB2 accesses a data page. global lock. A lock that provides concurrency control within and among DB2 subsystems. The scope of the lock is across all DB2 subsystems of a data sharing group. global lock contention. Conflicts on locking requests between different DB2 members of a data sharing group when those members are trying to serialize shared resources. governor. See resource limit facility. graphic string. A sequence of DBCS characters. gross lock. The shared, update, or exclusive mode locks on a table, partition, or table space. group buffer pool (GBP). A coupling facility cache structure that is used by a data sharing group to cache data and to ensure that the data is consistent for all members. group buffer pool duplexing. The ability to write data to two instances of a group buffer pool structure: a primary group buffer pool and a secondary group buffer
fully escaped mapping. A mapping from an SQL identifier to an XML name when the SQL identifier is a column name. function. A mapping, which is embodied as a program (the function body) that is invocable by means of zero or more input values (arguments) to a single value (the result). See also aggregate function and scalar function. Functions can be user-defined, built-in, or generated by DB2. (See also built-in function, cast function, external function, sourced function, SQL function, and user-defined function.) function definer. The authorization ID of the owner of the schema of the function that is specified in the CREATE FUNCTION statement. function implementer. The authorization ID of the owner of the function program and function package. function package. A package that results from binding the DBRM for a function program. function package owner. The authorization ID of the user who binds the function program's DBRM into a function package. function resolution. The process, internal to the DBMS, by which a function invocation is bound to a particular function instance. This process uses the function name, the data types of the arguments, and a
pool. z/OS publications refer to these instances as the "old" (for primary) and "new" (for secondary) structures. group level. The release level of a data sharing group, which is established when the first member migrates to a new release. group name. The z/OS XCF identifier for a data sharing group. group restart. A restart of at least one member of a data sharing group after the loss of either locks or the shared communications area. GTF. Generalized trace facility.
host structure. In an application program, a structure that is referenced by embedded SQL statements. host variable. In an application program, an application variable that is referenced by embedded SQL statements.
host variable array. An array of elements, each of which corresponds to a value for a column. The dimension of the array determines the maximum number of rows for which the array can be used. HSM. Hierarchical storage manager. HTML. Hypertext Markup Language, a standard method for presenting Web data to users. HTTP. Hypertext Transfer Protocol, a communication protocol that the Web uses.
H
handle. In DB2 ODBC, a variable that refers to a data structure and associated resources. See also statement handle, connection handle, and environment handle. help panel. A screen of information that presents tutorial text to assist a user at the workstation or terminal. heuristic damage. The inconsistency in data between one or more participants that results when a heuristic decision to resolve an indoubt LUW at one or more participants differs from the decision that is recorded at the coordinator. heuristic decision. A decision that forces indoubt resolution at a participant by means other than automatic resynchronization between coordinator and participant.
I
ICF. Integrated catalog facility. IDCAMS. An IBM program that is used to process access method services commands. It can be invoked as a job or jobstep, from a TSO terminal, or from within a user's application program. IDCAMS LISTCAT. A facility for obtaining information that is contained in the access method services catalog. identify. A request that an attachment service program in an address space that is separate from DB2 issues through the z/OS subsystem interface to inform DB2 of its existence and to initiate the process of becoming connected to DB2. identity column. A column that provides a way for DB2 to automatically generate a numeric value for each row. The generated values are unique if cycling is not used. Identity columns are defined with the AS IDENTITY clause. Uniqueness of values can be ensured by defining a unique index that contains only the identity column. A table can have no more than one identity column. IFCID. Instrumentation facility component identifier. IFI. Instrumentation facility interface. IFI call. An invocation of the instrumentation facility interface (IFI) by means of one of its defined functions. IFP. IMS Fast Path. image copy. An exact reproduction of all or part of a table space. DB2 provides utility programs to make full image copies (to copy the entire table space) or incremental image copies (to copy only those pages that have been modified since the last image copy).
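As a sketch of the identity column entry above (table and column names are hypothetical), DB2 generates the ORDERNO value for each inserted row:

   CREATE TABLE ORDERS
     (ORDERNO INTEGER GENERATED ALWAYS AS IDENTITY,
      ITEM    VARCHAR(30) NOT NULL);

   INSERT INTO ORDERS (ITEM) VALUES ('WIDGET');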
hole. A row of the result table that cannot be accessed because of a delete or an update that has been performed on the row. See also delete hole and update hole. home address space. The area of storage that z/OS currently recognizes as dispatched. host. The set of programs and resources that are available on a given TCP/IP instance. host expression. A Java variable or expression that is referenced by SQL clauses in an SQLJ application program. host identifier. A name that is declared in the host program. host language. A programming language in which you can embed SQL statements. host program. An application program that is written in a host language and that contains embedded SQL statements.
implied forget. In the presumed-abort protocol, an implied response of forget to the second-phase committed request from the coordinator. The response is implied when the participant responds to any subsequent request from the coordinator. IMS. Information Management System. IMS attachment facility. A DB2 subcomponent that uses z/OS subsystem interface (SSI) protocols and cross-memory linkage to process requests from IMS to DB2 and to coordinate resource commitment. IMS DB. Information Management System Database. IMS TM. Information Management System Transaction Manager. in-abort. A status of a unit of recovery. If DB2 fails after a unit of recovery begins to be rolled back, but before the process is completed, DB2 continues to back out the changes during restart. in-commit. A status of a unit of recovery. If DB2 fails after beginning its phase 2 commit processing, it "knows," when restarted, that changes made to data are consistent. Such units of recovery are termed in-commit. independent. An object (row, table, or table space) that is neither a parent nor a dependent of another object. index. A set of pointers that are logically ordered by the values of a key. Indexes can provide faster access to data and can enforce uniqueness on the rows in a table.
to be committed or rolled back. At emergency restart, if DB2 lacks the information it needs to make this decision, the status of the unit of recovery is indoubt until DB2 obtains this information from the coordinator. More than one unit of recovery can be indoubt at restart. indoubt resolution. The process of resolving the status of an indoubt logical unit of work to either the committed or the rollback state. inflight. A status of a unit of recovery. If DB2 fails before its unit of recovery completes phase 1 of the commit process, it merely backs out the updates of its unit of recovery at restart. These units of recovery are termed inflight. inheritance. The passing downstream of class resources or attributes from a parent class in the class hierarchy to a child class. initialization file. For DB2 ODBC applications, a file containing values that can be set to adjust the performance of the database manager. inline copy. A copy that is produced by the LOAD or REORG utility. The data set that the inline copy produces is logically equivalent to a full image copy that is produced by running the COPY utility with read-only access (SHRLEVEL REFERENCE). inner join. The result of a join operation that includes only the matched rows of both tables that are being joined. See also join. inoperative package. A package that cannot be used because one or more user-defined functions or procedures that the package depends on were dropped. Such a package must be explicitly rebound. Contrast with invalid package.
index-controlled partitioning. A type of partitioning in which partition boundaries for a partitioned table are controlled by values that are specified on the CREATE INDEX statement. Partition limits are saved in the LIMITKEY column of the SYSIBM.SYSINDEXPART catalog table. index key. The set of columns in a table that is used to determine the order of index entries. index partition. A VSAM data set that is contained within a partitioning index space. index space. A page set that is used to store the entries of one index. indicator column. A 4-byte value that is stored in a base table in place of a LOB column. indicator variable. A variable that is used to represent the null value in an application program. If the value for the selected column is null, a negative value is placed in the indicator variable. indoubt. A status of a unit of recovery. If DB2 fails after it has finished its phase 1 commit processing and before it has started phase 2, only the commit coordinator knows if an individual unit of recovery is
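The indicator variable entry above can be illustrated by this embedded SQL fragment (host variable declarations omitted; names are hypothetical). If PHONENO is null for the selected row, DB2 returns no value in :PHONENO and sets :PHONEIND to a negative value:

   EXEC SQL SELECT PHONENO
     INTO :PHONENO :PHONEIND
     FROM EMP
     WHERE EMPNO = :EMPNO;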
insensitive cursor. A cursor that is not sensitive to inserts, updates, or deletes that are made to the underlying rows of a result table after the result table has been materialized. insert trigger. A trigger that is defined with the triggering SQL operation INSERT. install. The process of preparing a DB2 subsystem to operate as a z/OS subsystem. installation verification scenario. A sequence of operations that exercises the main DB2 functions and tests whether DB2 was correctly installed. instrumentation facility component identifier (IFCID). A value that names and identifies a trace record of an event that can be traced. As a parameter on the START TRACE and MODIFY TRACE commands, it specifies that the corresponding event is to be traced.
instrumentation facility interface (IFI). A programming interface that enables programs to obtain online trace data about DB2, to submit DB2 commands, and to pass data to DB2. Interactive System Productivity Facility (ISPF). An IBM licensed program that provides interactive dialog services in a z/OS environment. inter-DB2 R/W interest. A property of data in a table space, index, or partition that has been opened by more than one member of a data sharing group and that has been opened for writing by at least one of those members. intermediate database server. The target of a request from a local application or a remote application requester that is forwarded to another database server. In the DB2 environment, the remote request is forwarded transparently to another database server if the object that is referenced by a three-part name does not reference the local location. internationalization. The support for an encoding scheme that is able to represent the code points of characters from many different geographies and languages. To support all geographies, the Unicode standard requires more than 1 byte to represent a single character. See also Unicode. internal resource lock manager (IRLM). A z/OS subsystem that DB2 uses to control communication and database locking.
ISPF. Interactive System Productivity Facility. ISPF/PDF. Interactive System Productivity Facility/Program Development Facility. iterator. In SQLJ, an object that contains the result set of a query. An iterator is equivalent to a cursor in other host languages. iterator declaration clause. In SQLJ, a statement that generates an iterator declaration class. An iterator is an object of an iterator declaration class.
J
Japanese Industrial Standard. An encoding scheme that is used to process Japanese characters. JAR. Java Archive.
Java Archive (JAR). A file format that is used for aggregating many files into a single file. JCL. Job control language. JDBC. A Sun Microsystems database application programming interface (API) for Java that allows programs to access database management systems by using callable SQL. JDBC does not require the use of an SQL preprocessor. In addition, JDBC provides an architecture that lets users add modules called database drivers, which link the application to their choice of database management systems at run time. JES. Job Entry Subsystem. JIS. Japanese Industrial Standard. job control language (JCL). A control language that is used to identify a job to an operating system and to describe the job's requirements. Job Entry Subsystem (JES). An IBM licensed program that receives jobs into the system and processes all output data that is produced by the jobs. join. A relational operation that allows retrieval of data from two or more tables based on matching column values. See also equijoin, full outer join, inner join, left outer join, outer join, and right outer join.
International Organization for Standardization. An international body charged with creating standards to facilitate the exchange of goods and services as well as cooperation in intellectual, scientific, technological, and economic activity. invalid package. A package that depends on an object (other than a user-defined function) that is dropped. Such a package is implicitly rebound on invocation. Contrast with inoperative package. invariant character set. (1) A character set, such as the syntactic character set, whose code point assignments do not change from code page to code page. (2) A minimum set of characters that is available as part of all character sets. IP address. A 4-byte value that uniquely identifies a TCP/IP host. IRLM. Internal resource lock manager. ISO. International Organization for Standardization. isolation level. The degree to which a unit of work is isolated from the updating operations of other units of work. See also cursor stability, read stability, repeatable read, and uncommitted read.
K
KB. Kilobyte (1024 bytes). Kerberos. A network authentication protocol that is designed to provide strong authentication for client/server applications by using secret-key cryptography. Kerberos ticket. A transparent application mechanism that transmits the identity of an initiating principal to its target. A simple ticket contains the principal's
identity, a session key, a timestamp, and other information, which is sealed using the target's secret key. key. A column or an ordered collection of columns that is identified in the description of a table, index, or referential constraint. The same column can be part of more than one key. key-sequenced data set (KSDS). A VSAM file or data set whose records are loaded in key sequence and controlled by an index. keyword. In SQL, a name that identifies an option that is used in an SQL statement. KSDS. Key-sequenced data set.
modules by resolving cross references among the modules and, if necessary, adjusting addresses. link-edit. The action of creating a loadable computer program using a linkage editor. list. A type of object, which DB2 utilities can process, that identifies multiple table spaces, multiple index spaces, or both. A list is defined with the LISTDEF utility control statement. list structure. A coupling facility structure that lets data be shared and manipulated as elements of a queue. LLE. Load list element. L-lock. Logical lock.
L
labeled duration. A number that represents a duration of years, months, days, hours, minutes, seconds, or microseconds. large object (LOB). A sequence of bytes representing bit data, single-byte characters, double-byte characters, or a mixture of single- and double-byte characters. A LOB can be up to 2 GB minus 1 byte in length. See also BLOB, CLOB, and DBCLOB. last agent optimization. An optimized commit flow for either presumed-nothing or presumed-abort protocols in which the last agent, or final participant, becomes the commit coordinator. This flow saves at least one message. latch. A DB2 internal mechanism for controlling concurrent events or the use of system resources. LCID. Log control interval definition. LDS. Linear data set. leaf page. A page that contains pairs of keys and RIDs and that points to actual data. Contrast with nonleaf page. left outer join. The result of a join operation that includes the matched rows of both tables that are being joined, and that preserves the unmatched rows of the first table. See also join. limit key. The highest value of the index key for a partition. linear data set (LDS). A VSAM data set that contains data but no control information. A linear data set can be accessed as a byte-addressable string in virtual storage. linkage editor. A computer program for creating load modules from one or more object modules or load
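For example, the labeled durations 2 YEARS and 30 DAYS appear in the following hypothetical query:

   SELECT HIREDATE + 2 YEARS, CURRENT DATE - 30 DAYS
     FROM EMP
     WHERE EMPNO = '000010';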
load list element. A z/OS control block that controls the loading and deleting of a particular load module based on entry point names.
load module. A program unit that is suitable for loading into main storage for execution. The output of a linkage editor. LOB. Large object. LOB locator. A mechanism that allows an application program to manipulate a large object value in the database system. A LOB locator is a fullword integer value that represents a single LOB value. An application program retrieves a LOB locator into a host variable and can then apply SQL operations to the associated LOB value using the locator. LOB lock. A lock on a LOB value. LOB table space. A table space in an auxiliary table that contains all the data for a particular LOB column in the related base table. local. A way of referring to any object that the local DB2 subsystem maintains. A local table, for example, is a table that is maintained by the local DB2 subsystem. Contrast with remote. locale. The definition of a subset of a user's environment that combines a CCSID and characters that are defined for a specific language and country. local lock. A lock that provides intra-DB2 concurrency control, but not inter-DB2 concurrency control; that is, its scope is a single DB2. local subsystem. The unique relational DBMS to which the user or application program is directly connected (in the case of DB2, by one of the DB2 attachment facilities).
location. The unique name of a database server. An application uses the location name to access a DB2
database server. A database alias can be used to override the location name when accessing a remote server. location alias. Another name by which a database server identifies itself in the network. Applications can use this name to access a DB2 database server.
lock. A means of controlling concurrent events or access to data. DB2 locking is performed by the IRLM. lock duration. The interval over which a DB2 lock is held. lock escalation. The promotion of a lock from a row, page, or LOB lock to a table space lock because the number of page locks that are concurrently held on a given resource exceeds a preset limit. locking. The process by which the integrity of data is ensured. Locking prevents concurrent users from accessing inconsistent data. lock mode. A representation for the type of access that concurrently running programs can have to a resource that a DB2 lock is holding. lock object. The resource that is controlled by a DB2 lock. lock promotion. The process of changing the size or mode of a DB2 lock to a higher, more restrictive level. lock size. The amount of data that is controlled by a DB2 lock on table data; the value can be a row, a page, a LOB, a partition, a table, or a table space. lock structure. A coupling facility data structure that is composed of a series of lock entries to support shared and exclusive locking for logical resources. log. A collection of records that describe the events that occur during DB2 execution and that indicate their sequence. The information thus recorded is used for recovery in the event of a failure during DB2 execution.
logical lock (L-lock). The lock type that transactions use to control intra- and inter-DB2 data concurrency between transactions. Contrast with physical lock (P-lock). logically complete. A state in which the concurrent copy process is finished with the initialization of the target objects that are being copied. The target objects are available for update. logical page list (LPL). A list of pages that are in error and that cannot be referenced by applications until the pages are recovered. The page is in logical error because the actual media (coupling facility or disk) might not contain any errors. Usually a connection to the media has been lost. logical partition. A set of key or RID pairs in a nonpartitioning index that are associated with a particular partition. logical recovery pending (LRECP). The state in which the data and the index keys that reference the data are inconsistent. logical unit (LU). An access point through which an application program accesses the SNA network in order to communicate with another application program. logical unit of work (LUW). The processing that a program performs between synchronization points. logical unit of work identifier (LUWID). A name that uniquely identifies a thread within a network. This name consists of a fully-qualified LU network name, an LUW instance number, and an LUW sequence number. log initialization. The first phase of restart processing during which DB2 attempts to locate the current end of the log. log record header (LRH). A prefix, in every logical record, that contains control information. log record sequence number (LRSN). A unique identifier for a log record that is associated with a data sharing member. DB2 uses the LRSN for recovery in the data sharing environment. log truncation. A process by which an explicit starting RBA is established. This RBA is the point at which the next byte of log data is to be written. LPL. Logical page list. LRECP. Logical recovery pending. LRH. Log record header. LRSN. Log record sequence number. LU. Logical unit.
log control interval definition. A suffix of the physical log record that tells how record segments are placed in the physical control interval.
logical claim. A claim on a logical partition of a nonpartitioning index. logical data modeling. The process of documenting the comprehensive business information requirements in an accurate and consistent format. Data modeling is the first task of designing a database. logical drain. A drain on a logical partition of a nonpartitioning index. logical index partition. The set of all keys that reference the same data partition.
LU name. Logical unit name, which is the name by which VTAM refers to a node in a network. Contrast with location name. LUW. Logical unit of work. LUWID. Logical unit of work identifier.
MODEENT. A VTAM macro instruction that associates a logon mode name with a set of parameters representing session protocols. A set of MODEENT macro instructions defines a logon mode table. modeling database. A DB2 database that you create on your workstation that you use to model a DB2 UDB for z/OS subsystem, which can then be evaluated by the Index Advisor. mode name. A VTAM name for the collection of physical and logical characteristics and attributes of a session. modify locks. An L-lock or P-lock with a MODIFY attribute. A list of these active locks is kept at all times in the coupling facility lock structure. If the requesting DB2 subsystem fails, that DB2 subsystem's modify locks are converted to retained locks. MPP. Message processing program (in IMS). MTO. Master terminal operator. multibyte character set (MBCS). A character set that represents single characters with more than a single byte. Contrast with single-byte character set and double-byte character set. See also Unicode. multidimensional analysis. The process of assessing and evaluating an enterprise on more than one level. Multiple Virtual Storage. An element of the z/OS operating system. This element is also called the Base Control Program (BCP). multisite update. Distributed relational database processing in which data is updated in more than one location within a single unit of work. multithreading. Multiple TCBs that are executing one copy of DB2 ODBC code concurrently (sharing a processor) or in parallel (on separate central processors). must-complete. A state during DB2 processing in which the entire operation must be completed to maintain data integrity. mutex. Pthread mutual exclusion; a lock. A Pthread mutex variable is used as a locking mechanism to allow serialization of critical sections of code by temporarily blocking the execution of all but one thread.
M
mapping table. A table that the REORG utility uses to map the associations of the RIDs of data records in the original copy and in the shadow copy. This table is created by the user. mass delete. The deletion of all rows of a table. master terminal. The IMS logical terminal that has complete control of IMS resources during online operations. master terminal operator (MTO). See master terminal. materialize. (1) The process of putting rows from a view or nested table expression into a work file for additional processing by a query. (2) The placement of a LOB value into contiguous storage. Because LOB values can be very large, DB2 avoids materializing LOB data until doing so becomes absolutely necessary.
materialized query table. A table that is used to contain information that is derived and can be summarized from one or more source tables. MB. Megabyte (1 048 576 bytes). MBCS. Multibyte character set. UTF-8 is an example of an MBCS. Characters in UTF-8 can range from 1 to 4 bytes in DB2. member name. The z/OS XCF identifier for a particular DB2 subsystem in a data sharing group. menu. A displayed list of available functions for selection by the operator. A menu is sometimes called a menu panel.
metalanguage. A language that is used to create other specialized languages. migration. The process of converting a subsystem with a previous release of DB2 to an updated or current release. In this process, you can acquire the functions of the updated or current release without losing the data that you created on the previous release. mixed data string. A character string that can contain both single-byte and double-byte characters. MLPA. Modified link pack area.
N
negotiable lock. A lock whose mode can be downgraded, by agreement among contending users, to be compatible to all. A physical lock is an example of a negotiable lock.
nested table expression. A fullselect in a FROM clause (surrounded by parentheses). network identifier (NID). The network ID that is assigned by IMS or CICS, or if the connection type is RRSAF, the RRS unit of recovery ID (URID). NID. Network identifier. nonleaf page. A page that contains keys and page numbers of other pages in the index (either leaf or nonleaf pages). Nonleaf pages never point to actual data.
For Unicode UCS-2 (wide) strings, the null terminator is a double-byte value (X'0000').
O
OASN (origin application schedule number). In IMS, a 4-byte number that is assigned sequentially to each IMS schedule since the last cold start of IMS. The OASN is used as an identifier for a unit of work. In an 8-byte format, the first 4 bytes contain the schedule number and the last 4 bytes contain the number of IMS sync points (commit points) during the current schedule. The OASN is part of the NID for an IMS connection. ODBC. Open Database Connectivity. ODBC driver. A dynamically-linked library (DLL) that implements ODBC function calls and interacts with a data source. OBID. Data object identifier.
nonpartitioned index. An index that is not physically partitioned. Both partitioning indexes and secondary indexes can be nonpartitioned.
nonscrollable cursor. A cursor that can be moved only in a forward direction. Nonscrollable cursors are sometimes called forward-only cursors or serial cursors. normalization. A key step in the task of building a logical relational database design. Normalization helps you avoid redundancies and inconsistencies in your data. An entity is normalized if it meets a set of constraints for a particular normal form (first normal form, second normal form, and so on). Contrast with denormalization. nondeterministic function. A user-defined function whose result is not solely dependent on the values of the input arguments. That is, successive invocations with the same argument values can produce a different answer. This type of function is sometimes called a variant function. Contrast this with a deterministic function (sometimes called a not-variant function), which always produces the same result for the same inputs. not-variant function. See deterministic function.
Open Database Connectivity (ODBC). A Microsoft database application programming interface (API) for C that allows access to database management systems by using callable SQL. ODBC does not require the use of an SQL preprocessor. In addition, ODBC provides an architecture that lets users add modules called database drivers, which link the application to their choice of database management systems at run time. This means that applications no longer need to be directly linked to the modules of all the database management systems that are supported. ordinary identifier. An uppercase letter followed by zero or more characters, each of which is an uppercase letter, a digit, or the underscore character. An ordinary identifier must not be a reserved word. ordinary token. A numeric constant, an ordinary identifier, a host identifier, or a keyword. originating task. In a parallel group, the primary agent that receives data from other execution units (referred to as parallel tasks) that are executing portions of the query in parallel. OS/390. Operating System/390. outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves some or all of the unmatched rows of the tables that are being joined. See also join. overloaded function. A function name for which multiple function instances exist.
P
package. An object containing a set of SQL statements that have been statically bound and that is available for processing. A package is sometimes also called an application package. package list. An ordered list of package names that may be used to extend an application plan. package name. The name of an object that is created by a BIND PACKAGE or REBIND PACKAGE command. The object is a bound version of a database request module (DBRM). The name consists of a location name, a collection ID, a package ID, and a version ID. page. A unit of storage within a table space (4 KB, 8 KB, 16 KB, or 32 KB) or index space (4 KB). In a table space, a page contains one or more rows of a table. In a LOB table space, a LOB value can span more than one page, but no more than one LOB value is stored on a page. page set. Another way to refer to a table space or index space. Each page set consists of a collection of VSAM data sets. page set recovery pending (PSRCP). A restrictive state of an index space. In this case, the entire page set must be recovered. Recovery of a logical part is prohibited. panel. A predefined display image that defines the locations and characteristics of display fields on a display surface (for example, a menu panel). parallel complex. A cluster of machines that work together to handle multiple transactions and applications. parallel group. A set of consecutive operations that execute in parallel and that have the same number of parallel tasks. parallel I/O processing. A form of I/O processing in which DB2 initiates multiple concurrent requests for a single user query and performs I/O processing concurrently (in parallel) on multiple data partitions. parallelism assistant. In Sysplex query parallelism, a DB2 subsystem that helps to process parts of a parallel query that originates on another DB2 subsystem in the data sharing group. parallelism coordinator. In Sysplex query parallelism, the DB2 subsystem from which the parallel query originates. Parallel Sysplex. A set of z/OS systems that communicate and cooperate with each other through certain multisystem hardware components and software services to process customer workloads.
parallel task. The execution unit that is dynamically created to process a query in parallel. A parallel task is implemented by a z/OS service request block. parameter marker. A question mark (?) that appears in a statement string of a dynamic SQL statement. The question mark can appear where a host variable could appear if the statement string were a static SQL statement.
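As a sketch of the parameter marker entry above (the statement string, statement name, table, and host variable names are all hypothetical), a dynamic SQL statement might be prepared and executed like this:

  -- STMTSTR is a host variable that contains the statement string:
  --   UPDATE EMP SET SALARY = ? WHERE EMPNO = ?
  EXEC SQL PREPARE STMT FROM :STMTSTR;
  EXEC SQL EXECUTE STMT USING :NEWSAL, :EMPID;

The two question marks are parameter markers; their values are supplied by the USING clause at execution time.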
parameter-name. An SQL identifier that designates a parameter in an SQL procedure or an SQL function.
parent key. A primary key or unique key in the parent table of a referential constraint. The values of a parent key determine the valid values of the foreign key in the referential constraint.
parent lock. For explicit hierarchical locking, a lock that is held on a resource that might have child locks that are lower in the hierarchy. A parent lock is usually the table space lock or the partition intent lock. See also child lock. parent row. A row whose primary key value is the foreign key value of a dependent row. parent table. A table whose primary key is referenced by the foreign key of a dependent table. parent table space. A table space that contains a parent table. A table space containing a dependent of that table is a dependent table space. participant. An entity other than the commit coordinator that takes part in the commit process. The term participant is synonymous with agent in SNA. partition. A portion of a page set. Each partition corresponds to a single, independently extendable data set. Partitions can be extended to a maximum size of 1, 2, or 4 GB, depending on the number of partitions in the partitioned page set. All partitions of a given page set have the same maximum size. partitioned data set (PDS). A data set in disk storage that is divided into partitions, which are called members. Each partition can contain a program, part of a program, or data. The term partitioned data set is synonymous with program library.
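To illustrate the parent key, parent table, and foreign key relationships described above (a minimal sketch; the DEPT and EMP tables and their columns are hypothetical):

  CREATE TABLE DEPT
    (DEPTNO   CHAR(3) NOT NULL,
     DEPTNAME VARCHAR(36),
     PRIMARY KEY (DEPTNO));                              -- parent key

  CREATE TABLE EMP
    (EMPNO    CHAR(6) NOT NULL,
     WORKDEPT CHAR(3),
     PRIMARY KEY (EMPNO),
     FOREIGN KEY (WORKDEPT) REFERENCES DEPT (DEPTNO));   -- EMP is the dependent table; DEPT is the parent table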
partitioned index. An index that is physically partitioned. Both partitioning indexes and secondary indexes can be partitioned.
partitioned page set. A partitioned table space or an index space. Header pages, space map pages, data pages, and index pages reference data only within the scope of the partition. partitioned table space. A table space that is subdivided into parts (based on index key range), each of which can be processed independently by utilities.
partitioning index. An index in which the leftmost columns are the partitioning columns of the table. The index can be partitioned or nonpartitioned.
partition pruning. The removal from consideration of inapplicable partitions through setting up predicates in a query on a partitioned table to access only certain partitions to satisfy the query. partner logical unit. An access point in the SNA network that is connected to the local DB2 subsystem by way of a VTAM conversation. path. See SQL path. PCT. Program control table (in CICS). PDS. Partitioned data set. piece. A data set of a nonpartitioned page set. physical claim. A claim on an entire nonpartitioning index. physical consistency. The state of a page that is not in a partially changed state. physical drain. A drain on an entire nonpartitioning index. physical lock (P-lock). A type of lock that DB2 acquires to provide consistency of data that is cached in different DB2 subsystems. Physical locks are used only in data sharing environments. Contrast with logical lock (L-lock). physical lock contention. Conflicting states of the requesters for a physical lock. See also negotiable lock. physically complete. The state in which the concurrent copy process is completed and the output data set has been created. plan. See application plan. plan allocation. The process of allocating DB2 resources to a plan in preparation for execution. plan member. The bound copy of a DBRM that is identified in the member clause. plan name. The name of an application plan. plan segmentation. The dividing of each plan into sections. When a section is needed, it is independently brought into the EDM pool. P-lock. Physical lock. PLT. Program list table (in CICS). point of consistency. A time when all recoverable data that an application accesses is consistent with other data. The term point of consistency is synonymous with sync point or commit point.
policy. See CFRM policy. Portable Operating System Interface (POSIX). The IEEE operating system interface standard, which defines the Pthread standard of threading. See also Pthread. POSIX. Portable Operating System Interface. postponed abort UR. A unit of recovery that was inflight or in-abort, was interrupted by system failure or cancellation, and did not complete backout during restart. PPT. (1) Processing program table (in CICS). (2) Program properties table (in z/OS). precision. In SQL, the total number of digits in a decimal number (called the size in the C language). In the C language, the number of digits to the right of the decimal point (called the scale in SQL). The DB2 library uses the SQL terms. precompilation. A processing of application programs containing SQL statements that takes place before compilation. SQL statements are replaced with statements that are recognized by the host language compiler. Output from this precompilation includes source code that can be submitted to the compiler and the database request module (DBRM) that is input to the bind process. predicate. An element of a search condition that expresses or implies a comparison operation. prefix. A code at the beginning of a message or record. preformat. The process of preparing a VSAM ESDS for DB2 use, by writing specific data patterns. prepare. The first phase of a two-phase commit process in which all participants are requested to prepare for commit. prepared SQL statement. A named object that is the executable form of an SQL statement that has been processed by the PREPARE statement. presumed-abort. An optimization of the presumed-nothing two-phase commit protocol that reduces the number of recovery log records, the duration of state maintenance, and the number of messages between coordinator and participant. The optimization also modifies the indoubt resolution responsibility. presumed-nothing. The standard two-phase commit protocol that defines coordinator and participant responsibilities, relative to logical unit of work states, recovery logging, and indoubt resolution. primary authorization ID. The authorization ID that is used to identify the application process to DB2.
primary group buffer pool. For a duplexed group buffer pool, the structure that is used to maintain the coherency of cached data. This structure is used for page registration and cross-invalidation. The z/OS equivalent is old structure. Compare with secondary group buffer pool. primary index. An index that enforces the uniqueness of a primary key. primary key. In a relational database, a unique, nonnull key that is part of the definition of a table. A table cannot be defined as a parent unless it has a unique key or primary key. principal. An entity that can communicate securely with another entity. In Kerberos, principals are represented as entries in the Kerberos registry database and include users, servers, computers, and others. principal name. The name by which a principal is known to the DCE security services. private connection. A communications connection that is specific to DB2. private protocol access. A method of accessing distributed data by which you can direct a query to another DB2 system. Contrast with DRDA access. private protocol connection. A DB2 private connection of the application process. See also private connection. privilege. The capability of performing a specific function, sometimes on a specific object. The types of privileges are: explicit privileges, which have names and are held as the result of SQL GRANT and REVOKE statements. For example, the SELECT privilege. implicit privileges, which accompany the ownership of an object, such as the privilege to drop a synonym that one owns, or the holding of an authority, such as the privilege of SYSADM authority to terminate any utility job. privilege set. For the installation SYSADM ID, the set of all possible privileges. For any other authorization ID, the set of all privileges that are recorded for that ID in the DB2 catalog. process. In DB2, the unit to which DB2 allocates resources and locks. Sometimes called an application process, a process involves the execution of one or more programs. The execution of an SQL statement is always associated with some process. The means of initiating and terminating a process are dependent on the environment. program. A single, compilable collection of executable statements in a programming language. program temporary fix (PTF). A solution or bypass of a problem that is diagnosed as a result of a defect in a
current unaltered release of a licensed program. An authorized program analysis report (APAR) fix is corrective service for an existing problem. A PTF is preventive service for problems that might be encountered by other users of the product. A PTF is temporary, because a permanent fix is usually not incorporated into the product until its next release. protected conversation. A VTAM conversation that supports two-phase commit flows. PSRCP. Page set recovery pending. PTF. Program temporary fix. Pthread. The POSIX threading standard model for splitting an application into subtasks. The Pthread standard includes functions for creating threads, terminating threads, synchronizing threads through locking, and other thread control facilities.
Q
QMF. Query Management Facility. QSAM. Queued sequential access method. query. A component of certain SQL statements that specifies a result table. query block. The part of a query that is represented by one of the FROM clauses. Each FROM clause can have multiple query blocks, depending on DB2's internal processing of the query. query CP parallelism. Parallel execution of a single query, which is accomplished by using multiple tasks. See also Sysplex query parallelism. query I/O parallelism. Parallel access of data, which is accomplished by triggering multiple I/O requests within a single query. queued sequential access method (QSAM). An extended version of the basic sequential access method (BSAM). When this method is used, a queue of data blocks is formed. Input data blocks await processing, and output data blocks await transfer to auxiliary storage or to an output device. quiesce point. A point at which data is consistent as a result of running the DB2 QUIESCE utility. quiesced member state. A state of a member of a data sharing group. An active member becomes quiesced when a STOP DB2 command takes effect without a failure. If the member's task, address space, or z/OS system fails before the command takes effect, the member state is failed.
R
RACF. Resource Access Control Facility, which is a component of the z/OS Security Server.
RAMAC . IBM family of enterprise disk storage system products. RBA. Relative byte address. RCT. Resource control table (in CICS attachment facility). RDB. Relational database. RDBMS. Relational database management system. RDBNAM. Relational database name. RDF. Record definition field. read stability (RS). An isolation level that is similar to repeatable read but does not completely isolate an application process from all other concurrently executing application processes. Under level RS, an application that issues the same query more than once might read additional rows that were inserted and committed by a concurrently executing application process. rebind. The creation of a new application plan for an application program that has been bound previously. If, for example, you have added an index for a table that your application accesses, you must rebind the application in order to take advantage of that index. rebuild. The process of reallocating a coupling facility structure. For the shared communications area (SCA) and lock structure, the structure is repopulated; for the group buffer pool, changed pages are usually cast out to disk, and the new structure is populated only with changed pages that were not successfully cast out. RECFM. Record format. record. The storage representation of a row or other data. record identifier (RID). A unique identifier that DB2 uses internally to identify a row of data in a table. Compare with row ID.
record identifier (RID) pool. An area of main storage that is used for sorting record identifiers during list-prefetch processing.

record length. The sum of the length of all the columns in a table, which is the length of the data as it is physically stored in the database. Records can be fixed length or varying length, depending on how the columns are defined. If all columns are fixed-length columns, the record is a fixed-length record. If one or more columns are varying-length columns, the record is a varying-length record.

Recoverable Resource Manager Services attachment facility (RRSAF). A DB2 subcomponent that uses Resource Recovery Services to coordinate resource commitment between DB2 and all other resource managers that also use RRS in a z/OS system. recovery. The process of rebuilding databases after a system failure. recovery log. A collection of records that describes the events that occur during DB2 execution and indicates their sequence. The recorded information is used for recovery in the event of a failure during DB2 execution. recovery manager. (1) A subcomponent that supplies coordination services that control the interaction of DB2 resource managers during commit, abort, checkpoint, and restart processes. The recovery manager also supports the recovery mechanisms of other subsystems (for example, IMS) by acting as a participant in the other subsystem's process for protecting data that has reached a point of consistency. (2) A coordinator or a participant (or both), in the execution of a two-phase commit, that can access a recovery log that maintains the state of the logical unit of work and names the immediate upstream coordinator and downstream participants. recovery pending (RECP). A condition that prevents SQL access to a table space that needs to be recovered. recovery token. An identifier for an element that is used in recovery (for example, NID or URID). RECP. Recovery pending. redo. A state of a unit of recovery that indicates that changes are to be reapplied to the disk media to ensure data integrity. reentrant. Executable code that can reside in storage as one shared copy for all threads. Reentrant code is not self-modifying and provides separate storage areas for each thread. Reentrancy is a compiler and operating system concept, and reentrancy alone is not enough to guarantee logically consistent results when multithreading. See also threadsafe. referential constraint. The requirement that nonnull values of a designated foreign key are valid only if they equal values of the primary key of a designated table. referential integrity. The state of a database in which all values of all foreign keys are valid. Maintaining referential integrity requires the enforcement of referential constraints on all operations that change the data in a table on which the referential constraints are defined.
referential structure. A set of tables and relationships that includes at least one table and, for every table in the set, all the relationships in which that table participates and all the tables to which it is related.
refresh age. The time duration between the current time and the time during which a materialized query table was last refreshed. registry. See registry database. registry database. A database of security information about principals, groups, organizations, accounts, and security policies. relational database (RDB). A database that can be perceived as a set of tables and manipulated in accordance with the relational model of data. relational database management system (RDBMS). A collection of hardware and software that organizes and provides access to a relational database. relational database name (RDBNAM). A unique identifier for an RDBMS within a network. In DB2, this must be the value in the LOCATION column of table SYSIBM.LOCATIONS in the CDB. DB2 publications refer to the name of another RDBMS as a LOCATION value or a location name. relationship. A defined connection between the rows of a table or the rows of two tables. A relationship is the internal representation of a referential constraint. relative byte address (RBA). The offset of a data record or control interval from the beginning of the storage space that is allocated to the data set or file to which it belongs. remigration. The process of returning to a current release of DB2 following a fallback to a previous release. This procedure constitutes another migration process. remote. Any object that is maintained by a remote DB2 subsystem (that is, by a DB2 subsystem other than the local one). A remote view, for example, is a view that is maintained by a remote DB2 subsystem. Contrast with local. remote attach request. A request by a remote location to attach to the local DB2 subsystem. Specifically, the request that is sent is an SNA Function Management Header 5. remote subsystem. Any relational DBMS, except the local subsystem, with which the user or application can communicate. The subsystem need not be remote in any physical sense, and might even operate on the same processor under the same z/OS system. reoptimization. The DB2 process of reconsidering the access path of an SQL statement at run time; during reoptimization, DB2 uses the values of host variables, parameter markers, or special registers.

REORG pending (REORP). A condition that restricts SQL access and most utility access to an object that must be reorganized. REORP. REORG pending. repeatable read (RR). The isolation level that provides maximum protection from other executing application programs. When an application program executes with repeatable read protection, rows that the program references cannot be changed by other programs until the program reaches a commit point. repeating group. A situation in which an entity includes multiple attributes that are inherently the same. The presence of a repeating group violates the requirement of first normal form. In an entity that satisfies the requirement of first normal form, each attribute is independent and unique in its meaning and its name. See also normalization. replay detection mechanism. A method that allows a principal to detect whether a request is a valid request from a source that can be trusted or whether an untrustworthy entity has captured information from a previous exchange and is replaying the information exchange to gain access to the principal. request commit. The vote that is submitted to the prepare phase if the participant has modified data and is prepared to commit or roll back. requester. The source of a request to access data at a remote server. In the DB2 environment, the requester function is provided by the distributed data facility. resource. The object of a lock or claim, which could be a table space, an index space, a data partition, an index partition, or a logical partition. resource allocation. The part of plan allocation that deals specifically with the database resources. resource control table (RCT). A construct of the CICS attachment facility, created by site-provided macro parameters, that defines authorization and access attributes for transactions or transaction groups. resource definition online. A CICS feature that you use to define CICS resources online without assembling tables. resource limit facility (RLF). A portion of DB2 code that prevents dynamic manipulative SQL statements from exceeding specified time limits. The resource limit facility is sometimes called the governor. resource limit specification table (RLST). A site-defined table that specifies the limits to be enforced by the resource limit facility.
resource manager. (1) A function that is responsible for managing a particular resource and that guarantees the consistency of all updates made to recoverable resources within a logical unit of work. The resource that is being managed can be physical (for example, disk or main storage) or logical (for example, a particular type of system service). (2) A participant, in the execution of a two-phase commit, that has recoverable resources that could have been modified. The resource manager has access to a recovery log so that it can commit or roll back the effects of the logical unit of work to the recoverable resources. restart pending (RESTP). A restrictive state of a page set or partition that indicates that restart (backout) work needs to be performed on the object. All access to the page set or partition is denied except for access by the: v RECOVER POSTPONED command v Automatic online backout (which DB2 invokes after restart if the system parameter LBACKOUT=AUTO) RESTP. Restart pending. result set. The set of rows that a stored procedure returns to a client application. result set locator. A 4-byte value that DB2 uses to uniquely identify a query result set that a stored procedure returns. result table. The set of rows that are specified by a SELECT statement. retained lock. A MODIFY lock that a DB2 subsystem was holding at the time of a subsystem failure. The lock is retained in the coupling facility lock structure across a DB2 failure. RID. Record identifier. RID pool. Record identifier pool. right outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves the unmatched rows of the second join operand. See also join. RLF. Resource limit facility. RLST. Resource limit specification table. RMID. Resource manager identifier. RO. Read-only access. rollback. The process of restoring data that was changed by SQL statements to the state at its last commit point. All locks are freed. Contrast with commit. root page. The index page that is at the highest level (or the beginning point) in an index.
routine. A term that refers to either a user-defined function or a stored procedure. row. The horizontal component of a table. A row consists of a sequence of values, one for each column of the table. ROWID. Row identifier. row identifier (ROWID). A value that uniquely identifies a row. This value is stored with the row and never changes. row lock. A lock on a single row of data.
rowset-positioned access. The ability to retrieve multiple rows from a single FETCH statement. row-positioned access. The ability to retrieve a single row from a single FETCH statement.
row trigger. A trigger that is defined with the trigger granularity FOR EACH ROW. RRE. Residual recovery entry (in IMS). RRSAF. Recoverable Resource Manager Services attachment facility. RS. Read stability. RTT. Resource translation table. RURE. Restart URE.
S
savepoint. A named entity that represents the state of data and schemas at a particular point in time within a unit of work. SQL statements exist to set a savepoint, release a savepoint, and restore data and schemas to the state that the savepoint represents. The restoration of data and schemas to a savepoint is usually referred to as rolling back to a savepoint. SBCS. Single-byte character set. SCA. Shared communications area.
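A minimal sketch of how an application might use the savepoint described above (the savepoint name is arbitrary):

  EXEC SQL SAVEPOINT BEFORE_CHANGES ON ROLLBACK RETAIN CURSORS;
  -- ... SQL statements that change data ...
  EXEC SQL ROLLBACK TO SAVEPOINT BEFORE_CHANGES;   -- back out only to the savepoint
  EXEC SQL RELEASE SAVEPOINT BEFORE_CHANGES;       -- discard the savepoint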
scalar function. An SQL operation that produces a single value from another value and is expressed as a function name, followed by a list of arguments that are enclosed in parentheses. Contrast with aggregate function.
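To contrast the scalar function entry above with an aggregate (column) function (the EMP table and its LASTNAME and SALARY columns are hypothetical):

  SELECT LASTNAME, SUBSTR(LASTNAME, 1, 1)   -- scalar function: one value for each row
    FROM EMP;

  SELECT AVG(SALARY)                        -- aggregate function: one value for the set of rows
    FROM EMP;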
scale. In SQL, the number of digits to the right of the decimal point (called the precision in the C language). The DB2 library uses the SQL definition.
self-referencing constraint. A referential constraint that defines a relationship in which a table is a dependent of itself. self-referencing table. A table with a self-referencing constraint.
schema. (1) The organization or structure of a database. (2) A logical grouping for user-defined functions, distinct types, triggers, and stored procedures. When an object of one of these types is created, it is assigned to one schema, which is determined by the name of the object. For example, the following statement creates a distinct type T in schema C: CREATE DISTINCT TYPE C.T ... scrollability. The ability to use a cursor to fetch in either a forward or backward direction. The FETCH statement supports multiple fetch orientations to indicate the new position of the cursor. See also fetch orientation. scrollable cursor. A cursor that can be moved in both a forward and a backward direction. SDWA. System diagnostic work area. search condition. A criterion for selecting rows from a table. A search condition consists of one or more predicates. secondary authorization ID. An authorization ID that has been associated with a primary authorization ID by an authorization exit routine. secondary group buffer pool. For a duplexed group buffer pool, the structure that is used to back up changed pages that are written to the primary group buffer pool. No page registration or cross-invalidation occurs using the secondary group buffer pool. The z/OS equivalent is new structure.
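A minimal sketch of the scrollability and scrollable cursor entries above (the cursor, table, and host variable names are hypothetical):

  EXEC SQL DECLARE C1 INSENSITIVE SCROLL CURSOR FOR
    SELECT EMPNO, LASTNAME FROM EMP;
  EXEC SQL OPEN C1;
  EXEC SQL FETCH LAST  FROM C1 INTO :EMPNO, :NAME;   -- fetch orientations such as LAST,
  EXEC SQL FETCH PRIOR FROM C1 INTO :EMPNO, :NAME;   -- PRIOR, and ABSOLUTE position the cursor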
sensitive cursor. A cursor that is sensitive to changes that are made to the database after the result table has been materialized. sequence. A user-defined object that generates a sequence of numeric values according to user specifications.
sequential data set. A non-DB2 data set whose records are organized on the basis of their successive physical positions, such as on magnetic tape. Several of the DB2 database utilities require sequential data sets. sequential prefetch. A mechanism that triggers consecutive asynchronous I/O operations. Pages are fetched before they are required, and several pages are read with a single I/O operation. serial cursor. A cursor that can be moved only in a forward direction. serialized profile. A Java object that contains SQL statements and descriptions of host variables. The SQLJ translator produces a serialized profile for each connection context. server. The target of a request from a remote requester. In the DB2 environment, the server function is provided by the distributed data facility, which is used to access DB2 data from remote applications. server-side programming. A method for adding DB2 data into dynamic Web pages. service class. An eight-character identifier that is used by the z/OS Workload Manager to associate user performance goals with a particular DDF thread or stored procedure. A service class is also used to classify work on parallelism assistants. service request block. A unit of work that is scheduled to execute in another address space. session. A link between two nodes in a VTAM network. session protocols. The available set of SNA communication requests and responses. shared communications area (SCA). A coupling facility list structure that a DB2 data sharing group uses for inter-DB2 communication. share lock. A lock that prevents concurrently executing application processes from changing data, but not from reading data. Contrast with exclusive lock.
secondary index. A nonpartitioning index on a partitioned table. section. The segment of a plan or package that contains the executable structures for a single SQL statement. For most SQL statements, one section in the plan exists for each SQL statement in the source program. However, for cursor-related statements, the DECLARE, OPEN, FETCH, and CLOSE statements reference the same section because they each refer to the SELECT statement that is named in the DECLARE CURSOR statement. SQL statements such as COMMIT, ROLLBACK, and some SET statements do not use a section. segment. A group of pages that holds rows of a single table. See also segmented table space. segmented table space. A table space that is divided into equal-sized groups of pages called segments. Segments are assigned to tables so that rows of different tables are never stored in the same segment.
shift-in character. A special control character (X'0F') that is used in EBCDIC systems to denote that the subsequent bytes represent SBCS characters. See also shift-out character. shift-out character. A special control character (X'0E') that is used in EBCDIC systems to denote that the subsequent bytes, up to the next shift-in control character, represent DBCS characters. See also shift-in character. sign-on. A request that is made on behalf of an individual CICS or IMS application process by an attachment facility to enable DB2 to verify that it is authorized to use DB2 resources. simple page set. A nonpartitioned page set. A simple page set initially consists of a single data set (page set piece). If and when that data set is extended to 2 GB, another data set is created, and so on, up to a total of 32 data sets. DB2 considers the data sets to be a single contiguous linear address space containing a maximum of 64 GB. Data is stored in the next available location within this address space without regard to any partitioning scheme. simple table space. A table space that is neither partitioned nor segmented. single-byte character set (SBCS). A set of characters in which each character is represented by a single byte. Contrast with double-byte character set or multibyte character set. single-precision floating point number. A 32-bit approximate representation of a real number. size. In the C language, the total number of digits in a decimal number (called the precision in SQL). The DB2 library uses the SQL term. SMF. System Management Facilities. SMP/E. System Modification Program/Extended. SMS. Storage Management Subsystem. SNA. Systems Network Architecture. SNA network. The part of a network that conforms to the formats and protocols of Systems Network Architecture (SNA). socket. A callable TCP/IP programming interface that TCP/IP network applications use to communicate with remote TCP/IP partners. sourced function. A function that is implemented by another built-in or user-defined function that is already known to the database manager. This function can be a scalar function or a column (aggregating) function; it returns a single value from a set of values (for example, MAX or AVG). Contrast with built-in function, external function, and SQL function.
source program. A set of host language statements and SQL statements that is processed by an SQL precompiler.
source table. A table that can be a base table, a view, a table expression, or a user-defined table function.
source type. An existing type that DB2 uses to internally represent a distinct type. space. A sequence of one or more blank characters. special register. A storage area that DB2 defines for an application process to use for storing information that can be referenced in SQL statements. Examples of special registers are USER and CURRENT DATE. specific function name. A particular user-defined function that is known to the database manager by its specific name. Many specific user-defined functions can have the same function name. When a user-defined function is defined to the database, every function is assigned a specific name that is unique within its schema. Either the user can provide this name, or a default name is used. SPUFI. SQL Processor Using File Input. SQL. Structured Query Language. SQL authorization ID (SQL ID). The authorization ID that is used for checking dynamic SQL statements in some situations. SQLCA. SQL communication area. SQL communication area (SQLCA). A structure that is used to provide an application program with information about the execution of its SQL statements. SQL connection. An association between an application process and a local or remote application server or database server. SQLDA. SQL descriptor area. SQL descriptor area (SQLDA). A structure that describes input variables, output variables, or the columns of a result table. SQL escape character. The symbol that is used to enclose an SQL delimited identifier. This symbol is the double quotation mark ("). See also escape character. SQL function. A user-defined function in which the CREATE FUNCTION statement contains the source code. The source code is a single SQL expression that evaluates to a single value. The SQL user-defined function can return only one parameter. SQL ID. SQL authorization ID. SQLJ. Structured Query Language (SQL) that is embedded in the Java programming language.
SQL path. An ordered list of schema names that are used in the resolution of unqualified references to user-defined functions, distinct types, and stored procedures. In dynamic SQL, the current path is found in the CURRENT PATH special register. In static SQL, it is defined in the PATH bind option. SQL procedure. A user-written program that can be invoked with the SQL CALL statement. Contrast with external procedure. SQL processing conversation. Any conversation that requires access of DB2 data, either through an application or by dynamic query requests. SQL Processor Using File Input (SPUFI). A facility of the TSO attachment subcomponent that enables the DB2I user to execute SQL statements without embedding them in an application program. SQL return code. Either SQLCODE or SQLSTATE. SQL routine. A user-defined function or stored procedure that is based on code that is written in SQL. SQL statement coprocessor. An alternative to the DB2 precompiler that lets the user process SQL statements at compile time. The user invokes an SQL statement coprocessor by specifying a compiler option. SQL string delimiter. A symbol that is used to enclose an SQL string constant. The SQL string delimiter is the apostrophe ('), except in COBOL applications, where the user assigns the symbol, which is either an apostrophe or a double quotation mark ("). SRB. Service request block. SSI. Subsystem interface (in z/OS). SSM. Subsystem member (in IMS). stand-alone. An attribute of a program that means that it is capable of executing separately from DB2, without using DB2 services. star join. A method of joining a dimension column of a fact table to the key column of the corresponding dimension table. See also join, dimension, and star schema. star schema. The combination of a fact table (which contains most of the data) and a number of dimension tables. See also star join, dimension, and dimension table. statement handle. In DB2 ODBC, the data object that contains information about an SQL statement that is managed by DB2 ODBC. This includes information such as dynamic arguments, bindings for dynamic arguments and columns, cursor information, result values, and status information. Each statement handle is associated with the connection handle.
statement string. For a dynamic SQL statement, the character string form of the statement. statement trigger. A trigger that is defined with the trigger granularity FOR EACH STATEMENT.
static cursor. A named control structure that does not change the size of the result table or the order of its rows after an application opens the cursor. Contrast with dynamic cursor. static SQL. SQL statements, embedded within a program, that are prepared during the program preparation process (before the program is executed). After being prepared, the SQL statement does not change (although values of host variables that are specified by the statement might change). storage group. A named set of disks on which DB2 data can be stored. stored procedure. A user-written application program that can be invoked through the use of the SQL CALL statement. string. See character string or graphic string. strong typing. A process that guarantees that only user-defined functions and operations that are defined on a distinct type can be applied to that type. For example, you cannot directly compare two currency types, such as Canadian dollars and U.S. dollars. But you can provide a user-defined function to convert one currency to the other and then do the comparison. structure. (1) A name that refers collectively to different types of DB2 objects, such as tables, databases, views, indexes, and table spaces. (2) A construct that uses z/OS to map and manage storage on a coupling facility. See also cache structure, list structure, or lock structure. Structured Query Language (SQL). A standardized language for defining and manipulating data in a relational database. structure owner. In relation to group buffer pools, the DB2 member that is responsible for the following activities: v Coordinating rebuild, checkpoint, and damage assessment processing v Monitoring the group buffer pool threshold and notifying castout owners when the threshold has been reached subcomponent. A group of closely related DB2 modules that work together to provide a general function. subject table. The table for which a trigger is created. When the defined triggering event occurs on this table, the trigger is activated.
subpage. The unit into which a physical index page can be divided. subquery. A SELECT statement within the WHERE or HAVING clause of another SQL statement; a nested SQL statement. subselect. That form of a query that does not include an ORDER BY clause, an UPDATE clause, or UNION operators. substitution character. A unique character that is substituted during character conversion for any characters in the source program that do not have a match in the target coding representation. subsystem. A distinct instance of a relational database management system (RDBMS). surrogate pair. A coded representation for a single character that consists of a sequence of two 16-bit code units, in which the first value of the pair is a high-surrogate code unit in the range U+D800 through U+DBFF, and the second value is a low-surrogate code unit in the range U+DC00 through U+DFFF. Surrogate pairs provide an extension mechanism for encoding 917 476 characters without requiring the use of 32-bit characters. SVC dump. A dump that is issued when a z/OS or a DB2 functional recovery routine detects an error. sync point. See commit point. syncpoint tree. The tree of recovery managers and resource managers that are involved in a logical unit of work, starting with the recovery manager, that make the final commit decision. synonym. In SQL, an alternative name for a table or view. Synonyms can be used to refer only to objects at the subsystem in which the synonym is defined. syntactic character set. A set of 81 graphic characters that are registered in the IBM registry as character set 00640. This set was originally recommended to the programming language community to be used for syntactic purposes toward maximizing portability and interchangeability across systems and country boundaries. It is contained in most of the primary registered character sets, with a few exceptions. See also invariant character set. Sysplex. See Parallel Sysplex. Sysplex query parallelism. Parallel execution of a single query that is accomplished by using multiple tasks on more than one DB2 subsystem. See also query CP parallelism. system administrator. The person at a computer installation who designs, controls, and manages the use of the computer system.
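As an illustration of the subquery entry above (the EMP table and SALARY column are hypothetical), the parenthesized SELECT in the WHERE clause is a subquery:

  SELECT EMPNO, LASTNAME
    FROM EMP
    WHERE SALARY > (SELECT AVG(SALARY) FROM EMP);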
system agent. A work request that DB2 creates internally such as prefetch processing, deferred writes, and service tasks. system conversation. The conversation that two DB2 subsystems must establish to process system messages before any distributed processing can begin. system diagnostic work area (SDWA). The data that is recorded in a SYS1.LOGREC entry that describes a program or hardware error. system-directed connection. A connection that a relational DBMS manages by processing SQL statements with three-part names. System Modification Program/Extended (SMP/E). A z/OS tool for making software changes in programming systems (such as DB2) and for controlling those changes. Systems Network Architecture (SNA). The description of the logical structure, formats, protocols, and operational sequences for transmitting information through and controlling the configuration and operation of networks. SYS1.DUMPxx data set. A data set that contains a system dump (in z/OS). SYS1.LOGREC. A service aid that contains important information about program and hardware errors (in z/OS).
T
table. A named data object consisting of a specific number of columns and some number of unordered rows. See also base table or temporary table.
table-controlled partitioning. A type of partitioning in which partition boundaries for a partitioned table are controlled by values that are defined in the CREATE TABLE statement. Partition limits are saved in the LIMITKEY_INTERNAL column of the SYSIBM.SYSTABLEPART catalog table. table function. A function that receives a set of arguments and returns a table to the SQL statement that references the function. A table function can be referenced only in the FROM clause of a subselect. table locator. A mechanism that allows access to trigger transition tables in the FROM clause of SELECT statements, in the subselect of INSERT statements, or from within user-defined functions. A table locator is a fullword integer value that represents a transition table. table space. A page set that is used to store the records in one or more tables.
table space set. A set of table spaces and partitions that should be recovered together for one of these reasons: v Each of them contains a table that is a parent or descendent of a table in one of the others. v The set contains a base table and associated auxiliary tables. A table space set can contain both types of relationships. task control block (TCB). A z/OS control block that is used to communicate information about tasks within an address space that are connected to DB2. See also address space connection. TB. Terabyte (1 099 511 627 776 bytes). TCB. Task control block (in z/OS). TCP/IP. A network communication protocol that computer systems use to exchange information across telecommunication links. TCP/IP port. A 2-byte value that identifies an end user or a TCP/IP network application within a TCP/IP host. template. A DB2 utilities output data set descriptor that is used for dynamic allocation. A template is defined by the TEMPLATE utility control statement. temporary table. A table that holds temporary data. Temporary tables are useful for holding or sorting intermediate results from queries that contain a large number of rows. The two types of temporary table, which are created by different SQL statements, are the created temporary table and the declared temporary table. Contrast with result table. See also created temporary table and declared temporary table. Terminal Monitor Program (TMP). A program that provides an interface between terminal users and command processors and has access to many system services (in z/OS). thread. The DB2 structure that describes an application's connection, traces its progress, processes resource functions, and delimits its accessibility to DB2 resources and services. Most DB2 functions execute under a thread structure. See also allied thread and database access thread. threadsafe. A characteristic of code that allows multithreading both by providing private storage areas for each thread, and by properly serializing shared (global) storage areas. three-part name. The full name of a table, view, or alias. It consists of a location name, authorization ID, and an object name, separated by a period. time. A three-part value that designates a time of day in hours, minutes, and seconds.
time duration. A decimal integer that represents a number of hours, minutes, and seconds. timeout. Abnormal termination of either the DB2 subsystem or of an application because of the unavailability of resources. Installation specifications are set to determine both the amount of time DB2 is to wait for IRLM services after starting, and the amount of time IRLM is to wait if a resource that an application requests is unavailable. If either of these time specifications is exceeded, a timeout is declared. Time-Sharing Option (TSO). An option in MVS that provides interactive time sharing from remote terminals. timestamp. A seven-part value that consists of a date and time. The timestamp is expressed in years, months, days, hours, minutes, seconds, and microseconds. TMP. Terminal Monitor Program. to-do. A state of a unit of recovery that indicates that the unit of recovery's changes to recoverable DB2 resources are indoubt and must either be applied to the disk media or backed out, as determined by the commit coordinator. trace. A DB2 facility that provides the ability to monitor and collect DB2 monitoring, auditing, performance, accounting, statistics, and serviceability (global) data. transaction lock. A lock that is used to control concurrent execution of SQL statements. transaction program name. In SNA LU 6.2 conversations, the name of the program at the remote logical unit that is to be the other half of the conversation.
transient XML data type. A data type for XML values that exists only during query processing.
transition table. A temporary table that contains all the affected rows of the subject table in their state before or after the triggering event occurs. Triggered SQL statements in the trigger definition can reference the table of changed rows in the old state or the new state. transition variable. A variable that contains a column value of the affected row of the subject table in its state before or after the triggering event occurs. Triggered SQL statements in the trigger definition can reference the set of old values or the set of new values.
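A minimal sketch that ties together several of the trigger-related entries in this glossary: a row trigger with a triggered action condition (WHEN clause) that references transition variables. The table, column, and trigger names are hypothetical.

  CREATE TRIGGER CHECK_RAISE
    NO CASCADE BEFORE UPDATE OF SALARY ON EMP
    REFERENCING OLD AS O NEW AS N
    FOR EACH ROW MODE DB2SQL                  -- trigger granularity: row trigger
    WHEN (N.SALARY > O.SALARY * 1.5)          -- triggered action condition
    BEGIN ATOMIC
      SIGNAL SQLSTATE '75001' ('Salary increase exceeds 50 percent');
    END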
tree structure. A data structure that represents entities in nodes, with at most one parent node for each node, and with only one root node.
trigger. A set of SQL statements that are stored in a DB2 database and executed when a certain event occurs in a DB2 table. trigger activation. The process that occurs when the trigger event that is defined in a trigger definition is executed. Trigger activation consists of the evaluation of the triggered action condition and conditional execution of the triggered SQL statements. trigger activation time. An indication in the trigger definition of whether the trigger should be activated before or after the triggered event. trigger body. The set of SQL statements that is executed when a trigger is activated and its triggered action condition evaluates to true. A trigger body is also called triggered SQL statements. trigger cascading. The process that occurs when the triggered action of a trigger causes the activation of another trigger. triggered action. The SQL logic that is performed when a trigger is activated. The triggered action consists of an optional triggered action condition and a set of triggered SQL statements that are executed only if the condition evaluates to true. triggered action condition. An optional part of the triggered action. This Boolean condition appears as a WHEN clause and specifies a condition that DB2 evaluates to determine if the triggered SQL statements should be executed. triggered SQL statements. The set of SQL statements that is executed when a trigger is activated and its triggered action condition evaluates to true. Triggered SQL statements are also called the trigger body. trigger granularity. A characteristic of a trigger, which determines whether the trigger is activated: v Only once for the triggering SQL statement v Once for each row that the SQL statement modifies triggering event. The specified operation in a trigger definition that causes the activation of that trigger. The triggering event is comprised of a triggering operation (INSERT, UPDATE, or DELETE) and a subject table on which the operation is performed. triggering SQL operation. The SQL operation that causes a trigger to be activated when performed on the subject table. trigger package. A package that is created when a CREATE TRIGGER statement is executed. The package is executed when the trigger is activated. TSO. Time-Sharing Option. TSO attachment facility. A DB2 facility consisting of the DSN command processor and DB2I. Applications
that are not written for the CICS or IMS environments can run under the TSO attachment facility. typed parameter marker. A parameter marker that is specified along with its target data type. It has the general form: CAST(? AS data-type) type 1 indexes. Indexes that were created by a release of DB2 before DB2 Version 4 or that are specified as type 1 indexes in Version 4. Contrast with type 2 indexes. As of Version 8, type 1 indexes are no longer supported. type 2 indexes. Indexes that are created on a release of DB2 after Version 7 or that are specified as type 2 indexes in Version 4 or later.
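To illustrate the typed parameter marker entry above (the statement string is hypothetical), the first marker below is typed with CAST and the second is untyped:

  UPDATE EMP
    SET BONUS = CAST(? AS DECIMAL(9,2))   -- typed parameter marker
    WHERE EMPNO = ?                       -- untyped parameter marker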
U
UCS-2. Universal Character Set, coded in 2 octets, which means that characters are represented in 16-bits per character. UDF. User-defined function. UDT. User-defined data type. In DB2 UDB for z/OS, the term distinct type is used instead of user-defined data type. See distinct type. uncommitted read (UR). The isolation level that allows an application to read uncommitted data. underlying view. The view on which another view is directly or indirectly defined. undo. A state of a unit of recovery that indicates that the changes that the unit of recovery made to recoverable DB2 resources must be backed out. Unicode. A standard that parallels the ISO-10646 standard. Several implementations of the Unicode standard exist, all of which have the ability to represent a large percentage of the characters that are contained in the many scripts that are used throughout the world. uniform resource locator (URL). A Web address, which offers a way of naming and locating specific items on the Web. union. An SQL operation that combines the results of two SELECT statements. Unions are often used to merge lists of values that are obtained from several tables. unique constraint. An SQL rule that no two values in a primary key, or in the key of a unique index, can be the same. unique index. An index that ensures that no identical key values are stored in a column or a set of columns in a table.
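A minimal sketch of the union entry above (the table names are hypothetical); duplicate rows are eliminated unless UNION ALL is specified:

  SELECT EMPNO FROM CURRENT_EMP
  UNION
  SELECT EMPNO FROM FORMER_EMP;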
unit of recovery. A recoverable sequence of operations within a single resource manager, such as an instance of DB2. Contrast with unit of work. unit of recovery identifier (URID). The LOGRBA of the first log record for a unit of recovery. The URID also appears in all subsequent log records for that unit of recovery. unit of work. A recoverable sequence of operations within an application process. At any time, an application process is a single unit of work, but the life of an application process can involve many units of work as a result of commit or rollback operations. In a multisite update operation, a single unit of work can include several units of recovery. Contrast with unit of recovery. Universal Unique Identifier (UUID). An identifier that is immutable and unique across time and space (in z/OS). unlock. The act of releasing an object or system resource that was previously locked and returning it to general availability within DB2. untyped parameter marker. A parameter marker that is specified without its target data type. It has the form of a single question mark (?). updatability. The ability of a cursor to perform positioned updates and deletes. The updatability of a cursor can be influenced by the SELECT statement and the cursor sensitivity option that is specified on the DECLARE CURSOR statement. update hole. The location on which a cursor is positioned when a row in a result table is fetched again and the new values no longer satisfy the search condition. DB2 marks a row in the result table as an update hole when an update to the corresponding row in the database causes that row to no longer qualify for the result table. update trigger. A trigger that is defined with the triggering SQL operation UPDATE. upstream. The node in the syncpoint tree that is responsible, in addition to other recovery or resource managers, for coordinating the execution of a two-phase commit. UR. Uncommitted read. URE. Unit of recovery element. URID . Unit of recovery identifier. URL. Uniform resource locator. user-defined data type (UDT). See distinct type. user-defined function (UDF). A function that is defined to DB2 by using the CREATE FUNCTION
statement and that can be referenced thereafter in SQL statements. A user-defined function can be an external function, a sourced function, or an SQL function. Contrast with built-in function. user view. In logical data modeling, a model or representation of critical information that the business requires. UTF-8. Unicode Transformation Format, 8-bit encoding form, which is designed for ease of use with existing ASCII-based systems. The CCSID value for data in UTF-8 format is 1208. DB2 UDB for z/OS supports UTF-8 in mixed data fields. UTF-16. Unicode Transformation Format, 16-bit encoding form, which is designed to provide code values for over a million characters and a superset of UCS-2. The CCSID value for data in UTF-16 format is 1200. DB2 UDB for z/OS supports UTF-16 in graphic data fields. UUID. Universal Unique Identifier.
V
value. The smallest unit of data that is manipulated in SQL. variable. A data element that specifies a value that can be changed. A COBOL elementary data item is an example of a variable. Contrast with constant. variant function. See nondeterministic function. varying-length string. A character or graphic string whose length varies within set limits. Contrast with fixed-length string. version. A member of a set of similar programs, DBRMs, packages, or LOBs. A version of a program is the source code that is produced by precompiling the program. The program version is identified by the program name and a timestamp (consistency token). A version of a DBRM is the DBRM that is produced by precompiling a program. The DBRM version is identified by the same program name and timestamp as a corresponding program version. A version of a package is the result of binding a DBRM within a particular database system. The package version is identified by the same program name and consistency token as the DBRM. A version of a LOB is a copy of a LOB value at a point in time. The version number for a LOB is stored in the auxiliary index entry for the LOB. view. An alternative representation of data from one or more tables. A view can include all or some of the columns that are contained in tables on which it is defined.
view check option. An option that specifies whether every row that is inserted or updated through a view must conform to the definition of that view. A view check option can be specified with the WITH CASCADED CHECK OPTION, WITH CHECK OPTION, or WITH LOCAL CHECK OPTION clauses of the CREATE VIEW statement.
Virtual Storage Access Method (VSAM). An access method for direct or sequential processing of fixed- and varying-length records on disk devices. The records in a VSAM data set or file can be organized in logical sequence by a key field (key sequence), in the physical sequence in which they are written on the data set or file (entry-sequence), or by relative-record number (in z/OS).
Virtual Telecommunications Access Method (VTAM). An IBM licensed program that controls communication and the flow of data in an SNA network (in z/OS).
volatile table. A table for which SQL operations choose index access whenever possible.
VSAM. Virtual Storage Access Method.
VTAM. Virtual Telecommunications Access Method (in z/OS).
W
warm start. The normal DB2 restart process, which involves reading and processing log records so that data that is under the control of DB2 is consistent. Contrast with cold start.
WLM application environment. A z/OS Workload Manager attribute that is associated with one or more stored procedures. The WLM application environment determines the address space in which a given DB2 stored procedure runs.
write to operator (WTO). An optional user-coded service that allows a message to be written to the system console operator informing the operator of errors and unusual system conditions that might need to be corrected (in z/OS).
WTO. Write to operator.
WTOR. Write to operator (WTO) with reply.
X
XCF. See cross-system coupling facility.
XES. See cross-system extended services.
XML attribute. A name-value pair within a tagged XML element that modifies certain features of the element.
XML element. A logical structure in an XML document that is delimited by a start and an end tag. Anything between the start tag and the end tag is the content of the element.
XML node. The smallest unit of valid, complete structure in a document. For example, a node can represent an element, an attribute, or a text string.
XML publishing functions. Functions that return XML values from SQL values.
X/Open. An independent, worldwide open systems organization that is supported by most of the world's largest information systems suppliers, user organizations, and software companies. X/Open's goal is to increase the portability of applications by combining existing and emerging standards.
XRF. Extended recovery facility.
Z
z/OS. An operating system for the eServer product line that supports 64-bit real and virtual storage.
z/OS Distributed Computing Environment (z/OS DCE). A set of technologies that are provided by the Open Software Foundation to implement distributed computing.
Bibliography
DB2 Universal Database for z/OS Version 8 product information:
v DB2 Administration Guide, SC18-7413
v DB2 Application Programming and SQL Guide, SC18-7415
v DB2 Application Programming Guide and Reference for Java, SC18-7414
v DB2 Codes, GC18-9603
v DB2 Command Reference, SC18-7416
v DB2 Common Criteria Guide, SC18-9672
v DB2 Data Sharing: Planning and Administration, SC18-7417
v DB2 Diagnosis Guide and Reference, LY37-3201
v DB2 Diagnostic Quick Reference Card, LY37-3202
v DB2 Image, Audio, and Video Extenders Administration and Programming, SC26-9947
v DB2 Installation Guide, GC18-7418
v DB2 Licensed Program Specifications, GC18-7420
v DB2 Management Clients Package Program Directory, GI10-8567
v DB2 Messages, GC18-9602
v DB2 ODBC Guide and Reference, SC18-7423
v The Official Introduction to DB2 UDB for z/OS
v DB2 Program Directory, GI10-8566
v DB2 RACF Access Control Module Guide, SC18-7433
v DB2 Reference for Remote DRDA Requesters and Servers, SC18-7424
v DB2 Reference Summary, SX26-3853
v DB2 Release Planning Guide, SC18-7425
v DB2 SQL Reference, SC18-7426
v DB2 Text Extender Administration and Programming, SC26-9948
v DB2 Utility Guide and Reference, SC18-7427
v DB2 What's New?, GC18-7428
v DB2 XML Extender for z/OS Administration and Programming, SC18-7431
Books and resources about related products:
APL2
v APL2 Programming Guide, SH21-1072
v APL2 Programming: Language Reference, SH21-1061
v APL2 Programming: Using Structured Query Language (SQL), SH21-1057
BookManager READ/MVS
v BookManager READ/MVS V1R3: Installation Planning & Customization, SC38-2035
C language: IBM C/C++ for z/OS
v z/OS C/C++ Programming Guide, SC09-4765
v z/OS C/C++ Run-Time Library Reference, SA22-7821
Character Data Representation Architecture
v Character Data Representation Architecture Overview, GC09-2207
v Character Data Representation Architecture Reference and Registry, SC09-2190
CICS Transaction Server for z/OS
The publication order numbers below are for Version 2 Release 2 and Version 2 Release 3 (with the release 2 number listed first).
v CICS Transaction Server for z/OS Information Center, SK3T-6903 or SK3T-6957
v CICS Transaction Server for z/OS Application Programming Guide, SC34-5993 or SC34-6231
v CICS Transaction Server for z/OS Application Programming Reference, SC34-5994 or SC34-6232
v CICS Transaction Server for z/OS CICS-RACF Security Guide, SC34-6011 or SC34-6249
v CICS Transaction Server for z/OS CICS Supplied Transactions, SC34-5992 or SC34-6230
v CICS Transaction Server for z/OS Customization Guide, SC34-5989 or SC34-6227
v CICS Transaction Server for z/OS Data Areas, LY33-6100 or LY33-6103
v CICS Transaction Server for z/OS DB2 Guide, SC34-6014 or SC34-6252
v CICS Transaction Server for z/OS External Interfaces Guide, SC34-6006 or SC34-6244
v CICS Transaction Server for z/OS Installation Guide, GC34-5985 or GC34-6224
v CICS Transaction Server for z/OS Intercommunication Guide, SC34-6005 or SC34-6243
v CICS Transaction Server for z/OS Messages and Codes, GC34-6003 or GC34-6241
v CICS Transaction Server for z/OS Operations and Utilities Guide, SC34-5991 or SC34-6229
v CICS Transaction Server for z/OS Performance Guide, SC34-6009 or SC34-6247
v CICS Transaction Server for z/OS Problem Determination Guide, SC34-6002 or SC34-6239
v CICS Transaction Server for z/OS Release Guide, GC34-5983 or GC34-6218
v CICS Transaction Server for z/OS Resource Definition Guide, SC34-5990 or SC34-6228
v CICS Transaction Server for z/OS System Definition Guide, SC34-5988 or SC34-6226
v CICS Transaction Server for z/OS System Programming Reference, SC34-5595 or SC34-6233
CICS Transaction Server for OS/390
v CICS Transaction Server for OS/390 Application Programming Guide, SC33-1687
v CICS Transaction Server for OS/390 DB2 Guide, SC33-1939
v CICS Transaction Server for OS/390 External Interfaces Guide, SC33-1944
v CICS Transaction Server for OS/390 Resource Definition Guide, SC33-1684
COBOL:
v IBM COBOL Language Reference, SC27-1408
v Enterprise COBOL for z/OS Programming Guide, SC27-1412
Database Design
v DB2 for z/OS and OS/390 Development for Performance Volume I by Gabrielle Wiorkowski, Gabrielle & Associates, ISBN 0-96684-605-2
v DB2 for z/OS and OS/390 Development for Performance Volume II by Gabrielle Wiorkowski, Gabrielle & Associates, ISBN 0-96684-606-0
v Handbook of Relational Database Design by C. Fleming and B. Von Halle, Addison Wesley, ISBN 0-20111-434-8
DB2 Administration Tool
v DB2 Administration Tool for z/OS User's Guide and Reference, available on the Web at www.ibm.com/software/data/db2imstools/library.html
DB2 Buffer Pool Analyzer for z/OS
v DB2 Buffer Pool Tool for z/OS User's Guide and Reference, available on the Web at www.ibm.com/software/data/db2imstools/library.html
DB2 Connect
v IBM DB2 Connect Quick Beginnings for DB2 Connect Enterprise Edition, GC09-4833
v IBM DB2 Connect Quick Beginnings for DB2 Connect Personal Edition, GC09-4834
v IBM DB2 Connect User's Guide, SC09-4835
DB2 DataPropagator
v DB2 Universal Database Replication Guide and Reference, SC27-1121
DB2 Performance Expert for z/OS, Version 1
The following books are part of the DB2 Performance Expert library. Some of these books include information about the following tools: IBM DB2 Performance Expert for z/OS; IBM DB2 Performance Monitor for z/OS; and DB2 Buffer Pool Analyzer for z/OS.
v OMEGAMON Buffer Pool Analyzer User's Guide, SC18-7972
v OMEGAMON Configuration and Customization, SC18-7973
v OMEGAMON Messages, SC18-7974
v OMEGAMON Monitoring Performance from ISPF, SC18-7975
v OMEGAMON Monitoring Performance from Performance Expert Client, SC18-7976
v OMEGAMON Program Directory, GI10-8549
v OMEGAMON Report Command Reference, SC18-7977
v OMEGAMON Report Reference, SC18-7978
v Using IBM Tivoli OMEGAMON XE on z/OS, SC18-7979
DB2 Query Management Facility (QMF) Version 8.1
v DB2 Query Management Facility: DB2 QMF High Performance Option User's Guide for TSO/CICS, SC18-7450
v DB2 Query Management Facility: DB2 QMF Messages and Codes, GC18-7447
v DB2 Query Management Facility: DB2 QMF Reference, SC18-7446
v DB2 Query Management Facility: Developing DB2 QMF Applications, SC18-7651
v DB2 Query Management Facility: Getting Started with DB2 QMF for Windows and DB2 QMF for WebSphere, SC18-7449
v DB2 Query Management Facility: Getting Started with DB2 QMF Query Miner, GC18-7451
v DB2 Query Management Facility: Installing and Managing DB2 QMF for TSO/CICS, GC18-7444
v DB2 Query Management Facility: Installing and Managing DB2 QMF for Windows and DB2 QMF for WebSphere, GC18-7448
v DB2 Query Management Facility: Introducing DB2 QMF, GC18-7443
v DB2 Query Management Facility: Using DB2 QMF, SC18-7445
v DB2 Query Management Facility: DB2 QMF Visionary Developer's Guide, SC18-9093
v DB2 Query Management Facility: DB2 QMF Visionary Getting Started Guide, GC18-9092
DB2 Redbooks
For access to all IBM Redbooks about DB2, see the IBM Redbooks Web page at www.ibm.com/redbooks
DB2 Server for VSE & VM
v DB2 Server for VM: DBS Utility, SC09-2983
DB2 Universal Database Cross-Platform information
v IBM DB2 Universal Database SQL Reference for Cross-Platform Development, available at www.ibm.com/software/data/developer/cpsqlref/
DB2 Universal Database for iSeries
The following books are available at www.ibm.com/iseries/infocenter
v DB2 Universal Database for iSeries Performance and Query Optimization
v DB2 Universal Database for iSeries Database Programming
v DB2 Universal Database for iSeries SQL Programming Concepts
v DB2 Universal Database for iSeries SQL Programming with Host Languages
v DB2 Universal Database for iSeries SQL Reference
v DB2 Universal Database for iSeries Distributed Data Management
v DB2 Universal Database for iSeries Distributed Database Programming
DB2 Universal Database for Linux, UNIX, and Windows:
v DB2 Universal Database Administration Guide: Planning, SC09-4822
v DB2 Universal Database Administration Guide: Implementation, SC09-4820
v DB2 Universal Database Administration Guide: Performance, SC09-4821
v DB2 Universal Database Administrative API Reference, SC09-4824
v DB2 Universal Database Application Development Guide: Building and Running Applications, SC09-4825
v DB2 Universal Database Call Level Interface Guide and Reference, Volumes 1 and 2, SC09-4849 and SC09-4850
v DB2 Universal Database Command Reference, SC09-4828
v DB2 Universal Database SQL Reference Volume 1, SC09-4844
v DB2 Universal Database SQL Reference Volume 2, SC09-4845
Device Support Facilities
v Device Support Facilities User's Guide and Reference, GC35-0033
DFSMS
These books provide information about a variety of components of DFSMS, including z/OS DFSMS, z/OS DFSMSdfp, z/OS DFSMSdss, z/OS DFSMShsm, and z/OS DFP.
v z/OS DFSMS Access Method Services for Catalogs, SC26-7394
v z/OS DFSMSdss Storage Administration Guide, SC35-0423
v z/OS DFSMSdss Storage Administration Reference, SC35-0424
v z/OS DFSMShsm Managing Your Own Data, SC35-0420
v z/OS DFSMSdfp: Using DFSMSdfp in the z/OS Environment, SC26-7473
v z/OS DFSMSdfp Diagnosis Reference, GY27-7618
v z/OS DFSMS: Implementing System-Managed Storage, SC27-7407
v z/OS DFSMS: Macro Instructions for Data Sets, SC26-7408
v z/OS DFSMS: Managing Catalogs, SC26-7409
v z/OS MVS: Program Management User's Guide and Reference, SA22-7643
v z/OS MVS Program Management: Advanced Facilities, SA22-7644
v z/OS DFSMSdfp Storage Administration Reference, SC26-7402
v z/OS DFSMS: Using Data Sets, SC26-7410
v DFSMSdfp Advanced Services, SC26-7400
v DFSMS/MVS: Utilities, SC26-7414
DFSORT
v DFSORT Application Programming: Guide, SC33-4035
v DFSORT Installation and Customization, SC33-4034
Distributed Relational Database Architecture
v Open Group Technical Standard; the Open Group presently makes the following DRDA books available through its Web site at www.opengroup.org
  Open Group Technical Standard, DRDA Version 3 Vol. 1: Distributed Relational Database Architecture
  Open Group Technical Standard, DRDA Version 3 Vol. 2: Formatted Data Object Content Architecture
  Open Group Technical Standard, DRDA Version 3 Vol. 3: Distributed Data Management Architecture
Domain Name System
v DNS and BIND, Third Edition, Paul Albitz and Cricket Liu, O'Reilly, ISBN 0-59600-158-4
Education
v Information about IBM educational offerings is available on the Web at http://www.ibm.com/software/sw-training/
v A collection of glossaries of IBM terms is available on the IBM Terminology Web site at www.ibm.com/ibm/terminology/index.html
eServer zSeries
v IBM eServer zSeries Processor Resource/System Manager Planning Guide, SB10-7033
Fortran: VS Fortran
v VS Fortran Version 2: Language and Library Reference, SC26-4221
v VS Fortran Version 2: Programming Guide for CMS and MVS, SC26-4222
High Level Assembler
v High Level Assembler for MVS and VM and VSE Language Reference, SC26-4940
v High Level Assembler for MVS and VM and VSE Programmer's Guide, SC26-4941
ICSF
v z/OS ICSF Overview, SA22-7519
v Integrated Cryptographic Service Facility Administrator's Guide, SA22-7521
IMS Version 8
IMS product information is available on the IMS Library Web page, which you can find at www.ibm.com/ims
v IMS Administration Guide: System, SC27-1284
v IMS Administration Guide: Transaction Manager, SC27-1285
v IMS Application Programming: Database Manager, SC27-1286
v IMS Application Programming: Design Guide, SC27-1287
v IMS Application Programming: Transaction Manager, SC27-1289
v IMS Command Reference, SC27-1291
v IMS Customization Guide, SC27-1294
v IMS Install Volume 1: Installation Verification, GC27-1297
v IMS Install Volume 2: System Definition and Tailoring, GC27-1298
v IMS Messages and Codes Volumes 1 and 2, GC27-1301 and GC27-1302
v IMS Open Transaction Manager Access Guide and Reference, SC18-7829
v IMS Utilities Reference: System, SC27-1309
General information about IMS Batch Terminal Simulator for z/OS is available on the Web at www.ibm.com/software/data/db2imstools/library.html
IMS DataPropagator
v IMS DataPropagator for z/OS Administrator's Guide for Log, SC27-1216
v IMS DataPropagator: An Introduction, GC27-1211
v IMS DataPropagator for z/OS Reference, SC27-1210
ISPF
v z/OS ISPF Dialog Developer's Guide, SC23-4821
v z/OS ISPF Messages and Codes, SC34-4815
v z/OS ISPF Planning and Customizing, GC34-4814
v z/OS ISPF User's Guide Volumes 1 and 2, SC34-4822 and SC34-4823
Language Environment
v Debug Tool User's Guide and Reference, SC18-7171
v Debug Tool for z/OS and OS/390 Reference and Messages, SC18-7172
v z/OS Language Environment Concepts Guide, SA22-7567
v z/OS Language Environment Customization, SA22-7564
v z/OS Language Environment Debugging Guide, GA22-7560
v z/OS Language Environment Programming Guide, SA22-7561
v z/OS Language Environment Programming Reference, SA22-7562
MQSeries
v MQSeries Application Messaging Interface, SC34-5604
v MQSeries for OS/390 Concepts and Planning Guide, GC34-5650
v MQSeries for OS/390 System Setup Guide, SC34-5651
National Language Support
v National Language Design Guide Volume 1, SE09-8001
v IBM National Language Support Reference Manual Volume 2, SE09-8002
NetView
v Tivoli NetView for z/OS Installation: Getting Started, SC31-8872
v Tivoli NetView for z/OS User's Guide, GC31-8849
Microsoft ODBC
Information about Microsoft ODBC is available at http://msdn.microsoft.com/library/
Parallel Sysplex Library
v System/390 9672 Parallel Transaction Server, 9672 Parallel Enterprise Server, 9674 Coupling Facility System Overview For R1/R2/R3 Based Models, SB10-7033
v z/OS Parallel Sysplex Application Migration, SA22-7662
v z/OS Parallel Sysplex Overview: An Introduction to Data Sharing and Parallelism, SA22-7661
v z/OS Parallel Sysplex Test Report, SA22-7663
The Parallel Sysplex Configuration Assistant is available at www.ibm.com/s390/pso/psotool
PL/I: Enterprise PL/I for z/OS
v IBM Enterprise PL/I for z/OS Language Reference, SC27-1460
v IBM Enterprise PL/I for z/OS Programming Guide, SC27-1457
PL/I: PL/I for MVS & VM
v PL/I for MVS & VM Programming Guide, SC26-3113
SMP/E
v SMP/E for z/OS and OS/390 Reference, SA22-7772
v SMP/E for z/OS and OS/390 User's Guide, SA22-7773
Storage Management
v z/OS DFSMS: Implementing System-Managed Storage, SC26-7407
v MVS/ESA Storage Management Library: Managing Data, SC26-7397
v MVS/ESA Storage Management Library: Managing Storage Groups, SC35-0421
v MVS Storage Management Library: Storage Management Subsystem Migration Planning Guide, GC26-7398
System Network Architecture (SNA)
v SNA Formats, GA27-3136
v SNA LU 6.2 Peer Protocols Reference, SC31-6808
v SNA Transaction Programmer's Reference Manual for LU Type 6.2, GC30-3084
v SNA/Management Services Alert Implementation Guide, GC31-6809
TCP/IP
v IBM TCP/IP for MVS: Customization & Administration Guide, SC31-7134
v IBM TCP/IP for MVS: Diagnosis Guide, LY43-0105
v IBM TCP/IP for MVS: Messages and Codes, SC31-7132
v IBM TCP/IP for MVS: Planning and Migration Guide, SC31-7189
TotalStorage Enterprise Storage Server
v RAMAC Virtual Array: Implementing Peer-to-Peer Remote Copy, SG24-5680
v Enterprise Storage Server Introduction and Planning, GC26-7444
v IBM RAMAC Virtual Array, SG24-6424
Unicode
v z/OS Support for Unicode: Using Conversion Services, SA22-7649
Information about Unicode, the Unicode consortium, the Unicode standard, and standards conformance requirements is available at www.unicode.org
VTAM
v Planning for NetView, NCP, and VTAM, SC31-8063
v VTAM for MVS/ESA Diagnosis, LY43-0078
v VTAM for MVS/ESA Messages and Codes, GC31-8369
v VTAM for MVS/ESA Network Implementation Guide, SC31-8370
v VTAM for MVS/ESA Operation, SC31-8372
v z/OS Communications Server SNA Programming, SC31-8829
v z/OS Communications Server SNA Programmer's LU 6.2 Reference, SC31-8810
v VTAM for MVS/ESA Resource Definition Reference, SC31-8377
WebSphere family
v WebSphere MQ Integrator Broker: Administration Guide, SC34-6171
v WebSphere MQ Integrator Broker for z/OS: Customization and Administration Guide, SC34-6175
v WebSphere MQ Integrator Broker: Introduction and Planning, GC34-5599
v WebSphere MQ Integrator Broker: Using the Control Center, SC34-6168
z/Architecture
v z/Architecture Principles of Operation, SA22-7832
z/OS
v z/OS C/C++ Programming Guide, SC09-4765
v z/OS C/C++ Run-Time Library Reference, SA22-7821
v z/OS C/C++ User's Guide, SC09-4767
v z/OS Communications Server: IP Configuration Guide, SC31-8875
v z/OS DCE Administration Guide, SC24-5904
v z/OS DCE Introduction, GC24-5911
v z/OS DCE Messages and Codes, SC24-5912
v z/OS Information Roadmap, SA22-7500
v z/OS Introduction and Release Guide, GA22-7502
v z/OS JES2 Initialization and Tuning Guide, SA22-7532
v z/OS JES3 Initialization and Tuning Guide, SA22-7549
v z/OS Language Environment Concepts Guide, SA22-7567
v z/OS Language Environment Customization, SA22-7564
v z/OS Language Environment Debugging Guide, GA22-7560
v z/OS Language Environment Programming Guide, SA22-7561
v z/OS Language Environment Programming Reference, SA22-7562
v z/OS Managed System Infrastructure for Setup User's Guide, SC33-7985
v z/OS MVS Diagnosis: Procedures, GA22-7587
v z/OS MVS Diagnosis: Reference, GA22-7588
v z/OS MVS Diagnosis: Tools and Service Aids, GA22-7589
v z/OS MVS Initialization and Tuning Guide, SA22-7591
v z/OS MVS Initialization and Tuning Reference, SA22-7592
v z/OS MVS Installation Exits, SA22-7593
v z/OS MVS JCL Reference, SA22-7597
v z/OS MVS JCL User's Guide, SA22-7598
v z/OS MVS Planning: Global Resource Serialization, SA22-7600
v z/OS MVS Planning: Operations, SA22-7601
v z/OS MVS Planning: Workload Management, SA22-7602
v z/OS MVS Programming: Assembler Services Guide, SA22-7605
v z/OS MVS Programming: Assembler Services Reference, Volumes 1 and 2, SA22-7606 and SA22-7607
v z/OS MVS Programming: Authorized Assembler Services Guide, SA22-7608
v z/OS MVS Programming: Authorized Assembler Services Reference Volumes 1-4, SA22-7609, SA22-7610, SA22-7611, and SA22-7612
v z/OS MVS Programming: Callable Services for High-Level Languages, SA22-7613
v z/OS MVS Programming: Extended Addressability Guide, SA22-7614
v z/OS MVS Programming: Sysplex Services Guide, SA22-7617
v z/OS MVS Programming: Sysplex Services Reference, SA22-7618
v z/OS MVS Programming: Workload Management Services, SA22-7619
v z/OS MVS Recovery and Reconfiguration Guide, SA22-7623
v z/OS MVS Routing and Descriptor Codes, SA22-7624
v z/OS MVS Setting Up a Sysplex, SA22-7625
v z/OS MVS System Codes, SA22-7626
v z/OS MVS System Commands, SA22-7627
v z/OS MVS System Messages Volumes 1-10, SA22-7631, SA22-7632, SA22-7633, SA22-7634, SA22-7635, SA22-7636, SA22-7637, SA22-7638, SA22-7639, and SA22-7640
v z/OS MVS Using the Subsystem Interface, SA22-7642
v z/OS Planning for Multilevel Security and the Common Criteria, SA22-7509
v z/OS RMF User's Guide, SC33-7990
v z/OS Security Server Network Authentication Server Administration, SC24-5926
v z/OS Security Server RACF Auditor's Guide, SA22-7684
v z/OS Security Server RACF Command Language Reference, SA22-7687
v z/OS Security Server RACF Macros and Interfaces, SA22-7682
v z/OS Security Server RACF Security Administrator's Guide, SA22-7683
v z/OS Security Server RACF System Programmer's Guide, SA22-7681
v z/OS Security Server RACROUTE Macro Reference, SA22-7692
v z/OS Support for Unicode: Using Conversion Services, SA22-7649
v z/OS TSO/E CLISTs, SA22-7781
v z/OS TSO/E Command Reference, SA22-7782
v z/OS TSO/E Customization, SA22-7783
v z/OS TSO/E Messages, SA22-7786
v z/OS TSO/E Programming Guide, SA22-7788
v z/OS TSO/E Programming Services, SA22-7789
v z/OS TSO/E REXX Reference, SA22-7790
v z/OS TSO/E User's Guide, SA22-7794
v z/OS UNIX System Services Command Reference, SA22-7802
v z/OS UNIX System Services Messages and Codes, SA22-7807
v z/OS UNIX System Services Planning, GA22-7800
v z/OS UNIX System Services Programming: Assembler Callable Services Reference, SA22-7803
v z/OS UNIX System Services User's Guide, SA22-7801
A
abend before commit point 435 DB2 869, 901 effect on cursor position 123 exit routines 884 for synchronization calls 576 IMS U0102 581 U0775 437 U0778 438, 439 multiple-mode program 434 program 432 reason codes 885 return code posted to CAF CONNECT 871 return code posted to RRSAF CONNECT 904 single-mode program 434 system X04E 573 ABRT parameter of CAF (call attachment facility) 877, 886 access path affects lock attributes 424 direct row access 270, 803 index-only access 802 low cluster ratio suggests table space scan 809 with list prefetch 832 multiple index access description 812 PLAN_TABLE 801 selection influencing with SQL 774 problems 731 queries containing host variables 759 Visual Explain 775, 789 table space scan 809 unique index with matching value 814 ACQUIRE option of BIND PLAN subcommand locking tables and table spaces 408 activity sample table 993 address space examples 932 initialization CAF CONNECT command 873 CAF OPEN command 875 sample scenarios 882
application program (continued) preparation (continued) using DB2 coprocessor 382 using DB2 precompiler 381 using DB2I (DB2 Interactive) 518 running CAF (call attachment facility) 863 CICS 512 IMS 511 program synchronization in DL/I batch 575 TSO 508 TSO CLIST 511 suspension description 394 test environment 557 testing 557 arithmetic expressions in UPDATE statement 37 AS clause naming columns for view 7 naming columns in union 8 naming derived columns 8 naming result columns 7 ORDER BY name 7 with ORDER BY clause 10 ASCII data, retrieving 619 assembler application program assembling 494 character host variable 149 coding SQL statements 143 data type compatibility 155 data types 151 declaring tables 145 declaring views 145 fixed-length character string 149 graphic host variable 150 host variable declaring 148 naming convention 145 INCLUDE statement 145 including SQLCA 144 including SQLDA 144 indicator variable 156 LOB variable 151 numeric host variable 149 reentrant 146 result set locator 150 ROWID variable 151 SQLCODE 143 SQLSTATE 143 table locator 150 variable declaration 154 varying-length character string 149 assignment, compatibility rules 4 ASSOCIATE LOCATORS statement 709 ATTACH option CAF 867 precompiler 867, 900 RRSAF 900 ATTACH precompiler option 484 attention processing 862, 883, 894 AUTH SIGNON, RRSAF syntax 914 usage 914 authority authorization ID 510 creating test tables 559 SYSIBM.SYSTABAUTH table 18
AUTOCOMMIT field of SPUFI panel 61 automatic rebind EXPLAIN processing 799 automatic query rewrite 267 automatic rebind conditions for 390 invalid plan or package 389 SQLCA not available 391 auxiliary table LOCK TABLE statement 428
B
batch processing access to DB2 and DL/I together binding a plan 578 checkpoint calls 575 commits 575 precompiling 578 batch DB2 application running 510 starting with a CLIST 511 bill of materials applications 1097 binary large object (BLOB) 297 BIND PACKAGE subcommand of DSN options CURRENTDATA 447 DBPROTOCOL 447 ENCODING 447 ISOLATION 412 KEEPDYNAMIC 600 location-name 446 OPTIONS 447 RELEASE 408 REOPT(ALWAYS) 760 REOPT(NONE) 760 REOPT(ONCE) 760 SQLERROR 446 options associated with DRDA access 446, 447 remote 496 BIND PLAN subcommand of DSN options ACQUIRE 408 CACHESIZE 504 CURRENTDATA 446 DBPROTOCOL 446 DISCONNECT 445 ENCODING 446 ISOLATION 412 KEEPDYNAMIC 600 RELEASE 408 REOPT(ALWAYS) 760 REOPT(NONE) 760 REOPT(ONCE) 760 SQLRULES 446, 505 options associated with DRDA access 445 remote 496 binding advantages of packages 385 application plans 495 changes that require 386 checking BIND PACKAGE options 447 DBRMs precompiled elsewhere 478 method DBRMs and package list 385 DBRMs into single plan 385
binding (continued) method (continued) package list only 384 options associated with DRDA access 446 options for 384 packages deciding how to use 384 in use 384 remote 496 planning for 384 plans in use 384 including DBRMs 498 including packages 498 options 498 remote package requirements 496 specify SQL rules 505 block fetch conditions for use by non-scrollable cursor 459 conditions for use by scrollable cursor 459 preventing 467 using 458 with cursor stability 467 BMP (batch message processing) program checkpoints 436, 437 transaction-oriented 436 BTS (batch terminal simulator) 562
C
C application program declaring tables 160 LOB variable 167 LOB variable array 172 numeric host variable 162 numeric host variable array 168 result set locator 166 sample application 1013 C/C++ application program character host variable 162 character host variable array 169 coding considerations 186 coding SQL statements 158 constants 180 data type compatibility 181 data types 175 DB2 coprocessor 478 DCLGEN support 136 declaring views 160 examples 1043 fixed-length string 182 graphic host variable 164 graphic host variable array 171 host variable array, declaring 161 host variable, declaring 161 INCLUDE statement 161 including SQLCA 159 including SQLDA 160 indicator variable 182 indicator variable array 182 naming convention 161 precompiler option defaults 492 ROWID variable 168 ROWID variable array 173 SQLCODE 159 SQLSTATE 159 table locator 167
C/C++ application program (continued) variable declaration 179 varying-length string 182 with classes, preparing 517 C++ application program DB2 coprocessor 479 cache dynamic SQL effect of RELEASE(DEALLOCATE) 409 cache (dynamic SQL) statements 596 CACHESIZE option of BIND PLAN subcommand 504 REBIND subcommand 504 CAF (call attachment facility) application program examples 886 preparation 862 connecting to DB2 886 description 861 function descriptions 869 load module structure 864 parameters 869 programming language 862 register conventions 869 restrictions 861 return codes checking 888 CLOSE 877 CONNECT 871 DISCONNECT 878 OPEN 875 TRANSLATE 880 run environment 863 running an application program 863 calculated values groups with conditions 11 summarizing group values 11 call attachment facility (CAF) 861 CALL DSNALI statement 869, 881 CALL DSNRLI statement 903 CALL statement example 682 SQL procedure 660 cardinality of user-defined table function improving query performance 779 Cartesian join 819 CASE statement (SQL procedure) 660 catalog statistics influencing access paths 785 catalog table accessing 17 SYSIBM.LOCATIONS 451 SYSIBM.SYSCOLUMNS 18 SYSIBM.SYSTABAUTH 18 CCSID (coded character set identifier) controlling in COBOL programs 213 effect of DECLARE VARIABLE 85 host variable 85 precompiler option 485 SQLDA 618 character host variable assembler 149 C 162 COBOL 194 Fortran 224 PL/I 235 Index
character host variable array C 169 COBOL 200 PL/I 238 character large object (CLOB) 297 character string literals 78 mixed data 4 width of column in results 64, 70 check constraint check integrity 260 considerations 259 CURRENT RULES special register effect 260 defining 259 description 259 determining violations 989 enforcement 260 programming considerations 989 CHECK-pending status 260 checkpoint calls 434, 436 frequency 438 CHKP call, IMS 433, 435 CICS attachment facility controlling from applications 939 programming considerations 939 DSNTIAC subroutine assembler 158 C 186 COBOL 219 PL/I 249 environment planning 512 facilities command language translator 493 control areas 557 EDF (execution diagnostic facility) 563 language interface module (DSNCLI) use in link-editing an application 495 logical unit of work 432 operating indoubt data 433 running a program 557 system failure 433 preparing with JCL procedures 516 programming DFHEIENT macro 148 sample applications 1015, 1018 SYNCPOINT command 433 storage handling assembler 158 C 186 COBOL 219 PL/I 249 sync point 433 thread reuse 939 unit of work 432 claim effect of cursor WITH HOLD 421 CLOSE statement description 108 recommendation 113 WHENEVER NOT FOUND clause 612, 623 CLOSE (connection function of CAF) description 866 language examples 878
CLOSE (connection function of CAF) (continued) program example 886 syntax 877 syntax usage 877 cluster ratio effects table space scan 809 with list prefetch 832 COALESCE function 42 COBOL application program assignment rules 213 character host variable 194 fixed-length string 194 varying-length string 194 character host variable array 200 fixed-length string array 200 varying-length string array 201 CODEPAGE compiler option 213 coding SQL statements 186 compiling 494 controlling CCSID 213 data type compatibility 214 data types 210 DB2 coprocessor 480 DB2 precompiler option defaults 492 DCLGEN support 136 declaring tables 188 declaring views 188 defining the SQLDA 187 dynamic SQL 627 FILLER entry name 214 graphic host variable 195 graphic host variable array 203 host variable declaring 191 use of hyphens 191 host variable array declaring 191 INCLUDE statement 189 including SQLCA 186 indicator variable 216 indicator variable array 216 LOB variable 198 LOB variable array 204 naming convention 189 numeric host variable 192 numeric host variable array 199 object-oriented extensions 219 options 189, 190 preparation 494 record description from DCLGEN 136 resetting SQL-INIT-FLAG 191 result set locator 197 ROWID variable 198 ROWID variable array 205 sample program 1031 SQLCODE 186 SQLSTATE 186 table locator 198 variable declaration 212 WHENEVER statement 189 with classes, preparing 517 CODEPAGE compiler option 213 coding SQL statements assembler 143 C 158 C++ 158
coding SQL statements (continued) COBOL 186 dynamic 593 Fortran 220 PL/I 230 REXX 249 collection, package identifying 499 SET CURRENT PACKAGESET statement 499 colon assembler host variable 148 C host variable 162 C host variable array 162 COBOL host variable 192 Fortran host variable 223 PL/I host variable 234 PL/I host variable array 234 preceding a host variable 80 preceding a host variable array 86 column data types 4 default value system-defined 20 user-defined 20 displaying, list of 18 heading created by SPUFI 71 labels DCLGEN 133 labels, usage 621 name as a host variable 134 name, with UPDATE statement 36 retrieving, with SELECT 5 specified in CREATE TABLE 19 width of results 64, 70 COMMA precompiler option 485 commit point description 432 IMS unit of work 434 lock release 434 COMMIT statement description 61 ending unit of work 432 in a stored procedure 646 when to issue 432 commit, using RRSAF 895 common table expressions description 13 examples 1097 in a CREATE VIEW statement 14 in a SELECT statement 14 in an INSERT statement 15 infinite loops 16 recursion 1097 comparison compatibility rules 4 operator, subquery 51 compatibility data types 4 locks 406 rules 4 composite key 262 compound statement example dynamic SQL 668 nested IF and WHILE statements 667 EXIT handler 663
compound statement (continued) labels 662 SQL procedure 660 valid SQL statements 660 concurrency control by locks 394 description 393 effect of ISOLATION options 415, 416 lock size 404 uncommitted read 414 recommendations 397 CONNECT statement SPUFI 61 CONNECT (connection function of CAF) description 866 language examples 875 program example 886 syntax 871 CONNECT (Type 1) statement 453 CONNECT LOCATION field of SPUFI panel 61 CONNECT precompiler option 485 CONNECT statement, with DRDA access 450 connection DB2 connecting from tasks 857 function of CAF CLOSE 877, 886 CONNECT 871, 875, 886 description 864 DISCONNECT 878, 886 OPEN 875, 886 sample scenarios 882, 883 summary of behavior 881 TRANSLATE 880, 888 function of RRSAF AUTH SIGNON 914 CREATE THREAD 936 description 896 examples 932 IDENTIFY 904, 936 SIGNON 909, 936 summary of behavior 902 TERMINATE IDENTIFY 930, 936 TERMINATE THREAD 928, 936 TRANSLATE 931 constants, syntax C 180 Fortran 227 CONTINUE clause of WHENEVER statement 93 CONTINUE handler (SQL procedure) description 663 example 663 correlated reference correlation name 55 SQL rules 47 usage 47 using in subquery 54 correlated subqueries 766 correlation name 55 CREATE GLOBAL TEMPORARY TABLE statement CREATE TABLE statement DEFAULT clause 20 NOT NULL clause 20 PRIMARY KEY clause 262 relationship names 264
22
CREATE TABLE statement (continued) UNIQUE clause 20, 262 usage 19 CREATE THREAD, RRSAF description 897 effect of call order 902 implicit connection 897 language examples 928 program example 936 syntax usage 926 CREATE TRIGGER activation order 287 description 277 example 277 timestamp 287 trigger naming 279 CREATE VIEW statement 26 created temporary table instances 22 table space scan 809 use of NOT NULL 22 working with 21 CS (cursor stability) optimistic concurrency control 412 page and row locking 412 CURRENDATA option of BIND plan and package options differ 420 CURRENT PACKAGESET special register dynamic plan switching 507 identify package collection 499 CURRENT RULES special register effect on check constraints 260 usage 505 CURRENT SERVER special register description 499 saving value in application program 468 CURRENT SQLID special register use in test 557 value in INSERT statement 20 cursor ambiguous 419 attributes using GET DIAGNOSTICS 116 using SQLCA 116 closing 108 CLOSE statement 113 deleting a current row 112 description 103 dynamic scrollable 115 effect of abend on position 123 example retrieving backward with scrollable cursor 125 updating specific row with rowset-positioned cursor 127 updating with non-scrollable cursor 124 updating with rowset-positioned cursor 126 insensitive scrollable 114 maintaining position 122 non-scrollable 113 open state 122 OPEN statement 105 result table 103 row-positioned declaring 103 deleting a current row 107 description 103 end-of-data condition 105
cursor (continued) row-positioned (continued) retrieving a row of data 106 steps in using 103 updating a current row 107 rowset-positioned declaring 108 description 103 end-of-data condition 109 number of rows 109 number of rows in rowset 113 opening 108 retrieving a rowset of data 109 steps in using 108 updating a current rowset 111 scrollable description 114 dynamic 115 fetch orientation 117 INSENSITIVE 114 retrieving rows 116 SENSITIVE DYNAMIC 115 SENSITIVE STATIC 114 sensitivity 114 static 115 updatable 114 static scrollable 115 types 113 WITH HOLD claims 421 description 122 locks 420
D
data adding to the end of a table 988 associated with WHERE clause 8 currency 467 effect of locks on integrity 394 improving access 789 indoubt state 435 not in a table 16 retrieval using SELECT * 987 retrieving a rowset 109 retrieving a set of rows 106 retrieving large volumes 987 scrolling backward through 983 security and integrity 431 understanding access 789 updating during retrieval 986 updating previously retrieved data 986 data encryption 266 data type built-in 4 comparisons 85 compatibility assembler and SQL 151 assembler application program 155 C and SQL 175 C application program 181 COBOL and SQL 210, 214 Fortran and SQL 225, 227 PL/I and SQL 241 PL/I application program 245 REXX and SQL 254 result set locator 710
DATE precompiler option 486 datetime data type 4 DB2 abend CAF 869 DL/I batch 576 RRSAF 901 DB2 coprocessor for C 478 for C++ 479 for COBOL 480 for PL/I 481 processing SQL statements 473 DB2 private protocol access coding an application 448 compared to DRDA access 442 mixed environment 1113 planning 442 sample program 1069 DB2I (DB2 Interactive) background processing run-time libraries 527 EDITJCL processing run-time libraries 527 help system 518 interrupting 68 menu 59 panels BIND PACKAGE 533 BIND PLAN 536 Compile, Link, and Run 553 Current SPUFI Defaults 62 DB2I Primary Option Menu 59, 519 DCLGEN 132, 139 Defaults for BIND PLAN 546 Precompile 528 Program Preparation 522 System Connection Types 550 preparing programs 518 program preparation example 522 selecting DCLGEN (declarations generator) 136 SPUFI 59 SPUFI 59 DBCS (double-byte character set) table names 131 translation in CICS 493 DBINFO stored procedure 636 user-defined function 330 DBPROTOCOL(DRDA) 456 DBRM (database request module) binding to a package 496 binding to a plan 498 deciding how to bind 384 description 477 DCLGEN subcommand of DSN building data declarations 131 COBOL example 138 column labels 133 DBCS table names 131 forming host variable names 134 identifying tables 131 INCLUDE statement 136 including declarations in a program 136 indicator variable array declaration 134 right margin 135 starting 131
DCLGEN subcommand of DSN (continued) using 131 using PDS (partitioned data set) 131 DDITV02 input data set 576 DDOTV02 output data set 578 deadlock description 395 example 395 indications in CICS 397 in IMS 397 in TSO 396 recommendation for avoiding 399 with RELEASE(DEALLOCATE) 400 X00C90088 reason code in SQLCA 396 debugging application programs 560 DEC15 precompiler option 486 rules 16 DEC31 avoiding overflow 17 precompiler option 486 rules 16 decimal 15 digit precision 16 31 digit precision 17 arithmetic 16 DECIMAL constants 180 data type, in C 179 function, in C 179 declaration generator (DCLGEN) 131 in an application program 136 variables in CAF program examples 891 DECLARE (SQL procedure) 661 DECLARE CURSOR statement description, row-positioned 103 description, rowset-positioned 108 FOR UPDATE clause 104 multilevel security 104 prepared statement 611, 615 scrollable cursor 114 WITH HOLD clause 122 WITH RETURN option 650 WITH ROWSET POSITIONING clause 108 DECLARE GLOBAL TEMPORARY TABLE statement DECLARE TABLE statement advantages of using 79 assembler 145 C 160 COBOL 188 description 79 Fortran 222 PL/I 231 table description 131 DECLARE VARIABLE statement changing CCSID 86 coding 86 description 85 declared temporary table including column defaults 24 including identity columns 23 instances 23 ON COMMIT clause 25 qualifier for 23 remote access using a three-part name 449 Index
23
declared temporary table (continued) requirements 23 working with 21 dedicated virtual memory pool 828 DEFER(PREPARE) 456 DELETE statement correlated subquery 56 description 37 positioned FOR ROW n OF ROWSET clause 112 restrictions 107 WHERE CURRENT clause 107, 112 subquery 51 deleting current rows 107 data 37 every row from a table 38 rows from a table 37 delimiter, SQL 78 department sample table creating 20 description 994 DESCRIBE CURSOR statement 710 DESCRIBE INPUT statement 610 DESCRIBE PROCEDURE statement 709 DESCRIBE statement column labels 621 INTO clauses 615, 617 DFHEIENT macro 148 DFSLI000 (IMS language interface module) 495 direct row access 270, 803 DISCONNECT (connection function of CAF) description 866 language examples 879 program example 886 syntax 878 syntax usage 878 displaying table columns 18 table privileges 18 DISTINCT clause of SELECT statement 7 unique values 7 distinct type assigning values 369 comparing types 368 description 367 example argument of user-defined function (UDF) 372 arguments of infix operator 372 casting constants 372 casting function arguments 372 casting host variables 372 LOB data type 372 function arguments 371 strong typing 368 UNION of 371 distributed data choosing an access method 442 coordinating updates 452 copying a remote table 467 DBPROTOCOL bind option 444, 449 encoding scheme of retrieved data 466 example accessing remote temporary table 449 calling stored procedure at remote location 444 connecting to remote server 444, 450
distributed data (continued) example (continued) limiting number of retrieved rows 464 specifying location in table name 444 using alias for multiple sites 451 using RELEASE statement 451 using three-part table names 448 executing long SQL statements 466 identifying server at run time 468 LOB performance setting CURRENT RULES special register 455 using LOB locators 455 using stored procedure result sets 455 maintaining data currency 467 moving from DB2 private protocol access to DRDA access 442 performance choosing bind options 456 coding efficient queries 454 forcing block fetch 458 limiting number of retrieved rows 461, 464 optimizing access path 457 specifying package list 456 using block fetch 458 using DRDA 458 performance considerations 456 planning access by a program 441 DB2 private protocol access 444 DRDA access 444 program preparation 447 programming coding with DB2 private protocol access 448 coding with DRDA access 448 resource limit facility 441 restricted systems 453 retrieving from ASCII or Unicode tables 466 savepoints 442 scrollable cursors 442 terminology 441 three-part table names 448 transmitting mixed data 467 two-phase commit 452 using alias for location 451 DL/I batch application programming 574 checkpoint call 433 checkpoint ID 582 commit and rollback coordination 438 DB2 requirements 574 DDITV02 input data set 576 DSNMTV01 module 579 features 573 SSM= parameter 579 submitting an application 579 TERM call 433 double-byte character large object (DBCLOB) 297 DPSI performance considerations 772 DRDA access accessing remote temporary table 449 bind options 445, 446 coding an application 448 compared to DB2 private protocol access 442 connecting to remote server 450 mixed environment 1113 planning 442, 444
DRDA access (continued) precompiler options 445 preparing programs 445 programming hints 465 releasing connections 451 sample program 1061 SQL limitations at different servers 465 DROP TABLE statement 25 DSN applications, running with CAF 863 DSN command of TSO return code processing 509 RUN subcommands 508 services lost under CAF 863 DSN_FUNCTION_TABLE table 361 DSN_STATEMENT_CACHE_TABLE 599 DSN_STATEMNT_TABLE table column descriptions 843 DSN8BC3 sample program 218 DSN8BD3 sample program 185 DSN8BE3 sample program 185 DSN8BF3 sample program 230 DSN8BP3 sample program 248 DSNACICS stored procedure debugging 1139 description 1131 invocation example 1137 invocation syntax 1132 output 1138 parameter descriptions 1133 restrictions 1139 DSNACICX user exit routine description 1135 parameter list 1136 rules for writing 1135 DSNAEXP stored procedure description 1143 example call 1145 option descriptions 1144 output 1146 syntax diagram 1144 DSNAIMS option descriptions 1140 syntax diagram 1139 DSNAIMS stored procedure description 1139 examples 1142 DSNALI (CAF language interface module) deleting 886 loading 886 DSNCLI (CICS language interface module) 495 DSNELI (TSO language interface module) 863 DSNH command of TSO 567 DSNHASM procedure 513 DSNHC procedure 513 DSNHCOB procedure 513 DSNHCOB2 procedure 513 DSNHCPP procedure 513 DSNHCPP2 procedure 513 DSNHDECP implicit CAF connection 866 implicit RRSAF connection 897 DSNHFOR procedure 513 DSNHICB2 procedure 513 DSNHICOB procedure 513 DSNHLI entry point to DSNALI implicit calls 866 program example 890
DSNHLI entry point to DSNRLI implicit calls 897 program example 935 DSNHLI2 entry point to DSNALI 888 DSNHPLI procedure 513 DSNMTV01 module 579 DSNRLI (RRSAF language interface module) deleting 935 loading 935 DSNTEDIT CLIST 1103 DSNTEP2 and DSNTEP4 sample program specifying SQL terminator 1026 DSNTEP2 sample program how to run 1019 parameters 1020 program preparation 1019 DSNTEP4 sample program how to run 1019 parameters 1020 program preparation 1019 DSNTIAC subroutine assembler 158 C 186 COBOL 219 PL/I 249 DSNTIAD sample program how to run 1019 parameters 1020 program preparation 1019 specifying SQL terminator 1024 DSNTIAR subroutine assembler 157 C 184 COBOL 217 description 98 Fortran 229 PL/I 247 return codes 100 using 100 DSNTIAUL sample program how to run 1019 parameters 1020 program preparation 1019 DSNTIR subroutine 229 DSNTPSMP stored procedure 672 DSNTRACE data set 884 DSNXDBRM 477 DSNXNBRM 477 duration of locks controlling 408 description 404 DXXMQGEN stored procedure description 1165 invocation example 1168 invocation syntax 1166 output 1169 parameter descriptions 1166 DXXMQGENCLOB stored procedure description 1173 invocation example 1175 invocation syntax 1173 output 1176 parameter descriptions 1174 DXXMQINSERT stored procedure description 1146 invocation example 1147 invocation syntax 1147 Index
X-9
DXXMQINSERT stored procedure (continued) output 1148 parameter descriptions 1147 DXXMQINSERTALL stored procedure description 1156 invocation example 1157 invocation syntax 1156 output 1158 parameter descriptions 1157 DXXMQINSERTALLCLOB stored procedure description 1163 invocation example 1165 invocation syntax 1164 output 1165 parameter descriptions 1164 DXXMQINSERTCLOB stored procedure description 1151 invocation example 1152 invocation syntax 1151 output 1153 parameter descriptions 1152 DXXMQRETRIEVE stored procedure description 1169 invocation example 1171 invocation syntax 1169 output 1173 parameter descriptions 1170 DXXMQRETRIEVECLOB stored procedure description 1176 invocation example 1179 invocation syntax 1177 output 1180 parameter descriptions 1177 DXXMQSHRED stored procedure description 1148 invocation example 1150 invocation syntax 1149 output 1151 parameter descriptions 1149 DXXMQSHREDALL stored procedure description 1158 invocation example 1160 invocation syntax 1159 output 1160 parameter descriptions 1159 DXXMQSHREDALLCLOB stored procedure description 1161 invocation example 1162 invocation syntax 1161 output 1163 parameter descriptions 1161 DXXMQSHREDCLOB stored procedure description 1153 invocation example 1155 invocation syntax 1154 output 1156 parameter descriptions 1154 DYNAM option of COBOL 189 dynamic plan selection restrictions with CURRENT PACKAGESET special register 507 using packages with 507 dynamic prefetch description 831 dynamic SQL advantages and disadvantages 593 assembler program 614
dynamic SQL (continued) C program 614 caching effect of RELEASE bind option 409 caching prepared statements 596 COBOL application program 189 COBOL program 627 description 593 effect of bind option REOPT(ALWAYS) 625 effect of WITH HOLD cursor 607 EXECUTE IMMEDIATE statement 604 fixed-list SELECT statements 610, 613 Fortran program 222 host languages 603 non-SELECT statements 603, 607 PL/I 614 PREPARE and EXECUTE 605, 607 programming 593 requirements 594 restrictions 594 sample C program 1043 statement caching 596 statements allowed 1113 using DESCRIBE INPUT 610 varying-list SELECT statements 613, 625 DYNAMICRULES bind option 502
E
ECB (event control block) address in CALL DSNALI parameter list 869 CONNECT connection function of CAF 871, 875 CONNECT, RRSAF 904 program example 886, 888 programming with CAF (call attachment facility) 886 EDIT panel, SPUFI empty 66 SQL statements 67 embedded semicolon embedded 1025 employee photo and resume sample table 999 employee sample table 996 employee-to-project-activity sample table 1002 ENCRYPT_TDES function 266 END-EXEC delimiter 78 end-of-data condition 105, 109 error arithmetic expression 93 division by zero 93 handling 93 messages generated by precompiler 567 overflow 93 return codes 91 run 566 ESTAE routine in CAF (call attachment facility) 884 exception condition handling 93 EXCLUSIVE lock mode effect on resources 405 LOB 427 page 405 row 405 table, partition, and table space 405 EXEC SQL delimiter 78 EXECUTE IMMEDIATE statement 604 EXECUTE statement dynamic execution 607
EXECUTE statement (continued) parameter types 624 USING DESCRIPTOR clause 625 EXISTS predicate, subquery 53 EXIT handler (SQL procedure) 663 exit routine abend recovery with CAF 884 attention processing with CAF 883 DSNACICX 1135 EXPLAIN automatic rebind 391 report of outer join 817 statement description 789 index scans 802 interpreting output 800 investigating SQL processing 789 EXPLAIN PROCESSING field of panel DSNTIPO overhead 799 EXPLAIN STATEMENT CACHE ALL 599
F
FETCH FIRST n ROWS ONLY clause effect on OPTIMIZE clause 775 FETCH FIRST n ROWS ONLY clause effect on distributed performance 464 FETCH statement description, multiple rows 109 description, single row 106 fetch orientation 117 host variables 612 multiple-row assembler 145 description 109 FOR n ROWS clause 113 number of rows in rowset 113 using with descriptor 109, 111 using with host variable arrays 109 row and rowset positioning 117 scrolling through data 983 USING DESCRIPTOR clause 623 using row-positioned cursor 106 filter factor predicate 746 fixed-length character string assembler 149 COBOL 200 FLAG precompiler option 486 FLOAT precompiler option 486 FOLD value for C and CPP 487 value of precompiler option HOST 487 FOR FETCH ONLY clause 458 FOR READ ONLY clause 458 FOR UPDATE clause 104 FOREIGN KEY clause description 264 usage 265 format SELECT statement results 70 SQL in input data set 66 Fortran application program @PROCESS statement 222 assignment rules 226 byte data type 223 character host variable 223, 224
Fortran application program (continued) coding SQL statements 220 constant syntax 227 data type compatibility 227 data types 225 declaring tables 222 declaring views 222 defining the SQLDA 221 host variable, declaring 223 INCLUDE statement 222 including SQLCA 220 indicator variable 228 LOB variable 224 naming convention 222 numeric host variable 223 parallel option 223 precompiler option defaults 492 result set locator 224 ROWID variable 224 SQLCODE 220 SQLSTATE 220 statement labels 222 variable declaration 226 WHENEVER statement 222 FROM clause joining tables 39 SELECT statement 5 FRR (functional recovery routine) 884 FULL OUTER JOIN clause 41 function column when evaluated 808 function resolution 356 functional recovery routine (FRR) 884
G
GET DIAGNOSTICS output host variable processing 89 GET DIAGNOSTICS statement condition items 94 connection items 94 data types for items 94, 95 description 94 multiple-row INSERT 94 RETURN_STATUS item 665 ROW_COUNT item 109 SQL procedure 660 statement items 94 using in handler 664 global transaction RRSAF support 911, 916, 920 glossary 1187 GO TO clause of WHENEVER statement 93
Index
X-11
GROUP BY clause effect on OPTIMIZE clause 777 use with aggregate functions 11
H
handler, using in SQL procedure 663 HAVING clause selecting groups subject to conditions 11 subquery 51 HOST FOLD value for C and CPP 487 precompiler option 487 host language declarations in DB2I (DB2 Interactive) 132 dynamic SQL 603 host structure C 173 COBOL 205 description 80, 90 PL/I 240 retrieving row of data 90 using SELECT INTO 90 host variable assembler 148 C 161, 162 changing CCSID 85 character assembler 149 C 162 COBOL 194 Fortran 224 PL/I 235 COBOL 191, 192 description 79 example query 759 FETCH statement 612 floating-point assembler 149 C/C++ 179 COBOL 192 PL/I 244 Fortran 223 graphic assembler 150 C 164 COBOL 195 PL/I 235 impact on access path selection 759 in equal predicate 763 inserting into tables 83 LOB assembler 301 C 301 COBOL 302 Fortran 303 PL/I 304 naming a structure C 173 COBOL 205 PL/I program 240 numeric assembler 149 C 162 COBOL 192 Fortran 223 PL/I 234
host variable (continued) PL/I 233, 234 PREPARE statement 611 REXX 254 selecting single row 81 static SQL flexibility 594 tuning queries 759 updating values in tables 82 using 80 using INSERT with VALUES clause 83 using SELECT INTO 81 using SELECT INTO with aggregate function using SELECT INTO with expressions 82 host variable array C 161, 168 character C 169 COBOL 200 PL/I 238 COBOL 191, 199 description 80, 86 graphic C 171 COBOL 203 PL/I 238 indicator variable array 87 inserting multiple rows 87 numeric C 168 COBOL 199 PL/I 237 PL/I 233, 237 retrieving multiple rows 87 hybrid join description 821
82
I
I/O processing parallel queries 849 IDENTIFY, RRSAF program example 936 syntax 904 usage 904 identity column defining 31, 271 IDENTITY_VAL_LOCAL function 272 inserting in table 983 inserting values into 30 trigger 281 using as parent key 272 IF statement (SQL procedure) 660 IKJEFT01 terminal monitor program in TSO 510 IMS checkpoint calls 434 CHKP call 433 commit point 434 environment planning 511 language interface module (DFSLI000) 495 link-editing 495 recovery 433, 435 restrictions on commit 435 ROLB call 433, 438 ROLL call 433, 438 SYNC call 433 unit of work 433, 434
IMS (continued) XRST call 435 IMS transactions stored procedure multiple connections 1143 option descriptions 1140 syntax diagram 1139 IN predicate, subquery 52 INCLUDE statement, DCLGEN output 136 index access methods access path selection 810 by nonmatching index 812 IN-list index scan 812 matching index columns 802 matching index description 811 multiple 812 one-fetch index scan 814 locking 407 types foreign key 264 primary 263 unique 263 unique on primary key 261 indicator structure 90 indicator variable array declaration in DCLGEN 134 assembler application program 156 C 182 COBOL 216 description 83 Fortran 228 incorrect test for null column value 84 inserting null value 84 null value 84 PL/I 246 REXX 257 specifying 84 testing 83 indicator variable array C 182 COBOL 216 description 87 inserting null values 88 PL/I 246 specifying 88 testing for null value 87 infinite loop 16 informational referential constraint automatic query rewrite 267 description 267 INLISTP 787 INNER JOIN clause 40 input data set DDITV02 576 INSERT processing, effect of MEMBER CLUSTER option of CREATE TABLESPACE 398 INSERT statement description 27 multiple rows 29 single row 28 subquery 51 VALUES clause 27 with identity column 30 with ROWID column 30 inserting values from host variable arrays 87 values from host variables 83 INTENT EXCLUSIVE lock mode 405, 427
INTENT SHARE lock mode 405, 427 Interactive System Productivity Facility (ISPF) internal resource lock manager (IRLM) 579 invalid SQL terminator characters 1024 IS DISTINCT FROM predicate 85 ISOLATION option of BIND PLAN subcommand effects on locks 412 isolation level control by SQL statement example 421 recommendations 400 REXX 258 ISPF (Interactive System Productivity Facility) browse 61, 69 DB2 uses dialog management 59 DB2I Primary Option Menu 519 precompiling under 518 Program Preparation panel 522 programming 857, 860 scroll command 71 ISPLINK SELECT services 859 ITERATE statement (SQL procedure) 660
59
J
JCL (job control language) batch backout example 581 DDNAME list format 514 page number format 515 precompilation procedures 512 precompiler option list format 514 preparing a CICS program 516 preparing a object-oriented program 517 starting a TSO batch application 510 join operation Cartesian 819 description 815 FULL OUTER JOIN 41 hybrid description 821 INNER JOIN 40 join sequence 823 joining a table to itself 41 joining tables 39 LEFT OUTER JOIN 42 merge scan 820 more than one join 45 more than one join type 45 nested loop 818 operand nested table expression 46 user-defined table function 46 RIGHT OUTER JOIN 43 SQL rules 44 star join 823 star schema 823 join sequence definition 738
K
KEEPDYNAMIC option BIND PACKAGE subcommand 600 BIND PLAN subcommand 600
key composite 262 foreign 264 parent 261 primary choosing 261 defining 263 recommendations for defining 263 using timestamp 261 unique 983 keywords, reserved 1109
L
label, column 621 language interface modules DSNALI 652 DSNCLI 495 DSNRLI 652 program preparation 381 large object (LOB) character conversion 309 declaring host variables 300 for precompiler 300 declaring LOB locators 300 defining and moving data into DB2 297 description 297 expression 306 indicator variable 308 locator 305 materialization 305 sample applications 300 LEAVE statement (SQL procedure) 660 LEFT OUTER JOIN clause 42 level of a lock 402 LEVEL precompiler option 487 limited partition scan 806 LINECOUNT precompiler option 487 link-editing 494 AMODE option 554 RMODE option 554 list prefetch description 831 thresholds 832 load module structure of CAF (call attachment facility) 864 load module structure of RRSAF 898 LOAD MVS macro used by CAF 863 LOAD MVS macro used by RRSAF 895 LOB lock concurrency with UR readers 417 description 425 LOB (large object) lock duration 427 LOCK TABLE statement 428 locking 425 modes of LOB locks 427 modes of table space locks 427 LOB column, definition 297 LOB variable assembler 151 C 167 COBOL 198 Fortran 224 PL/I 236 LOB variable array C 172
LOB variable array (continued) COBOL 204 PL/I 239 lock avoidance 418 benefits 394 class transaction 393 compatibility 406 description 393 duration controlling 408 description 404 LOBs 427 effect of cursor WITH HOLD 420 effects deadlock 395 suspension 394 timeout 394 escalation when retrieving large numbers of rows 987 hierarchy description 402 LOB locks 425 mode 404 object description 407 indexes 407 options affecting access path 424 bind 408 cursor stability 412 program 408 read stability 415 repeatable read 416 uncommitted read 414 page locks CS, RS, and RR compared 416 description 402 recommendations for concurrency 397 size page 402 partition 402 table 402 table space 402 unit of work 431, 432 LOCK TABLE statement effect on auxiliary tables 428 effect on locks 422 LOCKPART clause of CREATE and ALTER TABLESPACE effect on locking 403 LOCKSIZE clause recommendations 398 LOOP statement (SQL procedure) 660
M
mapping macro assembler applications 158 DSNXDBRM 477 DSNXNBRM 477 MARGINS precompiler option 487 mass delete contends with UR process 417 materialization LOBs 305 outer join 817
materialization (continued) views and nested table expressions 837 MEMBER CLUSTER option of CREATE TABLESPACE merge processing views or nested table expressions 836 message analyzing 567 CAF errors 881 obtaining text assembler 157 C 184 COBOL 217 Fortran 229 PL/I 247 RRSAF errors 902 mixed data converting 467 description 4 transmitting to remote location 467 MLS (multilevel security) referential constraints 266 triggers 289 mode of a lock 404 modified source statements 477 MQSeries DB2 functions commit environment 948 connecting applications 962 MQPUBLISH 945 MQPUBLISHXML 947 MQREAD 945 MQREADALL 946 MQREADALLCLOB 946 MQREADALLXML 947 MQREADCLOB 945 MQREADXML 947 MQRECEIVE 945 MQRECEIVEALL 946 MQRECEIVEALLCLOB 946 MQRECEIVEALLXML 947 MQRECEIVECLOB 945 MQRECEIVEXML 947 MQSEND 945 MQSENDXML 947 MQSENDXMLFILE 947 MQSENDXMLFILECLOB 947 MQSUBSCRIBE 945 MQUNSUBSCRIBE 945 programming considerations 944 retrieving messages 961 sending messages 960 DB2 scalar functions 944 DB2 stored procedures DXXMQINSERT 947, 948 DXXMQINSERTALL 947, 948 DXXMQINSERTALLCLOB 947 DXXMQINSERTCLOB 947 DXXMQRETRIEVE 948 DXXMQRETRIEVECLOB 948 DXXMQSHRED 947 DXXMQSHREDALL 947 DXXMQSHREDALLCLOB 947 DXXMQSHREDCLOB 947 DB2 table functions 946 DB2 XML-specific functions 947 description 941
398
multilevel security (MLS) check referential constraints 266 triggers 289 multiple-mode IMS programs 436 multiple-row FETCH statement checking DB2_LAST_ROW 96 specifying indicator arrays 88 SQLCODE +100 92 testing for null 88 multiple-row INSERT statement dynamic execution 608 NOT ATOMIC CONTINUE ON SQLEXCEPTION 94 using GET DIAGNOSTICS 94 MVS 31-bit addressing 554
N
naming convention assembler 145 C 161 COBOL 189 Fortran 222 PL/I 232 REXX 253 tables you create 20 NATIONAL data type 213 nested table expression correlated reference 46 correlation name 46 join operation 46 processing 836 NEWFUN enabling V8 new object 472 precompiler option 488 NODYNAM option of COBOL 190 NOFOR precompiler option 488 NOGRAPHIC precompiler option 488 noncorrelated subqueries 767 nonsegmented table space scan 810 nontabular data storage 989 NOOPTIONS precompiler option 488 NOPADNTSTR precompiler option 488 NOSOURCE precompiler option 488 NOT FOUND clause of WHENEVER statement notices, legal 1183 NOXREF precompiler option 488 NUL character in C 161 NUL-terminated string in C 180 NULL pointer in C 161 null value column value of UPDATE statement 37 host structure 90 indicator variable 84 indicator variable array 87 inserting into columns 84 IS DISTINCT FROM predicate 84 IS NULL predicate 84 Null, in REXX 253 numeric data width of column in results 70 numeric data description 4 width of column in results 64
93
numeric host variable assembler 149 C 162 COBOL 192 Fortran 223 PL/I 234 numeric host variable array C 168 COBOL 199 PL/I 237
O
object of a lock 407 object-oriented program, preparation 517 ON clause, joining tables 39 ONEPASS precompiler option 488 OPEN statement opening a cursor 105 opening a rowset cursor 108 performance 835 prepared SELECT 612 USING DESCRIPTOR clause 625 without parameter markers 623 OPEN (connection function of CAF) description 866 language examples 876 program example 886 syntax 875 syntax usage 875 optimistic concurrency control 412 OPTIMIZE FOR n ROWS clause 776 interaction with FETCH FIRST clause 775 OPTIMIZE FOR n ROWS clause effect on distributed performance 461 OPTIONS precompiler option 489 ORDER BY clause derived columns 10 effect on OPTIMIZE clause 777 SELECT statement 10 with AS clause 10 organization application examples 1013 originating task 850 outer join EXPLAIN report 817 FULL OUTER JOIN 41 LEFT OUTER JOIN 42 materialization 817 RIGHT OUTER JOIN 43 output host variable processing 89 errors 89
P
package advantages 385 binding DBRM to a package 495 EXPLAIN option for remote 799 PLAN_TABLE 791 remote 496 to plans 498 deciding how to use 384 identifying at run time 498
package (continued) invalidated 389 dropping objects 387 listing 498 location 499 rebinding examples 388 rebinding with pattern-matching characters selecting 498, 499 trigger 389 version, identifying 502 PADNTSTR precompiler option 489 page locks description 402 PAGE_RANGE column of PLAN_TABLE 806 panel Current SPUFI Defaults 62, 65 DB2I Primary Option Menu 59 DCLGEN 132, 139 DSNEDP01 132, 139 DSNEPRI 59 DSNESP01 59 DSNESP02 62 DSNESP07 65 EDIT (for SPUFI input data set) 66 SPUFI 59 parallel processing description 847 enabling 850 related PLAN_TABLE columns 807 tuning 854 parameter marker casting in function invocation 363 dynamic SQL 606 more than one 607 values provided by OPEN 612 with arbitrary statements 624, 625 parent key 261 PARMS option 510 partition scan, limited 806 partitioned table space locking 403 performance affected by application structure 859 DEFER(PREPARE) 456 lock size 404 NODEFER(PREPARE) 456 remote queries 454, 456, 464 REOPT(ALWAYS) 457 REOPT(NONE) 457 REOPT(ONCE) 457 monitoring with EXPLAIN 789 performance considerations DPSI 772 scrollable cursor 771 PERIOD precompiler option 489 phone application, description 1013 PL/I application program character host variable 235 character host variable array 238 coding considerations 233 coding SQL statements 230 data type compatibility 245 data types 241 DB2 coprocessor 481
387
PL/I application program (continued) DBCS constants 232 DCLGEN support 136 declaring tables 231 declaring views 231 graphic host variable 235 graphic host variable array 238 host variable 233 host variable array 233 INCLUDE statement 232 including SQLCA 230 including SQLDA 231 indicator variable array 246 indicator variables 246 LOB variable 236 LOB variable array 239 naming convention 232 numeric host variable 234 numeric host variable array 237 result set locator 235 ROWID variable 236 ROWID variable array 239 SQLCODE 230 SQLSTATE 230 statement labels 232 table locator 236 variable declaration 244 WHENEVER statement 232 PLAN_TABLE table column descriptions 791 report of outer join 817 planning accessing distributed data 441 binding 384 precompiling 383 recovery 431 precompiler binding on another system 478 description 473 diagnostics 477 functions 474 input 476 maximum size of input 476 modified source statements 477 option descriptions 483 options CONNECT 445 defaults 491 DRDA access 445 SQL 445 output 477 planning for 383 precompiling programs 473 starting dynamically 514 JCL for procedures 512 submitting jobs DB2I panels 528 ISPF panels 522 submitting jobs with ISPF panels 520 using 474 predicate description 735 evaluation rules 739 filter factor 746 general rules 8 generation 755
predicate (continued) impact on access paths 735 indexable 737 join 736 local 736 modification 755 properties 735 stage 1 (sargable) 737 stage 2 evaluated 737 influencing creation 782 subquery 736 predictive governing in a distributed environment 602 with DEFER(PREPARE) 602 writing an application for 602 PRELINK utility 525 PREPARE statement dynamic execution 606 host variable 611 INTO clause 615 prepared SQL statement caching 600 statements allowed 1113 PRIMARY KEY clause ALTER TABLE statement 263 CREATE TABLE statement 262 PRIMARY_ACCESSTYPE column of PLAN_TABLE 803 problem determination, guidelines 566 program preparation 471 program problems checklist documenting error situations 560 error messages 561 project activity sample table 1001 project application, description 1013 project sample table 1000
Q
query parallelism 847 QUOTE precompiler option 489 QUOTESQL precompiler option 489
R
reason code CAF translation 885, 888 X00C10824 878, 879 X00F30050 884 X00F30083 884 X00C90088 396 X00C9008E 395 X00D44057 573 REBIND PACKAGE subcommand of DSN generating list of 1103 options ISOLATION 412 RELEASE 408 rebinding with wildcard characters 387 remote 496 REBIND PLAN subcommand of DSN generating list of 1103 options ACQUIRE 408 ISOLATION 412
REBIND PLAN subcommand of DSN (continued) options (continued) NOPKLIST 388 PKLIST 388 RELEASE 408 remote 496 REBIND TRIGGER PACKAGE subcommand of DSN 389 rebinding automatically conditions for 389 EXPLAIN processing 799 changes that require 386 list of plans and packages 389 lists of plans or packages 1103 options for 384 packages with pattern-matching characters 387 planning for 391 plans 388 plans or packages in use 384 Recoverable Resource Manager Services attachment facility (RRSAF) See RRSAF recovery identifying application requirements 437 IMS application program 433 IMS batch 439 planning for 431 recursive SQL controlling depth 1100 description 15 examples 1097 infinite loops 16 rules 15 single level explosion 1097 summarized explosion 1099 referential constraint defining 261 description 261 determining violations 989 informational 267 name 264 on tables with data encryption 266 on tables with multilevel security 266 referential integrity effect on subqueries 56 programming considerations 989 register conventions CAF (call attachment facility) 869 RRSAF 903 RELEASE option of BIND PLAN subcommand combining with other options 408 release information block (RIB) 869 RELEASE LOCKS field of panel DSNTIP4 effect on page and row locks 420 RELEASE SAVEPOINT statement 440 RELEASE statement, with DRDA access 451 reoptimizing access path 760 REPEAT statement (SQL procedure) 660 REPLACE statement (COBOL) 190 reserved keywords 1109 resetting control blocks CAF 878 RRSAF 930 RESIGNAL statement raising a condition 665 setting SQLSTATE value 667
RESIGNAL statement (SQL procedure) 661 resource limit facility (governor) description 601 writing an application for predictive governing resource unavailable condition CAF 880 RRSAF 931 restart, DL/I batch programs using JCL 581 result column join operation 39 naming with AS clause 7 result set locator assembler 150 C 166 COBOL 197 example 710 Fortran 224 how to use 710 PL/I 235 result table description 3 example 3 of SELECT statement 3 read-only 105 retrieving data in ASCII from DB2 UDB for z/OS 619 data in Unicode from DB2 UDB for z/OS 619 data using SELECT * 987 data, changing the CCSID 619 large volumes of data 987 multiple rows into host variable arrays 87 return code DSN command 509 SQL 878 RETURN statement returning SQL procedure status 665 RETURN statement (SQL procedure) 661 REXX procedure application programming interface CONNECT 250 DISCONNECT 251 EXECSQL 250 coding SQL statements 249 data type conversion 254 DSNREXX 251 error handling 253 indicator variable 257 input data type 254, 255 isolation level 258 naming convention 253 naming cursors 253 naming prepared statements 253 running 512 SQLCA 249 SQLDA 250 statement label 253 RIB (release information block) address in CALL DSNALI parameter list 869 CONNECT connection function of CAF 871 CONNECT, RRSAF 904 program example 886 RID (record identifier) pool use in list prefetch 831 RIGHT OUTER JOIN clause 43 RMODE link-edit option 554 ROLB call, IMS advantages over ROLL 439
602
ROLB call, IMS (continued) DL/I batch programs 438 ends unit of work 433 ROLL call, IMS DL/I batch programs 438 ends unit of work 433 ROLLBACK option CICS SYNCPOINT command 433 ROLLBACK statement description 61 error in IMS 573 in a stored procedure 646 TO SAVEPOINT clause 440 unit of work in TSO 432 with RRSAF 895 row selecting with WHERE clause 8 updating 36 updating current 107 updating large volumes 986 row-level security 266 ROWID coding example 805 data type 4 index-only access 802 inserting in table 983 ROWID column defining 30, 269 defining LOBs 297 inserting values into 30 using for direct row access 270 ROWID variable assembler 151 C 168 COBOL 198 Fortran 224 PL/I 236 ROWID variable array C 173 COBOL 205 PL/I 239 rowset deleting current 112 updating current 111 rowset cursor closing 113 DB2 for z/OS down-level requester 467 declaring 108 end-of-data condition 109 example 126 multiple-row FETCH 109 opening 108 using 108 rowset parameter, DB2 for z/OS support for RR (repeatable read) how locks are held (figure) 416 page and row locking 416 RRS global transaction RRSAF support 911, 916, 920 RRSAF application program examples 935 preparation 894 connecting to DB2 936 description 893 function descriptions 903 load module structure 898
RRSAF (continued) programming language 894 register conventions 903 restrictions 893 return codes AUTH SIGNON 914 CONNECT 904 SIGNON 909 TERMINATE IDENTIFY 930 TERMINATE THREAD 928 TRANSLATE 931 run environment 895 RRSAF (Recoverable Resource Manager Services attachment facility) transactions using global transactions 401 RS (read stability) page and row locking (figure) 415 RUN subcommand of DSN return code processing 509 running a program in TSO foreground 508 run-time libraries, DB2I background processing 527 EDITJCL processing 527 running application program CICS 512 errors 566 IMS 511
S
sample application call attachment facility 862 databases, for 1010 DB2 private protocol access 1069 DRDA access 1061 dynamic SQL 1043 environments 1015 languages 1015 LOB 1014 organization 1013 phone 1013 programs 1015 project 1013 RRSAF 894 static SQL 1043 stored procedure 1013 structure of 1009 use 1015 user-defined function 1014 sample program DSN8BC3 218 DSN8BD3 185 DSN8BE3 185 DSN8BF3 230 DSN8BP3 248 sample table DSN8810.ACT (activity) 993 DSN8810.DEMO_UNICODE (Unicode sample ) 1003 DSN8810.DEPT (department) 994 DSN8810.EMP (employee) 996 DSN8810.EMP_PHOTO_RESUME (employee photo and resume) 999 DSN8810.EMPPROJACT (employee-to-project activity) 1002 DSN8810.PROJ (project) 1000 PROJACT (project activity) 1001 Index
465
sample table (continued) views on 1004 savepoint description 439 distributed environment 442 RELEASE SAVEPOINT statement 440 restrictions on use 440 ROLLBACK TO SAVEPOINT 440 SAVEPOINT statement 440 setting multiple times 440 use with DRDA access 440 SAVEPOINT statement 440 scope of a lock 402 scrollable cursor comparison of types 118 DB2 UDB for z/OS down-level requester 467 distributed environment 442 dynamic dynamic model 115 fetching current row 119 fetch orientation 117 optimistic concurrency control 412 performance considerations 771 retrieving rows 116 sensitive dynamic 115 sensitive static 114 sensitivity 119 static creating delete hole 119 creating update hole 120 holes in result table 119 number of rows 117 removing holes 121 static model 115 updatable 114 scrolling backward through data 983 backward using identity columns 984 backward using ROWIDs 984 in any direction 985 ISPF (Interactive System Productivity Facility) 71 search condition comparison operators 9 NOT keyword 9 SELECT statement 49 WHERE clause 9 segmented table space locking 403 scan 810 SEGSIZE clause of CREATE TABLESPACE recommendations 810 SELECT FROM INSERT statement BEFORE trigger values 32 default values 31 description 31 inserting into view 33 multiple rows cursor sensitivity 34 effect of changes 34 effect of SAVEPOINT and ROLLBACK 35 effect of WITH HOLD 35 processing errors 35 result table of cursor 34 using cursor 33 using FETCH FIRST 33 using INPUT SEQUENCE 33 result table 32
SELECT FROM INSERT statement (continued) retrieving BEFORE trigger values 31 default values 31 generated values 31 multiple rows 31 special registers 31 using SELECT INTO 33 SELECT statement changing result format 70 clauses DISTINCT 7 FROM 5 GROUP BY 11 HAVING 11 ORDER BY 10 UNION 12 WHERE 8 derived column with AS clause 7 fixed-list 610, 613 named columns 6 parameter markers 624 search condition 49 selecting a set of rows 103 subqueries 49 unnamed columns 7 using with * (to select all columns) 5 column-name list 6 DECLARE CURSOR statement 103, 108 varying-list 613, 625 selecting all columns 5 more than one row 81 named columns 6 rows 8 some columns 6 unnamed columns 7 semicolon default SPUFI statement terminator 62 embedded 1025 sequence numbers COBOL application program 189 Fortran 222 PL/I 232 sequence object creating 273 referencing 274 using across multiple tables 274 sequences improving concurrency 401 sequential detection 832, 834 sequential prefetch bind time 831 description 830 SET clause of UPDATE statement 36 SET CURRENT DEGREE statement 850 SET CURRENT PACKAGESET statement 499 SET ENCRYPTION PASSWORD statement 266 setting SQL terminator DSNTIAD 1024 SPUFI 67 SHARE INTENT EXCLUSIVE lock mode 405, 427 lock mode LOB 427 page 405
SHARE (continued) lock mode (continued) row 405 table, partition, and table space 405 SIGNAL statement raising a condition 665 setting condition message text 666 SIGNAL statement (SQL procedure) 661 SIGNON, RRSAF program example 936 syntax 909 usage 909 simple table space locking 403 single-mode IMS programs 436 SOME quantified predicate 52 sort program RIDs (record identifiers) 835 when performed 835 removing duplicates 835 shown in PLAN_TABLE 834 sort key ORDER BY clause 10 ordering 10 SOURCE precompiler option 489 special register behavior in stored procedures 647 CURRENT PACKAGE PATH 500 CURRENT PACKAGESET 500 CURRENT RULES 505 user-defined functions 342 SPUFI browsing output 69 changed column widths 70 CONNECT LOCATION field 61 created column heading 71 DB2 governor 68 default values 62 entering comments 67 panels allocates RESULT data set 60 filling in 60 format and display output 69 previous values displayed on panel 59 selecting on DB2I menu 59 processing SQL statements 59, 68 retrieving Unicode data 67 setting SQL terminator 67 specifying SQL statement terminator 62 SQLCODE returned 70 SQL (Structured Query Language) checking execution 91 coding assembler 143 basics 77 C 158 C++ 158 COBOL 186 dynamic 627 Fortran 220 Fortran program 221 object extensions 295 PL/I 230 REXX 249 cursors 103
SQL (Structured Query Language) (continued) dynamic coding 593 sample C program 1043 statements allowed 1113 host variable arrays 79 host variables 79 keywords, reserved 1109 return codes checking 91 handling 98 statement terminator 1024 string delimiter 527 structures 79 syntax checking 465 varying-list 613, 625 SQL communication area (SQLCA) description 91 using DSNTIAR to format 98 SQL precompiler option 490 SQL procedure conditions, handling 663 forcing SQL error 667 preparation using DSNTPSMP procedure 670 program preparation 669 referencing SQLCODE and SQLSTATE 664 SQL variable 661 statements allowed 1118 SQL procedure statement CALL statement 660 CASE statement 660 compound statement 660 CONTINUE handler 663 EXIT handler 663 GET DIAGNOSTICS statement 660 GOTO statement 660 handler 663 handling errors 663 IF statement 660 ITERATE statement 660 LEAVE statement 660 LOOP statement 660 REPEAT statement 660 RESIGNAL statement 661 RETURN statement 661 SIGNAL statement 661 SQL statement 660 WHILE statement 660 SQL statement (SQL procedure) 660 SQL statement nesting restrictions 363 stored procedures 363 user-defined functions 363 SQL statement terminator modifying in DSNTEP2 and DSNTEP4 1026 modifying in DSNTIAD 1024 modifying in SPUFI 62 specifying in SPUFI 62 SQL statements ALLOCATE CURSOR 710 ALTER FUNCTION 314 ASSOCIATE LOCATORS 709 CLOSE 108, 113, 612 COBOL program sections 187 coding REXX 252 comments assembler 145 Index
SQL statements (continued) comments (continued) C 160 COBOL 188 Fortran 221 PL/I 231 REXX 252 CONNECT (Type 1) 453 CONNECT (Type 2) 453 CONNECT, with DRDA access 450 continuation assembler 145 C 160 COBOL 188 Fortran 221 PL/I 231 REXX 252 CREATE FUNCTION 314 DECLARE CURSOR description 103, 108 example 611, 615 DECLARE TABLE 79, 131 DELETE description 107 example 37 DESCRIBE 615 DESCRIBE CURSOR 710 DESCRIBE PROCEDURE 709 embedded 476 error return codes 98 EXECUTE 607 EXECUTE IMMEDIATE 604 EXPLAIN monitor access paths 789 FETCH description 106, 109 example 612 INSERT 27 labels assembler 146 C 161 COBOL 189 Fortran 222 PL/I 232 REXX 253 margins assembler 145 C 161 COBOL 189 Fortran 222 PL/I 232 REXX 253 OPEN description 105, 108 example 612 PREPARE 606 RELEASE, with DRDA access 451 SELECT description 8 joining a table to itself 41 joining tables 39 SELECT FROM INSERT 31 SET CURRENT DEGREE 850 set symbols 147 UPDATE description 107, 111, 112 example 36
SQL statements (continued) WHENEVER 93 SQL terminator, specifying in DSNTEP2 and DSNTEP4 SQL terminator, specifying in DSNTIAD 1024 SQL variable 661 SQL-INIT-FLAG, resetting 191 SQLCA (SQL communication area) assembler 143 C 158 checking SQLCODE 92 checking SQLERRD(3) 91 checking SQLSTATE 92 checking SQLWARN0 92 COBOL 186 description 91 DSNTIAC subroutine assembler 158 C 186 COBOL 219 PL/I 249 DSNTIAR subroutine assembler 157 C 184 COBOL 217 Fortran 229 PL/I 247 Fortran 220 PL/I 230 reason code for deadlock 396 reason code for timeout 395 REXX 249 sample C program 1043 SQLCODE -510 419 -923 577 -925 438, 573 -926 438, 573 +004 878, 879 +100 93 +256 884 +802 94 referencing in SQL procedure 664 values 92 SQLDA (SQL descriptor area) allocating storage 110, 616 assembler 144 assembler program 614 C 159, 614 COBOL 187 declaring 110 dynamic SELECT example 618 for LOBs and distinct types 621 Fortran 220 multiple-row FETCH statement 110 no occurrences of SQLVAR 615 OPEN statement 612 parameter in CAF TRANSLATE 880 parameter in RRSAF TRANSLATE 931 parameter markers 624 PL/I 230, 614 requires storage addresses 619 REXX 250 setting output fields 110 varying-list SELECT statement 614 SQLERROR clause of WHENEVER statement 93 SQLFLAG precompiler option 490 SQLN field of SQLDA 615
1026
SQLRULES, option of BIND PLAN subcommand 505 SQLSTATE 01519 94 2D521 438, 573 57015 577 referencing in SQL procedure 664 values 92 SQLVAR field of SQLDA 617 SQLWARNING clause of WHENEVER statement 93 SSID (subsystem identifier), specifying 526 SSN (subsystem name) CALL DSNALI parameter list 869 parameter in CAF CONNECT function 871 parameter in CAF OPEN function 875 parameter in RRSAF CONNECT function 904 SQL calls to CAF (call attachment facility) 866 SQL calls to RRSAF (recoverable resources services attachment facility) 897 star join 823 dedicated virtual memory pool 828 star schema defining indexes for 782 state of a lock 404 statement cache table 599 statement table column descriptions 843 static SQL description 593 host variables 594 sample C program 1043 STDDEV function when evaluation occurs 808 STDSQL precompiler option 490 STOP DATABASE command timeout 395 storage acquiring retrieved row 617 SQLDA 616 addresses in SQLDA 619 storage group, for sample application data 1010 stored procedure accessing transition tables 345, 714 binding 653 CALL statement 681 calling from a REXX procedure 714 defining parameter lists 687, 688, 689 defining to DB2 635 DSNACICS 1131 DSNAEXP 1143 DXXMQGEN 1165 DXXMQGENCLOB 1173 DXXMQINSERT 1146 DXXMQINSERTALL 1156 DXXMQINSERTALLCLOB 1163 DXXMQINSERTCLOB 1151 DXXMQRETRIEVE 1169 DXXMQRETRIEVECLOB 1176 DXXMQSHRED 1148 DXXMQSHREDALL 1158 DXXMQSHREDALLCLOB 1161 DXXMQSHREDCLOB 1153 example 630 IMS transsactions 1139 invoking from a trigger 285 languages supported 641
stored procedure (continued) linkage conventions 684 returning non-relational data 651 returning result set 650 running as authorized program 652 statements allowed 1116 testing 725 usage 629 use of special registers 647 using COMMIT in 646 using host variables with 633 using ROLLBACK in 646 using temporary tables in 651 WLM_REFRESH 1129 writing 641 writing in REXX 654 stormdrain effect 940 string data type 4 fixed-length assembler 149 COBOL 194 PL/I 246 host variables in C 180 varying-length assembler 149 COBOL 194 PL/I 246 subquery basic predicate 51 conceptual overview 49 correlated DELETE statement 56 description 53 example 53 tuning 766 UPDATE statement 55 DELETE statement 56 description 49 EXISTS predicate 53 IN predicate 52 join transformation 768 noncorrelated 767 quantified predicate 51 referential constraints 56 restrictions with DELETE 56 tuning 766 tuning examples 770 UPDATE statement 55 use with UPDATE, DELETE, and INSERT 51 subsystem identifier (SSID), specifying 526 subsystem name (SSN) 866, 897 summarizing group values 11 SYNC call, IMS 433, 434 SYNC parameter of CAF (call attachment facility) 877, 886 synchronization call abends 576 SYNCPOINT command of CICS 433 syntax diagram how to read xxii SYSLIB data sets 513 Sysplex query parallelism splitting large queries across DB2 members 847 SYSPRINT precompiler output options section 568 source statements section, example 569 summary section, example 570 Index
SYSPRINT precompiler output (continued) symbol cross-reference section 570 used to analyze errors 568 SYSTERM output to analyze errors 567
T
table altering changing definitions 21 using CREATE and ALTER 988 copying from remote locations 467 declaring 79, 131 deleting rows 37 dependent, cycle restrictions 265 displaying, list of 18 DROP statement 25 expression, nested processing 836 filling with test data 559 incomplete definition of 263 inserting multiple rows 29 inserting single row 28 loading, in referential structure 261 locks 402 populating 559 referential structure 261 retrieving 103 selecting values as you insert rows 31 temporary 21 updating rows 36 using three-part table names 448 table expressions, nested materialization 837 table locator assembler 150 C 167 COBOL 198 PL/I 236 table space for sample application 1010 locks description 402 scans access path 809 determined by EXPLAIN 790 task control block (TCB) See TCB (task control block) TCB (task control block) capabilities with CAF 862 capabilities with RRSAF 894 issuing CAF CLOSE 878 issuing CAF OPEN 876 temporary table advantages of 22 working with 21 TERM call in DL/I 433 terminal monitor program (TMP) See TMP (terminal monitor program) TERMINATE IDENTIFY, RRSAF program example 936 syntax 930 usage 930 TERMINATE THREAD, RRSAF program example 936 syntax 928 usage 928
terminating, CAF CLOSE function 877 TEST command of TSO 561 test environment, designing 557 test tables 557 test views of existing tables 557 thread CLOSE function 866 OPEN function 866 TIME precompiler option 490 timeout description 394 indications in IMS 395 X00C9008E reason code in SQLCA 395 TMP (terminal monitor program) DSN command processor 509 running under TSO 510 transaction IMS using global transactions 401 transaction lock description 393 transaction-oriented BMP, checkpoints in 436 transition table, trigger 282 transition variable, trigger 281 TRANSLATE (connection function of CAF) description 866 language example 881 program example 888 syntax usage 880 TRANSLATE function of RRSAF syntax 931 usage 931 translating requests into SQL 988 trigger activation order 287 activation time 279 cascading 286 coding 279 data integrity 290 delete 280 description 277 FOR EACH ROW 280 FOR EACH STATEMENT 280 granularity 280 insert 280 interaction with constraints 288 interaction with security label columns 289 naming 279 parts example 277 parts of 279 subject table 279 transition table 282 transition variable 281 triggering event 279 update 280 using identity columns 281 with row-level security 289 TSO CLISTs calling application programs 511 running in foreground 511 DSNALI language interface module 863 TEST command 561 tuning DB2 queries containing host variables 759 two-phase commit, definition 452
490
U
Unicode data, retrieving from DB2 UDB for z/OS 619 sample table 1003 UNION clause columns of result table 13 combining SELECT statements 12 effect on OPTIMIZE clause 777 eliminating duplicates 13 keeping duplicates with ALL 13 removing duplicates with sort 835 UNIQUE clause 262 unit of recovery indoubt recovering CICS 433 restarting IMS 435 unit of work CICS description 432 completion commit 432 open cursors 122 releasing locks 431 roll back 432 TSO 432 description 431 DL/I batch 438 duration 431 IMS batch 438 commit point 434 ending 433 starting point 433 prevention of data access by other users 431 TSO COMMIT statement 432 completion 432 ROLLBACK statement 432 updatable cursor 104 UPDATE lock mode page 405 row 405 table, partition, and table space 405 UPDATE statement correlated subqueries 55 description 36 positioned FOR ROW n OF ROWSET 112 restrictions 107 WHERE CURRENT clause 107, 111 SET clause 36 subquery 51 updating during retrieval 986 large volumes 986 values from host variables 82 UR (uncommitted read) concurrent access restrictions 417 effect on reading LOBs 426 page and row locking 414 recommendation 401 USE AND KEEP EXCLUSIVE LOCKS option of WITH clause 421 USE AND KEEP SHARE LOCKS option of WITH clause
421
USE AND KEEP UPDATE LOCKS option of WITH clause 421 USER special register value in INSERT statement 20 value in UPDATE statement 37 user-defined function statements allowed 1116 user-defined function (UDF) abnormal termination 363 accessing transition tables 345 ALTER FUNCTION statement 314 authorization ID 351 call type 329 casting arguments 362 characteristics 314 coding guidelines 318 concurrent 352 CREATE FUNCTION statement 314 data type promotion 359 DBINFO structure 331 definer 312 defining 314 description 311 diagnostic message 328 DSN_FUNCTION_TABLE 361 example external scalar 312, 316 external table 318 function resolution 359 overloading operator 317 sourced 317 SQL 317 function resolution 356 host data types assembler 323 C 323 COBOL 323 PL/I 323 implementer 312 implementing 318 indicators input 327 result 328 invoker 312 invoking 355 invoking from a trigger 285 invoking from predicate 365 main program 319 multiple programs 351 naming 328 nesting SQL statements 363 parallelism considerations 320 parameter conventions 321 assembler 334 C 334 COBOL 338 PL/I 341 preparing 350 reentrant 351 restrictions 319 samples 313 scratchpad 328, 344 scrollable cursor 366 setting result values 327 simplifying function resolution 360 special registers 342 specific name 328 Index
user-defined function (UDF) (continued) steps in creating and using 312 subprogram 319 syntax for invocation 355 table locators assembler 346 C 348 COBOL 348 PL/I 349 testing 352 types 311 user-defined table function improving query performance 779 USING DESCRIPTOR clause EXECUTE statement 625 FETCH statement 623 OPEN statement 625
V
VALUES clause, INSERT statement 27 variable declaration assembler 154 C 179 COBOL 212 Fortran 226 PL/I 244 declaring in SQL procedure 661 host assembler 148 COBOL 192 Fortran 223 PL/I 234 variable array host C 168 COBOL 199 PL/I 237 VARIANCE function when evaluation occurs 808 varying-length character string assembler 149 COBOL 201 version of a package 502 VERSION precompiler option 491, 502 view contents 26 declaring 79 description 25 dropping 27 EXPLAIN 839, 840 identity columns 26 join of two or more tables 26 processing view materialization description 837 view materialization in PLAN_TABLE 806 view merge 836 referencing special registers 26 retrieving 103 summary data 26 union of two or more tables 26 using deleting rows 37 inserting rows 27 updating rows 36 Visual Explain 775, 789 volatile table 778
W
WHENEVER statement assembler 146 C 161 COBOL 189 CONTINUE clause 93 Fortran 222 GO TO clause 93 NOT FOUND clause 93, 106 PL/I 232 specifying 93 SQL error codes 93 SQLERROR clause 93 SQLWARNING clause 93 WHERE clause SELECT statement description 8 joining a table to itself 41 joining tables 39 subquery 51 WHILE statement (SQL procedure) 660 WITH clause common table expressions 13 specifies isolation level 421 WITH HOLD clause and CICS 123 and IMS 123 DECLARE CURSOR statement 122 restrictions 123 WITH HOLD cursor effect on dynamic SQL 607 effect on locks and claims 420 WLM_REFRESH stored procedure description 1129 option descriptions 1130 sample JCL 1131 syntax diagram 1130 write-down privilege 289
X
XREF precompiler option 491 XRST call, IMS 435