SQL Programming
RBAF-Y000-00
AS/400e IBM
© Copyright International Business Machines Corporation 1997, 1999. All rights reserved.
Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is
subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents
About DB2 UDB for AS/400 SQL Programming. . . . . . . . . . . . xvii
Who should read this book . . . . . . . . . . . . . . . . . . . . xvii
Assumptions Relating to Examples of SQL Statements . . . . . . . . . xvii
How to Interpret Syntax Diagrams in this Guide . . . . . . . . . . . xviii
AS/400 Operations Navigator . . . . . . . . . . . . . . . . . . . xix
Installing Operations Navigator. . . . . . . . . . . . . . . . . . xx
How this book has changed . . . . . . . . . . . . . . . . . . . . xxi
Prerequisite and related information . . . . . . . . . . . . . . . . . xxi
How to send your comments . . . . . . . . . . . . . . . . . . . xxi
Contents v
Returning a Completion Status to the Calling Program . . . . . . . . . . 134
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Example 1. ILE C and PL/I Procedures Called From ILE C Applications . . 136
Chapter 11. Common Concepts and Rules for Using SQL with Host
Languages . . . . . . . . . . . . . . . . . . . . . . . . . 215
Using Host Variables in SQL Statements . . . . . . . . . . . . . . . 215
Assignment Rules . . . . . . . . . . . . . . . . . . . . . . 216
Indicator Variables . . . . . . . . . . . . . . . . . . . . . . 219
Handling SQL Error Return Codes . . . . . . . . . . . . . . . . . 221
Handling Exception Conditions with the WHENEVER Statement . . . . . . 222
Names . . . . . . . . . . . . . . . . . . . . . . . . . . 254
COBOL Compile-Time Options. . . . . . . . . . . . . . . . . . 254
Statement Labels . . . . . . . . . . . . . . . . . . . . . . 255
WHENEVER Statement . . . . . . . . . . . . . . . . . . . . 255
Multiple source programs. . . . . . . . . . . . . . . . . . . . 255
Using Host Variables . . . . . . . . . . . . . . . . . . . . . . 255
Declaring Host Variables . . . . . . . . . . . . . . . . . . . . 255
Using Host Structures . . . . . . . . . . . . . . . . . . . . . . 264
Host Structure . . . . . . . . . . . . . . . . . . . . . . . . 265
Host Structure Indicator Array . . . . . . . . . . . . . . . . . . 268
Using Host Structure Arrays . . . . . . . . . . . . . . . . . . . 268
Host Structure Array . . . . . . . . . . . . . . . . . . . . . 269
Host Array Indicator Structure . . . . . . . . . . . . . . . . . . 272
Using External File Descriptions . . . . . . . . . . . . . . . . . . 272
Using External File Descriptions for Host Structure Arrays. . . . . . . . 273
Determining Equivalent SQL and COBOL Data Types . . . . . . . . . . 274
Notes on COBOL Variable Declaration and Usage . . . . . . . . . . 276
Using Indicator Variables . . . . . . . . . . . . . . . . . . . . . 276
Chapter 15. Coding SQL Statements in RPG for AS/400 Applications . . . 297
Defining the SQL Communications Area . . . . . . . . . . . . . . . 297
Defining SQL Descriptor Areas. . . . . . . . . . . . . . . . . . . 298
Embedding SQL Statements . . . . . . . . . . . . . . . . . . . 298
Example . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Comments . . . . . . . . . . . . . . . . . . . . . . . . . 299
Continuation for SQL Statements . . . . . . . . . . . . . . . . . 299
Including Code . . . . . . . . . . . . . . . . . . . . . . . 299
Sequence Numbers . . . . . . . . . . . . . . . . . . . . . . 299
Names . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Statement Labels . . . . . . . . . . . . . . . . . . . . . . 300
WHENEVER Statement . . . . . . . . . . . . . . . . . . . . 300
Using Host Variables . . . . . . . . . . . . . . . . . . . . . . 300
Chapter 16. Coding SQL Statements in ILE RPG for AS/400 Applications . 309
Defining the SQL Communications Area . . . . . . . . . . . . . . . 309
Defining SQL Descriptor Areas. . . . . . . . . . . . . . . . . . . 310
Embedding SQL Statements . . . . . . . . . . . . . . . . . . . 311
Example . . . . . . . . . . . . . . . . . . . . . . . . . . 311
Comments . . . . . . . . . . . . . . . . . . . . . . . . . 311
Continuation for SQL Statements . . . . . . . . . . . . . . . . . 311
Including Code . . . . . . . . . . . . . . . . . . . . . . . 312
Sequence Numbers . . . . . . . . . . . . . . . . . . . . . . 312
Names . . . . . . . . . . . . . . . . . . . . . . . . . . 312
Statement Labels . . . . . . . . . . . . . . . . . . . . . . 312
WHENEVER Statement . . . . . . . . . . . . . . . . . . . . 312
Using Host Variables . . . . . . . . . . . . . . . . . . . . . . 312
Declaring Host Variables . . . . . . . . . . . . . . . . . . . . 313
Using Host Structures . . . . . . . . . . . . . . . . . . . . . . 314
Using Host Structure Arrays . . . . . . . . . . . . . . . . . . . . 314
Declaring LOB Host Variables . . . . . . . . . . . . . . . . . . . 315
LOB Host Variables . . . . . . . . . . . . . . . . . . . . . . 315
LOB Locators . . . . . . . . . . . . . . . . . . . . . . . . 316
LOB File Reference Variables . . . . . . . . . . . . . . . . . . 316
Using External File Descriptions . . . . . . . . . . . . . . . . . . 316
External File Description Considerations for Host Structure Arrays. . . . . 317
Determining Equivalent SQL and RPG Data Types . . . . . . . . . . . 318
Notes on ILE/RPG 400 Variable Declaration and Usage . . . . . . . . 322
Using Indicator Variables . . . . . . . . . . . . . . . . . . . . . 322
Example . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Example of the SQLDA for a Multiple Row-Area Fetch . . . . . . . . . 323
Avoiding REXX Conversion . . . . . . . . . . . . . . . . . . . 332
Using Indicator Variables . . . . . . . . . . . . . . . . . . . . . 332
Chapter 18. Preparing and Running a Program with SQL Statements . . . 333
Basic Processes of the SQL Precompiler . . . . . . . . . . . . . . . 333
Input to the Precompiler . . . . . . . . . . . . . . . . . . . . 334
Source File CCSIDs . . . . . . . . . . . . . . . . . . . . . 334
Output from the Precompiler . . . . . . . . . . . . . . . . . . 335
Non-ILE Precompiler Commands . . . . . . . . . . . . . . . . . . 340
Compiling a Non-ILE Application Program . . . . . . . . . . . . . 340
ILE Precompiler Commands. . . . . . . . . . . . . . . . . . . . 341
Compiling an ILE Application Program . . . . . . . . . . . . . . . 341
Precompiling for the VisualAge C++ for OS/400 Compiler . . . . . . . . 342
Interpreting Application Program Compile Errors . . . . . . . . . . . . 343
Error and Warning Messages during a Compile . . . . . . . . . . . 343
Binding an Application . . . . . . . . . . . . . . . . . . . . . . 344
Program References . . . . . . . . . . . . . . . . . . . . . 345
Displaying Precompiler Options . . . . . . . . . . . . . . . . . . 345
Running a Program with Embedded SQL . . . . . . . . . . . . . . . 346
OS/400 DDM Considerations . . . . . . . . . . . . . . . . . . 346
Override Considerations . . . . . . . . . . . . . . . . . . . . 346
SQL Return Codes . . . . . . . . . . . . . . . . . . . . . . 347
Chapter 23. Using the DB2 UDB for AS/400 Predictive Query Governor . . 391
Cancelling a Query . . . . . . . . . . . . . . . . . . . . . . . 392
General Implementation Considerations . . . . . . . . . . . . . . . 392
User Application Implementation Considerations . . . . . . . . . . . . 392
Controlling the Default Reply to the Inquiry Message . . . . . . . . . . 393
Using the Governor for Performance Testing. . . . . . . . . . . . . . 393
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 394
Chapter 24. DB2 UDB for AS/400 Data Management and Query Optimizer
Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
General Optimization Tips . . . . . . . . . . . . . . . . . . . . 395
Data Management Methods . . . . . . . . . . . . . . . . . . . . 396
Access Path . . . . . . . . . . . . . . . . . . . . . . . . 396
Access Method . . . . . . . . . . . . . . . . . . . . . . . 397
Data Access Methods . . . . . . . . . . . . . . . . . . . . . . 420
The Optimizer . . . . . . . . . . . . . . . . . . . . . . . . . 422
Cost Estimation . . . . . . . . . . . . . . . . . . . . . . . 423
Access Plan Validation. . . . . . . . . . . . . . . . . . . . . 425
Optimizer Decision-Making Rules . . . . . . . . . . . . . . . . . 425
Join Optimization . . . . . . . . . . . . . . . . . . . . . . . 426
Grouping Optimization . . . . . . . . . . . . . . . . . . . . . 442
Effectively Using SQL Indexes . . . . . . . . . . . . . . . . . . . 446
Using Indexes With Sort Sequence . . . . . . . . . . . . . . . . . 449
Using Indexes and Sort Sequence With Selection, Joins, or Grouping . . . 449
Ordering . . . . . . . . . . . . . . . . . . . . . . . . . . 450
Example Indexes . . . . . . . . . . . . . . . . . . . . . . . 450
Tips for using VARCHAR and VARGRAPHIC data types . . . . . . . . . 456
Improving Performance of SQL PREPARE Statements . . . . . . . . . . 470
Effects on Performance When Using Long Object Names . . . . . . . . . 470
Improving Performance Using the Precompile Options . . . . . . . . . . 471
Improving Performance by Using Structure Parameter Passing Techniques . . 472
Background Information on Parameter Passing. . . . . . . . . . . . 472
Some Differences Because of Structure Parameter Passing Techniques . . 473
Controlling Parallel Processing . . . . . . . . . . . . . . . . . . . 473
Controlling Parallel Processing System Wide . . . . . . . . . . . . 474
Controlling Parallel Processing for a Job . . . . . . . . . . . . . . 474
Appendix C. Sample Programs Using DB2 UDB for AS/400 Statements . . 605
Examples of programs that use SQL statements . . . . . . . . . . . . 605
SQL Statements in ILE C and C++ Programs . . . . . . . . . . . . . 606
SQL Statements in COBOL and ILE COBOL Programs. . . . . . . . . . 613
SQL Statements in PL/I . . . . . . . . . . . . . . . . . . . . . 621
SQL Statements in RPG for AS/400 Programs . . . . . . . . . . . . . 628
SQL Statements in ILE RPG for AS/400 Programs . . . . . . . . . . . 634
SQL Statements in REXX Programs. . . . . . . . . . . . . . . . . 640
Report Produced by Sample Programs. . . . . . . . . . . . . . . . 643
CRTSQLPLI (Create Structured Query Language PL/I) Command. . . . . . 712
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . 715
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . 715
Example . . . . . . . . . . . . . . . . . . . . . . . . . . 728
CRTSQLRPG (Create Structured Query Language RPG) Command . . . . . 728
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . 731
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . 732
Example . . . . . . . . . . . . . . . . . . . . . . . . . . 744
CRTSQLRPGI (Create SQL ILE RPG Object) Command . . . . . . . . . 744
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . 748
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . 748
Example . . . . . . . . . . . . . . . . . . . . . . . . . . 761
CRTSQLPKG (Create Structured Query Language Package) Command . . . 762
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . 763
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . 763
Example . . . . . . . . . . . . . . . . . . . . . . . . . . 765
CVTSQLCPP (Convert Structured Query Language C++ Source) Command . . 766
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . 769
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . 769
Example . . . . . . . . . . . . . . . . . . . . . . . . . . 781
DLTSQLPKG (Delete Structured Query Language Package) Command. . . . 781
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . 782
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . 782
Example . . . . . . . . . . . . . . . . . . . . . . . . . . 783
PRTSQLINF (Print Structured Query Language Information) Command . . . . 783
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . 783
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . 783
Example . . . . . . . . . . . . . . . . . . . . . . . . . . 784
RUNSQLSTM (Run Structured Query Language Statement) Command . . . . 784
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . 786
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . 786
Parameters for SQL procedures . . . . . . . . . . . . . . . . . 792
Example . . . . . . . . . . . . . . . . . . . . . . . . . . 794
STRSQL (Start Structured Query Language) Command . . . . . . . . . 794
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . 796
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . 796
Example . . . . . . . . . . . . . . . . . . . . . . . . . . 801
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . 845
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
xvi DB2 UDB for AS/400 SQL Programming V4R4
About DB2 UDB for AS/400 SQL Programming
This book explains to programmers and database administrators:
v How to use the DB2 SQL for AS/400 licensed program
v How to access data in a database
v How to prepare, run, test, and optimize an application program containing SQL
statements.
For more information on DB2 UDB for AS/400 SQL guidelines and examples for
implementation in an application programming environment, see the following
books:
v DB2 UDB for AS/400 SQL Reference
v DB2 UDB for AS/400 SQL Call Level Interface
Note: The DB2 UDB for AS/400 library will only be available in the AS/400e
Information Center beginning in Version 4, Release 4.
v DATABASE 2/400 Advanced Database Functions, GG24-4249.
Because this guide is for the application programmer, most of the examples are
shown as if they were written in an application program. However, many examples
can be slightly changed and run interactively by using interactive SQL. The syntax
of an SQL statement, when using interactive SQL, differs slightly from the format of
the same statement when it is embedded in a program.
>>--required_item----------------------------------------------><

>>--required_item--+---------------+---------------------------><
                   '-optional_item-'
If an optional item appears above the main path, that item has no effect on the
execution of the statement and is used only for readability.
                   .-optional_item-.
>>--required_item--+---------------+---------------------------><
v If you can choose from two or more items, they appear vertically, in a stack.
If you must choose one of the items, one item of the stack appears on the main
path.
>>--required_item--+-required_choice1-+------------------------><
                   '-required_choice2-'
If choosing one of the items is optional, the entire stack appears below the main
path.
If one of the items is the default, it will appear above the main path and the
remaining choices will be shown below.
                   .-default_choice--.
>>--required_item--+-----------------+-------------------------><
                   +-optional_choice-+
                   '-optional_choice-'
v An arrow returning to the left, above the main line, indicates an item that can be
repeated.
                      .-----------------.
                      V                 |
>>--required_item-------repeatable_item-+----------------------><
If the repeat arrow contains a comma, you must separate repeated items with a
comma.
                      .-,---------------.
                      V                 |
>>--required_item-------repeatable_item-+----------------------><
A repeat arrow above a stack indicates that you can repeat the items in the
stack.
v Keywords appear in uppercase (for example, FROM). They must be spelled exactly
as shown. Variables appear in all lowercase letters (for example, column-name).
They represent user-supplied names or values.
v If punctuation marks, parentheses, arithmetic operators, or other such symbols
are shown, you must enter them as part of the syntax.
This new interface has been designed to make you more productive and is the only
user interface to new, advanced features of OS/400. Therefore, IBM recommends
that you use AS/400 Operations Navigator, which has online help to guide you.
While this interface is being developed, you may still need to use a traditional
emulator such as PC5250 to do some of your tasks.
To select the subcomponents that you want to install, select the Custom installation
option. (After Operations Navigator has been installed, you can add subcomponents
by using Client Access Selective Setup.)
1. Display the list of currently installed subcomponents in the Component
Selection window of Custom installation or Selective Setup.
2. Select AS/400 Operations Navigator.
3. Select any additional subcomponents that you want to install and continue with
Custom installation or Selective Setup.
After you install Client Access, double-click the AS/400 Operations Navigator icon
on your desktop to access Operations Navigator and create an AS/400 connection.
SQL consists of statements and clauses that describe what you want to do with the
data in a database and under what conditions you want to do it.
SQL can access data in a remote relational database, using the IBM Distributed
Relational Database Architecture* (DRDA*). This function is described in
Chapter 28. Distributed Relational Database Function, of this guide. Further
information about DRDA is contained in the Distributed Database Programming
book.
SQL Concepts
DB2 UDB for AS/400 SQL consists of the following main parts:
v SQL run-time support
SQL run-time support parses and runs SQL statements. This
support is that part of the Operating System/400* (OS/400) licensed program
which allows applications that contain SQL statements to be run on systems
where the DB2 UDB Query Manager and SQL Development Kit licensed program
is not installed.
v SQL precompilers
SQL precompilers support precompiling embedded SQL statements in host
languages. The following languages are supported:
– ILE C for AS/400*
– ILE C++ for AS/400
– VisualAge C++ for AS/400
– ILE COBOL for AS/400*
– COBOL for AS/400*
– AS/400 PL/I*
– RPG III (part of RPG for AS/400*)
– ILE RPG for AS/400*
The SQL host language precompilers prepare an application program containing
SQL statements. The host language compilers then compile the precompiled host
source programs. For more information on precompiling, see Chapter 18.
Preparing and Running a Program with SQL Statements. The precompiler
support is part of the DB2 UDB Query Manager and SQL Development Kit
licensed program.
v SQL interactive interface
SQL interactive interface allows you to create and run SQL statements. For more
information on interactive SQL, see Chapter 19. Using Interactive SQL.
Interactive SQL is part of the DB2 UDB Query Manager and SQL Development
Kit licensed program.
v Run SQL Statements CL command
SQL Terminology
There are two naming conventions that can be used in DB2 UDB for AS/400
programming: system (*SYS) and SQL (*SQL). The naming convention used affects
the method for qualifying file and table names and the terms used on the interactive
SQL displays. The naming convention used is selected by a parameter on the SQL
commands or, for REXX, selected through the SET OPTION statement.
System naming (*SYS): In the system naming convention, files are qualified by
library name in the form:
library/file
If the table name is not explicitly qualified and a default collection name is specified
for the default relational database collection (DFTRDBCOL) parameter of the
CRTSQLxxx 1 or the CRTSQLPKG commands, the default collection name is used.
If the table name is not explicitly qualified and the default collection name is not
specified, the qualification rules are:
v The following CREATE statements resolve to unqualified objects as follows:
– CREATE TABLE – The table is created in the current library (*CURLIB).
– CREATE VIEW – The view is created in the first library referenced in the
subselect.
– CREATE INDEX – The index is created into the collection or library that
contains the table on which the index is being built.
1. The xxx in this command refers to the host language indicators: CI for the ILE C for AS/400 language, CPPI for the ILE C++ for
AS/400 language, CBL for the COBOL for AS/400 language, CBLI for the ILE COBOL for AS/400 language, PLI for the AS/400
PL/I language, RPG for the RPG for AS/400 language, and RPGI for the ILE RPG for AS/400 language. The CVTSQLCPP
command is considered part of this group of commands even though it does not start with CRT.
SQL naming (*SQL): In the SQL naming convention, tables are qualified by the
collection name in the form:
collection.table
If the table name is not explicitly qualified and the default collection name is
specified in the default relational database collection (DFTRDBCOL) parameter of
the CRTSQLxxx command, the default collection name is used. If the table name is
not explicitly qualified and the default collection name is not specified, the rules are:
v For static SQL, the default qualifier is the user profile of the program owner.
v For dynamic SQL or interactive SQL, the default qualifier is the user profile of the
job running the statement.
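The two naming conventions can be sketched with any SQL engine that supports schema qualification. The following sketch uses SQLite through Python's sqlite3 module rather than DB2 UDB for AS/400; the ATTACH-ed schema stands in for an SQL collection, and the table and column names are invented for the example.

```python
import sqlite3

# Illustration only: SQLite stands in for DB2 UDB for AS/400, and the
# attached schema "samplecoll" plays the role of an SQL collection.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS samplecoll")
conn.execute("CREATE TABLE samplecoll.inventory_list"
             " (item_number INTEGER, unit_cost REAL)")
conn.execute("INSERT INTO samplecoll.inventory_list VALUES (153047, 10.00)")

# Explicit qualification in the *SQL style: collection.table
row = conn.execute("SELECT unit_cost FROM samplecoll.inventory_list").fetchone()
```

With system (*SYS) naming the qualifier would instead be written in the library/file form, which SQLite does not accept.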
SQL Objects
SQL objects used on the AS/400 system are collections, tables, aliases, views, SQL
packages, indexes, and catalogs. SQL creates and maintains these objects as
AS/400 database objects. A brief description of these objects follows.
Data Dictionary
A collection contains a data dictionary if it was created prior to Version 3 Release 1
or if the WITH DATA DICTIONARY clause was specified on the CREATE
COLLECTION or the CREATE SCHEMA statements. A data dictionary is a set of
tables containing object definitions. If SQL created the dictionary, then it is
automatically maintained by the system. You can work with data dictionaries by
using the interactive data definition utility (IDDU), which is part of the OS/400
program. For more information on IDDU, see the IDDU Use book.
Catalogs
An SQL catalog consists of a set of tables and views which describe tables, views,
indexes, packages, procedures, files, and constraints. This information is contained
in a set of cross-reference tables in libraries QSYS and QSYS2. Library QSYS2
also contains a set of catalog views built over the QSYS catalog tables which
describe information about all the tables, views, indexes, packages, procedures,
files, and constraints on the system. In each SQL collection there is a set of views
built over the catalog tables which contains information about the tables, views,
indexes, packages, files, and constraints in the collection.
A catalog is automatically created when you create a collection. You cannot drop or
explicitly change the catalog.
For more information about SQL catalogs, see the DB2 UDB for AS/400 SQL
Reference book.
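Because the catalog is itself a set of tables and views, you can query it with ordinary SELECT statements. The sketch below shows the idea with SQLite through Python's sqlite3 module: SQLite's sqlite_master table stands in for the QSYS2 catalog views, and the object names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projects (projno TEXT, projname TEXT)")
conn.execute("CREATE VIEW proj_names AS SELECT projname FROM projects")

# Every object the database manager creates is described in its catalog;
# here sqlite_master plays the role of the QSYS2 catalog views.
objects = {(r[0], r[1]) for r in
           conn.execute("SELECT type, name FROM sqlite_master")}
```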
Data in a table can be distributed across AS/400 systems. For more information
about distributed tables, see the DB2 Multisystem for AS/400 book.
Columns
Aliases
An alias is an alternate name for a table or view. You can use an alias wherever
an existing table or view can be referred to. For
more information on aliases, see the DB2 UDB for AS/400 SQL Reference book.
Views
A view appears like a table to an application program; however, a view contains no
data. It is created over one or more tables. A view can contain all the columns of
given tables or some subset of them, and can contain all the rows of given tables or
some subset of them. The columns can be arranged differently in a view than they
are in the tables from which they are taken. A view in SQL is a special form of a
nonkeyed logical file.
The following figure shows a view created from the preceding example of an SQL
table. Notice that the view is created only over the PROJNO and PROJNAME
columns of the table and for rows MA2110 and MA2100.
[Figure: the view contains only the PROJNO and PROJNAME columns of the table.]
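The view just described can be sketched in standard SQL. The example below uses SQLite through Python's sqlite3 module; the PROJECT table layout and its rows are invented, but as in the figure the view keeps only the PROJNO and PROJNAME columns and the MA2100 and MA2110 rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE project (projno TEXT, projname TEXT, deptno TEXT)")
conn.executemany("INSERT INTO project VALUES (?, ?, ?)",
                 [("MA2100", "Weld Line Automation", "D01"),
                  ("MA2110", "Weld Line Programming", "D11"),
                  ("PL2100", "Weld Line Planning", "B01")])

# The view stores no data of its own; it exposes two of the columns
# and a subset of the rows.
conn.execute("""CREATE VIEW ma_projects AS
                SELECT projno, projname FROM project
                WHERE projno IN ('MA2100', 'MA2110')""")
rows = conn.execute("SELECT projno FROM ma_projects ORDER BY projno").fetchall()
```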
The index is used by the system for faster data retrieval. Creating an index is
optional. You can create any number of indexes. You can create or drop an index at
any time. The index is automatically maintained by the system. However, because
the indexes are maintained by the system, a large number of indexes can adversely
affect the performance of applications that change the table.
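Index selection can be observed with any SQL engine that reports its access plan. The sketch below uses SQLite through Python's sqlite3 module; EXPLAIN QUERY PLAN is SQLite's way (not DB2's) of showing which path the optimizer chose, and the names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item_number INTEGER, item_name TEXT)")
conn.execute("CREATE INDEX inventory_ix ON inventory (item_number)")

# The system, not the programmer, decides whether to use the index;
# EXPLAIN QUERY PLAN reports the choice it made for this query.
plan = conn.execute("EXPLAIN QUERY PLAN "
                    "SELECT * FROM inventory WHERE item_number = 42").fetchall()
plan_text = " ".join(row[-1] for row in plan)
```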
Constraints
Constraints are rules enforced by the database manager. DB2 UDB for AS/400
supports the following constraints:
v Unique constraints
A unique constraint is the rule that the values of the key are valid only if they
are unique. Unique constraints can be created using the CREATE TABLE and
ALTER TABLE statements. 2
Unique constraints are enforced during the execution of INSERT and UPDATE
statements. A PRIMARY KEY constraint is a form of UNIQUE constraint. The
difference is that a PRIMARY KEY cannot contain any nullable columns.
v Referential constraints
A referential constraint is the rule that the values of the foreign key are valid
only if:
– They appear as values of a parent key, or
– Some component of the foreign key is null.
Referential constraints are enforced during the execution of INSERT, UPDATE,
and DELETE statements.
v Check constraints
A check constraint is a rule that limits the values allowed in a column or group
of columns. Check constraints can be added using the CREATE TABLE and
ALTER TABLE statements. Check constraints are enforced during the execution
of INSERT and UPDATE statements. To satisfy the constraint, each row of data
inserted or updated in the table must make the specified condition either TRUE
or unknown (due to a null value).
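All three kinds of constraint can be sketched in standard SQL. The example below uses SQLite through Python's sqlite3 module (SQLite enforces referential constraints only when its foreign_keys pragma is on); the tables and values are invented for the sketch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite-specific: enable referential checks
conn.execute("CREATE TABLE department (deptno TEXT PRIMARY KEY)")
conn.execute("""CREATE TABLE employee (
                    empno  INTEGER UNIQUE,              -- unique constraint
                    deptno TEXT REFERENCES department,  -- referential constraint
                    salary REAL CHECK (salary >= 0))""")
conn.execute("INSERT INTO department VALUES ('D11')")
conn.execute("INSERT INTO employee VALUES (10, 'D11', 52750.00)")
conn.execute("INSERT INTO employee VALUES (40, NULL, 0.00)")  # null foreign key is valid

# Each of these inserts violates one of the three constraints.
violations = []
for stmt in ("INSERT INTO employee VALUES (10, 'D11', 1000.00)",  # duplicate key
             "INSERT INTO employee VALUES (20, 'D99', 1000.00)",  # no parent row
             "INSERT INTO employee VALUES (30, 'D11', -5.00)"):   # fails the check
    try:
        conn.execute(stmt)
    except sqlite3.IntegrityError:
        violations.append(stmt)
```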
Triggers
A trigger is a set of actions that are executed automatically whenever a specified
event occurs to a specified base table. An event can be an insert, update, or delete
operation. The trigger can be run either before or after the event. For more
information on triggers, see Chapter 6. Data Integrity in this book or see the DB2
UDB for AS/400 Database Programming book.
2. Although CREATE INDEX can create a unique index that also guarantees uniqueness, such an index is not a constraint.
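A minimal trigger can be sketched in standard SQL. The example below uses SQLite through Python's sqlite3 module; the tables and the audit action are invented, and the trigger runs after the insert event, one of the timings described above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (qty INTEGER)")
conn.execute("CREATE TABLE audit (event TEXT)")

# The trigger's actions run automatically whenever an insert occurs
# on the base table ORDERS.
conn.execute("""CREATE TRIGGER orders_audit
                AFTER INSERT ON orders
                BEGIN
                    INSERT INTO audit VALUES ('inserted ' || NEW.qty);
                END""")
conn.execute("INSERT INTO orders VALUES (5)")
audit_rows = conn.execute("SELECT event FROM audit").fetchall()
```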
| User-defined functions
| A user-defined function is a program that can be invoked like any built-in function.
| DB2 UDB for AS/400 supports external functions, SQL functions, and sourced
| functions. External functions can be any AS/400 ILE program or service program.
| An SQL function is defined entirely in SQL and can contain SQL statements,
| including SQL control statements. A sourced function is built over any built-in or any
| existing user-defined function. For more information on user-defined functions, see
| “Chapter 9. Writing User-Defined Functions (UDFs)” on page 185.
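The idea of invoking a user-written routine like a built-in function can be sketched with SQLite through Python's sqlite3 module, whose create_function call registers a host-language routine with the database. This is an analogy, not the DB2 UDB for AS/400 CREATE FUNCTION mechanism, and the tax function is invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The host-language routine plays the role of an external function;
# once registered it is invoked like any built-in function.
def tax(amount):
    return round(amount * 0.07, 2)

conn.create_function("tax", 1, tax)
result = conn.execute("SELECT tax(100.0)").fetchone()[0]
```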
Packages
An SQL package is an object that contains the control structure produced when the
SQL statements in an application program are bound to a remote relational
database management system (DBMS). The DBMS uses the control structure to
process SQL statements encountered while running the application program.
SQL packages are created when a relational database name (RDB parameter) is
specified on a Create SQL (CRTSQLxxx) command and a program object is
created. Packages can also be created using the CRTSQLPKG command. For
more information about packages and distributed relational database function, see
Chapter 28. Distributed Relational Database Function
SQL packages can also be created using the QSQPRCED API. For more
information on QSQPRCED, see the System API Reference book.
With DB2 UDB for AS/400 you may need to manage the following objects:
v The original source
v Optionally, the module object for ILE programs
v The program or service program
v The SQL package for distributed programs
With a nondistributed non-ILE DB2 UDB for AS/400 program, you must manage
only the original source and the resulting program. The following shows the objects
and steps that occur during the precompile and compile processes for a
nondistributed non-ILE DB2 UDB for AS/400 program:
[Figure RV2W565-1: the user source file member is precompiled into a temporary
source file member containing the processed SQL statements, which is then
compiled into the program; the access plan is stored in the program object.]
With a nondistributed ILE DB2 UDB for AS/400 program, you may need to manage
the original source, the modules, and the resulting program or service program. The
following shows the objects involved and steps that happen during the precompile
and compile processes for a nondistributed ILE DB2 UDB for AS/400 program when
OBJTYPE(*PGM) is specified on the precompile command:
[Figure RV2W569-0: the user source file member is precompiled into a temporary
source file member, which is compiled into a module; the module is then bound
into the program.]
With a distributed non-ILE DB2 UDB for AS/400 program, you must manage the
original source, the resulting program, and the resulting package. The following
shows the objects and steps that occur during the precompile and compile
processes for a distributed non-ILE DB2 UDB for AS/400 program:
[Figure RV2W566-2: the user source file member is precompiled into a temporary
source file member containing the processed SQL statements, which is compiled
into the program; a Create SQL Package step creates the SQL package, and an
access plan is stored in both the program and the package.]
With a distributed ILE DB2 UDB for AS/400 program, you must manage the original
source, module objects, the resulting program or service program, and the resulting
packages. An SQL package can be created for each distributed module in a
distributed ILE program or service program. The following shows the objects and
steps that occur during the precompile and compile processes for a distributed ILE
DB2 UDB for AS/400 program:
[Figure RV2W570-1: the user source file members are precompiled into temporary
source file members, compiled into modules, and bound into the program or
service program; SQL packages are created for the distributed modules.]
Note: The access plans associated with the DB2 UDB for AS/400 distributed
program object are not created until the program is run locally.
Program
A program is a runnable object created as a result of the compile process for
non-ILE compiles or as a result of the bind process for ILE compiles.
An access plan is a set of internal structures and information that tells SQL how to
run an embedded SQL statement most effectively. It is created only when the
program has been successfully created. Access plans are not created during program
creation for SQL statements if the statements:
v Refer to a table or view that cannot be found
v Refer to a table or view to which you are not authorized
The access plans for such statements are created when the program is run. If, at
that time, the table or view still cannot be found or you are still not authorized, a
negative SQLCODE is returned. Access plans are stored and maintained in the
program object for nondistributed SQL programs and in the SQL package for
distributed SQL programs.
When a distributed SQL program is created, the name of the SQL package and an
internal consistency token are saved in the program. These are used at run time to
find the SQL package and to verify that the SQL package is correct for this
program. Because the name of the SQL package is critical for running distributed
SQL programs, an SQL package cannot be:
v Moved
v Renamed
v Duplicated
v Restored to a different library
Module
A module is an Integrated Language Environment (ILE) object that is created by
compiling source code using the CRTxxxMOD command (or any of the CRTBNDxxx
commands where xxx is C, CBL, CPP, or RPG). You can run a module only if you
use the Create Program (CRTPGM) command to bind it into a program. You usually
bind several modules together, but you can bind a module by itself. Modules
contain information about the SQL statements; however, the SQL access plans are
not created until the modules are bound into either a program or service program.
Service Program
A service program is an Integrated Language Environment (ILE) object that
provides a means of packaging externally supported callable routines (functions or
procedures) into a separate object. Bound programs and other service programs
can access these routines by resolving their imports to the exports provided by a
service program. The connections to these services are made when the calling
programs are created. This improves call performance to these routines without
including the code in the calling program.
The syntax for each of the SQL statements used in this chapter is described in
detail in the DB2 UDB for AS/400 SQL Reference book. A description of how to use
SQL statements and clauses in more complex situations is provided in Chapter 3.
Basic Concepts and Techniques and Chapter 5. Advanced Coding Techniques.
| In this chapter, the examples use the interactive SQL interface to show the
| execution of SQL statements. Each SQL interface provides methods for using SQL
| statements to define tables, views, and other objects, methods for updating the
| objects, and methods for reading data from the objects. Some tasks described here
| can also be done using Operations Navigator. In those cases, the information about
| the task notes that it can be done using Operations Navigator.
To start an interactive SQL session, type STRSQL on a command line and press
Enter. When the Enter SQL Statements display appears, you are ready to start
typing SQL statements. For more information on interactive SQL and the
STRSQL command, see Chapter 19. Using Interactive SQL.
If you are reusing an existing interactive SQL session, make sure that you set the
naming mode to SQL naming. You can specify this on the F13 (Services) panel,
option 1 (Change session attributes).
| Note: Running this statement causes several objects to be created and takes
| several seconds.
| After you have successfully created a collection, you can create tables, views, and
| indexes in it. Tables, views, and indexes can also be created in libraries instead of
| collections.
For an example of creating a table using interactive SQL, see “Example: Creating a
Table (INVENTORY_LIST)”.
| When creating a table, you need to understand the concepts of null value and
| default value. A null value indicates the absence of a column value for a row. It is
| not the same as a value of zero or all blanks. It means "unknown". It is not equal to
| any value, not even to other null values. If a column does not allow the null value, a
| value must be assigned to the column, either a default value or a user supplied
| value.
| On the Enter SQL Statements display, type CREATE TABLE and press F4 (Prompt).
| The following display is shown (with the input areas not yet filled in):
| Type the table name and collection name of the table you are creating,
| INVENTORY_LIST in SAMPLECOLL, for the Table and Collection prompts. Each
| column you want to define for the table is represented by an entry in the list on the
| lower part of the display. For each column, type the name of the column, the data
| type of the column, its length and scale, and the null attribute.
| Press F11 to see more attributes that can be specified for the columns. This is
| where a default value may be specified.
| Specify CREATE TABLE Statement
|
| Type information, press Enter.
|
| Table . . . . . . . . . INVENTORY_LIST______ Name
| Collection . . . . . . SAMPLECOLL__ Name, F4 for list
|
| Data: 1=BIT, 2=SBCS, 3=MIXED, 4=CCSID
|
| Column              Data   Allocate
|                            CCSID      CONSTRAINT  Default
| ITEM_NUMBER_______ _ __________ N __________________
| ITEM_NAME_________ _ __________ N '***UNKNOWN***'___
| UNIT_COST_________ _ __________ N __________________
| QUANTITY_ON_HAND__ _ __________ N NULL______________
| LAST_ORDER_DATE___ _ __________ N __________________
| ORDER_QUANTITY____ _ __________ N 20________________
| __________________ _ __________ _ __________________
| Bottom
| Table CONSTRAINT . . . . . . . . . . . . . N Y=Yes, N=No
| Distributed Table . . . . . . . . . . . . N Y=Yes, N=No
|
| F3=Exit F4=Prompt F5=Refresh F6=Insert line F10=Copy line
| F11=Display more attributes F12=Cancel F14=Delete line F24=More keys
|
| Note: Another way of entering column definitions is to press F4 (Prompt) with your
| cursor on one of the column entries in the list. This will bring up a display
| that shows all of the attributes for defining a single column.
| When all the values have been entered, press Enter to create the table. The Enter
| SQL Statements display will be shown again with a message indicating that the
| table has been created.
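| The same table can be created without prompting by typing the complete
| CREATE TABLE statement on the Enter SQL Statements display. Based on the
| columns shown above, it would look something like the following (the data
| types and lengths here are illustrative, since the prompt display above does
| not show them; the default values match those entered on the display):
|
|   CREATE TABLE SAMPLECOLL.INVENTORY_LIST
|         (ITEM_NUMBER CHAR(6) NOT NULL,
|          ITEM_NAME VARCHAR(20) NOT NULL DEFAULT '***UNKNOWN***',
|          UNIT_COST DECIMAL(8,2) NOT NULL,
|          QUANTITY_ON_HAND INTEGER DEFAULT NULL,
|          LAST_ORDER_DATE DATE,
|          ORDER_QUANTITY INTEGER DEFAULT 20)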
To change the labels for our columns, type LABEL ON COLUMN on the Enter SQL
Statements display and press F4 (Prompt). The following display will appear:
Type in the name of the table and collection containing the columns for which you
want to add labels and press Enter. The following display will be shown, prompting
you for each of the columns in the table.
Specify LABEL ON Statement
Column Heading
Column ....+....1....+....2....+....3....+....4....+....5....
ITEM_NUMBER 'ITEM NUMBER'___________________________
ITEM_NAME 'ITEM NAME'_____________________________
UNIT_COST 'UNIT COST'_____________________________
QUANTITY_ON_HAND 'QUANTITY ON HAND'_________
LAST_ORDER_DATE 'LAST ORDER DATE'_________
ORDER_QUANTITY 'NUMBER ORDERED'__________________________
Bottom
F3=Exit F5=Refresh F6=Insert line F10=Copy line F12=Cancel
F14=Delete line F19=Display system column names F24=More keys
Type the column headings for each of the columns. Column headings are defined in
20 character sections. Each section will be displayed on a different line when
showing the output of a SELECT statement. The ruler across the top of the column
heading entry area can be used to easily space the headings correctly. When the
headings are typed, press Enter.
The following message indicates that the LABEL ON statement was successful.
LABEL ON for INVEN00001 in SAMPLECOLL completed.
The table name in the message is the system table name for this table, not the
name that was actually specified in the statement. DB2 UDB for AS/400 maintains
a system name of 10 characters or less for every table; because INVENTORY_LIST
is longer than 10 characters, a system name (here, INVEN00001) was generated.
The LABEL ON statement can also be keyed in directly on the Enter SQL
statements display as follows:
LABEL ON SAMPLECOLL/INVENTORY_LIST
(ITEM_NUMBER IS 'ITEM NUMBER',
ITEM_NAME IS 'ITEM NAME',
UNIT_COST IS 'UNIT COST',
QUANTITY_ON_HAND IS 'QUANTITY ON HAND',
LAST_ORDER_DATE IS 'LAST ORDER DATE',
ORDER_QUANTITY IS 'NUMBER ORDERED')
| For an example of inserting data into a table using interactive SQL, see “Example:
| Inserting Information into a Table (INVENTORY_LIST)”.
Type the table name and collection name in the input fields as shown. Change the
Select columns to insert INTO prompt to Yes. Press Enter to see the display where
the columns you want to insert values into can be selected.
Bottom
F3=Exit F5=Refresh F12=Cancel F19=Display system column names
F20=Display entire name F21=Display statement
In this example, we want to insert values into only four of the columns. We will let
| the other columns have their default values. The sequence numbers on this display
| indicate the order in which the columns and values will be listed in the INSERT
| statement. Press Enter to show the display where values for the selected columns
| can be typed.
| Specify INSERT Statement
|
| Type values to insert, press Enter.
|
| Column Value
| ITEM_NUMBER '153047'_____________________________________________
| ITEM_NAME 'Pencils, red'_______________________________________
| UNIT_COST 10.00________________________________________________
| QUANTITY_ON_HAND 25___________________________________________________
|
|
|
|
|
|
|
|
|
|
|
| Bottom
| F3=Exit F5=Refresh F6=Insert line F10=Copy line F11=Display type
| F12=Cancel F14=Delete line F15=Split line F24=More keys
|
| Note: To see the data type and length for each of the columns in the insert list,
| press F11 (Display type). This will show a different view of the insert values
| display, providing information about the column definition.
| Type the values to be inserted for all of the columns and press Enter. A row
| containing these values will be added to the table. The values for the columns that
| were not specified will have a default value inserted. For LAST_ORDER_DATE it
| will be the null value since no default was provided and the column allows the null
| value. For ORDER_QUANTITY it will be 20, the value specified as the default value
| on the CREATE TABLE statement.
| You can type the INSERT statement on the Enter SQL Statements display as:
| INSERT INTO SAMPLECOLL.INVENTORY_LIST
|       (ITEM_NUMBER,
|        ITEM_NAME,
|        UNIT_COST,
|        QUANTITY_ON_HAND)
| VALUES ('153047',
|         'Pencils, red',
|         10.00,
|         25)
| To add the next row to the table, press F9 (Retrieve) on the Enter SQL Statements
| display. This will copy the previous INSERT statement to the typing area. You can
| either type over the values from the previous INSERT statement or press F4
| (Prompt) to use the Interactive SQL displays to enter data.
| Continue using the INSERT statement to add the following rows to the table. Values
| not shown in the chart below should not be inserted so that the default will be used.
| In the INSERT statement column list, specify only the column names for which you
| want to insert a value. For example, to insert the third row, you would specify only
| ITEM_NUMBER and UNIT_COST for the column names and only the two values
| for these columns in the VALUES list.
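| For example, the INSERT statement for that third row would be typed as:
|
|   INSERT INTO SAMPLECOLL.INVENTORY_LIST
|         (ITEM_NUMBER,
|          UNIT_COST)
|   VALUES ('544931',
|           5.00)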
| The sample collection now contains two tables with several rows of data in each.
In addition to the three main clauses, there are several other clauses described in
“Using Basic SQL Statements and Clauses” on page 31, and in the DB2 UDB for
AS/400 SQL Reference book, which affect the final form of returned data.
To see the values we inserted into the INVENTORY_LIST table, type SELECT and
press F4 (Prompt). The following display will be shown:
Specify SELECT Statement
Bottom
Type choices, press Enter.
Type the table name in the FROM tables field on the display. To select all columns
from the table, type * for the SELECT columns field on the display. Press Enter and
the statement will run to select all of the data for all of the columns in the table. The
following output will be shown:
Display Data
Data width . . . . . . : 71
Position to line . . . . . Shift to column . . . . . .
....+....1....+....2....+....3....+....4....+....5....+....6....+....7.
ITEM ITEM UNIT QUANTITY LAST NUMBER
NUMBER NAME COST ON ORDER ORDERED
HAND DATE
153047 Pencils, red 10.00 25 - 20
229740 Lined tablets 1.50 120 - 20
544931 ***UNKNOWN*** 5.00 - - 20
303476 Paper clips 2.00 100 - 20
559343 Envelopes, legal 3.00 500 - 20
291124 Envelopes, standard .00 - - 20
775298 Chairs, secretary 225.00 6 - 20
073956 Pens, black 20.00 25 - 20
******** End of data ********
The column headings that were defined using the LABEL ON statement are shown.
The ITEM_NAME for the third entry has the default value that was specified in the
CREATE TABLE statement. The QUANTITY_ON_HAND column has a null value,
shown as '-', for the rows where no value was specified on the INSERT statement.
This statement could be entered on the Enter SQL Statements display as:
SELECT *
FROM SAMPLECOLL.INVENTORY_LIST
To limit the number of columns returned by the SELECT statement, the columns
you want to see must be specified. To restrict the number of output rows returned,
the WHERE clause is used. To see only the items that cost more than 10 dollars,
and only have the values for the columns ITEM_NUMBER, UNIT_COST, and
ITEM_NAME returned, type SELECT and press F4 (Prompt). The Specify SELECT
Statement display will be shown.
Specify SELECT Statement
Bottom
Type choices, press Enter.
Although only one line is initially shown for each prompt on the Specify SELECT
Statement display, F6 (Insert line) can be used to add more lines to any of the input
areas in the top part of the display. This could be used if more columns were to be
entered in the SELECT columns list, or a longer, more complex WHERE condition
were needed.
Fill in the display as shown above. When Enter is pressed, the SELECT statement
is run. The following output will be seen:
Display Data
Data width . . . . . . : 41
Position to line . . . . . Shift to column . . . . . .
....+....1....+....2....+....3....+....4.
ITEM UNIT ITEM
NUMBER COST NAME
775298 225.00 Chairs, secretary
073956 20.00 Pens, black
******** End of data ********
The only rows returned are those whose data values satisfy the condition
specified in the WHERE clause. Furthermore, the only data values returned are
those from the three columns explicitly named in the SELECT clause.
This statement could have been entered on the Enter SQL Statements display as:
SELECT ITEM_NUMBER, UNIT_COST, ITEM_NAME
FROM SAMPLECOLL.INVENTORY_LIST
WHERE UNIT_COST > 10.00
Suppose you want to see a list of all the suppliers and the item numbers and item
names for their supplied items. The item name is not in the SUPPLIERS table. It is
in the INVENTORY_LIST table. Using the common column, ITEM_NUMBER, we
can see all three of the columns as if they were from a single table.
Whenever the same column name exists in two or more tables being joined, the
column name must be qualified by the table name to specify which column is really
being referenced. In this SELECT statement, the column name ITEM_NUMBER is
defined in both tables so the column name needs to be qualified by the table name.
If the columns had different names, there would be no confusion so qualification
would not be needed.
To perform this join, the following SELECT statement can be used. Enter it by
typing it directly on the Enter SQL Statements display or by prompting. If using
prompting, both table names need to be typed on the FROM tables input line.
SELECT SUPPLIER_NUMBER, SAMPLECOLL.INVENTORY_LIST.ITEM_NUMBER, ITEM_NAME
FROM SAMPLECOLL.SUPPLIERS, SAMPLECOLL.INVENTORY_LIST
WHERE SAMPLECOLL.SUPPLIERS.ITEM_NUMBER
= SAMPLECOLL.INVENTORY_LIST.ITEM_NUMBER
Another way to enter the same statement is to use a correlation name. A correlation
name provides another name for a table name to use in a statement. A correlation
name must be used when the table names are the same. It can be specified
following each table name in the FROM list. The previous statement could be
rewritten as:
SELECT SUPPLIER_NUMBER, Y.ITEM_NUMBER, ITEM_NAME
FROM SAMPLECOLL.SUPPLIERS X, SAMPLECOLL.INVENTORY_LIST Y
WHERE X.ITEM_NUMBER = Y.ITEM_NUMBER
For more information on correlation names, see the DB2 UDB for AS/400 SQL
Reference book.
The data values in the result table represent a composite of the data values
contained in the two tables INVENTORY_LIST and SUPPLIERS. This result table
contains the supplier number from the SUPPLIERS table and the item number and
item name from the INVENTORY_LIST table. Any item numbers that do not appear
in the SUPPLIERS table are not shown in this result table. The results are not
guaranteed to be in any order unless the ORDER BY clause is specified for the
SELECT statement. Since we did not change any column headings for the
SUPPLIERS table, the SUPPLIER_NUMBER column name is used as the column
heading.
| If you want to limit the number of rows being changed during a single statement
| execution, use the WHERE clause with the UPDATE statement. For more
| information see, “The UPDATE Statement” on page 33. If you do not specify the
| WHERE clause, all of the rows in the specified table are changed. However, if you
| use the WHERE clause, the system changes only the rows satisfying the conditions
| that you specify. For more information, see “The WHERE Clause” on page 38.
| After typing the table name and collection name, press Enter. The display will be
| shown again with the list of columns in the table.
| Specify UPDATE Statement
|
| Type choices, press Enter.
|
| Table . . . . . . . . INVENTORY_LIST______ Name, F4 for list
| Collection . . . . . SAMPLECOLL__ Name, F4 for list
|
| Correlation . . . . . ____________________ Name
|
|
| Type information, press Enter.
|
| Column Value
| ITEM_NUMBER _____________________________________________________
| ITEM_NAME _____________________________________________________
| UNIT_COST _____________________________________________________
| QUANTITY_ON_HAND _____________________________________________________
| LAST_ORDER_DATE CURRENT DATE_________________________________________
| ORDER_QUANTITY 50___________________________________________________
|
| Bottom
| F3=Exit F4=Prompt F5=Refresh F6=Insert line F10=Copy line
| F11=Display type F12=Cancel F14=Delete line F24=More keys
|
| Specifying CURRENT DATE for a value will change the date in all the selected
| rows to be today’s date.
| After typing the values to be updated for the table, press Enter to see the display
| on which the WHERE condition can be specified. If a WHERE condition is not
| specified, all the rows in the table will be updated using the values from the
| previous display.
| After typing the condition, press Enter to perform the update on the table. A
| message will indicate that the function is complete.
| This statement could have been typed on the Enter SQL Statements display as:
| UPDATE SAMPLECOLL.INVENTORY_LIST
| SET LAST_ORDER_DATE = CURRENT DATE,
| ORDER_QUANTITY = 50
| WHERE ITEM_NUMBER = '303476'
| Running a SELECT statement to get all the rows from the table (SELECT * FROM
| SAMPLECOLL.INVENTORY_LIST) returns the following result:
Display Data
Data width . . . . . . : 71
Position to line . . . . . Shift to column . . . . . .
....+....1....+....2....+....3....+....4....+....5....+....6....+....7.
ITEM ITEM UNIT QUANTITY LAST NUMBER
NUMBER NAME COST ON ORDER ORDERED
HAND DATE
153047 Pencils, red 10.00 25 - 20
229740 Lined tablets 1.50 120 - 20
544931 ***UNKNOWN*** 5.00 - - 20
303476 Paper clips 2.00 100 05/30/94 50
559343 Envelopes, legal 3.00 500 - 20
291124 Envelopes, standard .00 - - 20
775298 Chairs, secretary 225.00 6 - 20
073956 Pens, black 20.00 25 - 20
******** End of data ********
Bottom
F3=Exit F12=Cancel F19=Left F20=Right F21=Split
Only the entry for Paper clips was changed. The LAST_ORDER_DATE was
changed to the current date; this date is always the date the update is run. The
ORDER_QUANTITY column (shown under the NUMBER ORDERED heading)
shows its updated value of 50.
To check a column for the null value, the IS NULL comparison is used. For
example, the rows that have no quantity on hand can be removed with a DELETE
statement like this:
DELETE FROM SAMPLECOLL.INVENTORY_LIST
WHERE QUANTITY_ON_HAND IS NULL
Running another SELECT statement after the delete has completed will return the
following result table:
Display Data
Data width . . . . . . : 71
Position to line . . . . . Shift to column . . . . . .
....+....1....+....2....+....3....+....4....+....5....+....6....+....7.
ITEM ITEM UNIT QUANTITY LAST NUMBER
NUMBER NAME COST ON ORDER ORDERED
HAND DATE
153047 Pencils, red 10.00 25 - 20
229740 Lined tablets 1.50 120 - 20
303476 Paper clips 2.00 100 05/30/94 50
559343 Envelopes, legal 3.00 500 - 20
775298 Chairs, secretary 225.00 6 - 20
073956 Pens, black 20.00 25 - 20
******** End of data ********
Bottom
F3=Exit F12=Cancel F19=Left F20=Right F21=Split
You can create a view using Operations Navigator, or you can use the SQL
CREATE VIEW statement. With the CREATE VIEW statement, defining a view on
a table is like creating a new table containing just the columns and rows you want.
When your application uses a view, it cannot access rows or columns of the table
that are not included in the view. However, rows that do not match the selection
criteria may still be inserted through a view if the SQL WITH CHECK OPTION is not
used. See Chapter 6. Data Integrity for more information on using WITH CHECK
OPTION.
| In order to create a view you must have the proper authority to the tables or
physical files on which the view is based. See the CREATE VIEW statement in the
SQL Reference for a list of authorities needed.
If you do not specify column names in the view definition, the column names will be
the same as those for the table on which the view is based.
You can make changes to a table through a view even if the view has a different
number of columns or rows than the table. For INSERT, columns in the table that
are not in the view must have a default value.
You can use the view as though it were a table, even though the view is totally
dependent on one or more tables for data. The view has no data of its own and
therefore requires no storage for the data. Because a view is derived from a table
that exists in storage, when you update the view data, you are really updating data
in the table. Therefore, views are automatically kept up-to-date as the tables they
depend on are updated.
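For example, a view that selects only the recently ordered items could be defined
with a statement like the following (the view name matches the result shown later;
the exact selection condition is a sketch here):

CREATE VIEW SAMPLECOLL.RECENT_ORDERS AS
      SELECT * FROM SAMPLECOLL.INVENTORY_LIST
      WHERE LAST_ORDER_DATE > CURRENT DATE - 1 MONTH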
In the example above, the columns in the view have the same name as the
columns in the table because no column list follows the view name. The collection
that the view is created into does not need to be the same collection as the table it
is built over. Any collection or library could be used. The following display is the
result of running the SQL statement:
SELECT * FROM SAMPLECOLL.RECENT_ORDERS
The only row selected by the view is the row that we updated to have the current
date. All other dates in our table still have the null value so they are not returned.
Example: Creating a view combining data from more than one table
You can create a view that combines data from two or more tables by naming more
than one table in the FROM clause. In the following example, the
INVENTORY_LIST table contains a column of item numbers called
ITEM_NUMBER, and a column with the cost of the item, UNIT_COST. These are
joined with the ITEM_NUMBER column and the SUPPLIER_COST column of the
SUPPLIERS table. A WHERE clause is used to limit the number of rows returned.
The view will only contain those item numbers for suppliers that can supply an item
at lower cost than the current unit cost.
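Such a view could be created with a statement like the following (the view name
LOWER_COST_SUPPLIERS is illustrative; the selected columns match the result
shown below):

CREATE VIEW SAMPLECOLL.LOWER_COST_SUPPLIERS AS
      SELECT SUPPLIER_NUMBER, A.ITEM_NUMBER, UNIT_COST, SUPPLIER_COST
        FROM SAMPLECOLL.INVENTORY_LIST A, SAMPLECOLL.SUPPLIERS B
        WHERE A.ITEM_NUMBER = B.ITEM_NUMBER
        AND SUPPLIER_COST < UNIT_COST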
Display Data
Data width . . . . . . : 51
Position to line . . . . . Shift to column . . . . . .
....+....1....+....2....+....3....+....4....+....5.
SUPPLIER_NUMBER ITEM UNIT SUPPLIER_COST
NUMBER COST
9988 153047 10.00 8.00
2424 153047 10.00 9.00
1234 229740 1.50 1.00
3366 303476 2.00 1.50
3366 073956 20.00 17.00
******** End of data ********
Bottom
F3=Exit F12=Cancel F19=Left F20=Right F21=Split
The rows that can be seen through this view are only those rows that have a
supplier cost that is less than the unit cost.
You can write SQL statements on one line or on many lines. The rules for the
continuation of lines are the same as those of the host language (the language the
program is written in).
Notes:
1. The SQL statements described in this section can be run on SQL tables and
views, and database physical and logical files. The tables, views, and files can
be either in an SQL collection or in a library.
2. Character strings specified in an SQL statement (such as those used with
WHERE or VALUES clauses) are case sensitive; that is, uppercase characters
must be entered in uppercase and lowercase characters must be entered in
lowercase. For example, if the department codes are stored in uppercase, the
following comparison does not return a result:
WHERE ADMRDEPT='a00'
Note: Because views are built on tables and actually contain no data, working with
views can be confusing. See “Creating and Using Views” on page 95 for
more information on inserting data by using a view.
The INTO clause names the columns for which you specify values. The VALUES
clause specifies a value for each column named in the INTO clause.
You must provide a value in the VALUES clause for each column named in an
INSERT statement’s column list. The column name list can be omitted if all columns
in the table have a value provided in the VALUES clause. If a column has a default
value, the keyword DEFAULT may be used as a value on the VALUES clause.
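For example, the DEFAULT keyword can supply the default order quantity while
explicit values are given for the other columns (the item values here are only an
illustration):

INSERT INTO SAMPLECOLL.INVENTORY_LIST
      (ITEM_NUMBER, UNIT_COST, ORDER_QUANTITY)
VALUES ('773521', 4.50, DEFAULT)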
It is a good idea to name all columns into which you are inserting values because:
v Your INSERT statement is more descriptive.
v You can verify that you are giving the values in the proper order based on the
column names.
v You have better data independence. The order in which the columns are defined
in the table does not affect your INSERT statement.
If the column is defined to allow null values or to have a default, you do not need to
name it in the column name list or specify a value for it. The default value is used.
If the column is defined to have a default value, the default value is placed in the
column. If DEFAULT was specified for the column definition without an explicit
default value, SQL places the default value for that data type in the column. If the
column does not have a default value defined for it, but is defined to allow the null
value (NOT NULL was not specified in the column definition), SQL places the null
value in the column.
v For numeric columns, the default value is 0.
v For fixed length character or graphic columns, the default is blanks.
| v For varying length character or graphic columns or LOB columns, the default is a
| zero length string.
v For date, time, and timestamp columns, the default value is the current date,
time, or timestamp. When inserting a block of records, the default date/time value
is extracted from the system when the block is written. This means that the
column will be assigned the same default value for each row in the block.
| v For DataLink columns, the default value corresponds to DLVALUE(’’,’URL’,’’).
| v For distinct-type columns, the default value is the default value of the
| corresponding source type.
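These rules can be seen with a small table (a hypothetical example):

CREATE TABLE SAMPLECOLL.DEFAULTS_DEMO
      (NUMERIC_COL INTEGER NOT NULL WITH DEFAULT,
       CHAR_COL CHAR(5) NOT NULL WITH DEFAULT,
       DATE_COL DATE NOT NULL WITH DEFAULT)

INSERT INTO SAMPLECOLL.DEFAULTS_DEMO (NUMERIC_COL)
      VALUES (100)

The inserted row contains 100 for NUMERIC_COL, blanks for CHAR_COL, and the
current date for DATE_COL.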
When your program attempts to insert a row that duplicates another row already in
the table, an error might occur. Multiple null values may or may not be considered
duplicate values, depending on the option used when the index was created.
v If the table has a primary key, unique key, or unique index, the row is not
inserted. Instead, SQL returns an SQLCODE of −803.
v If the table does not have a primary key, unique key, or unique index, the row
can be inserted without error.
If SQL finds an error while running the INSERT statement, it stops inserting data. If
you specify COMMIT(*ALL), COMMIT(*CS), COMMIT(*CHG), or COMMIT(*RR), no
rows are inserted; rows already inserted by this statement, in the case of INSERT
with a select-statement or blocked INSERT, are removed. If COMMIT(*NONE) is
specified, any rows already inserted are not removed.
A table created by SQL is created with the Reuse Deleted Records parameter of
*YES. This allows the database manager to reuse any rows in the table that were
marked as deleted. The CHGPF command can be used to change the attribute to
*NO. This causes INSERT to always add rows to the end of the table.
The order in which rows are inserted does not guarantee the order in which they
will be retrieved.
If the row is inserted without error, the SQLERRD(3) field of the SQLCA has a value
of 1.
Note: For blocked INSERT or for INSERT with select-statement, more than one
row can be inserted. The number of rows inserted is reflected in
SQLERRD(3).
For example, suppose an employee was relocated. To update several items of the
employee’s data in the CORPDATA.EMPLOYEE table to reflect the move, you can
specify:
UPDATE CORPDATA.EMPLOYEE
SET JOB = :PGM-CODE,
PHONENO = :PGM-PHONE
WHERE EMPNO = :PGM-SERIAL
Use the SET clause to specify a new value for each column you want to update.
The SET clause names the columns you want updated and provides the values you
want them changed to. The value you specify can be:
A column name. Replace the column’s current value with the contents of
another column in the same row.
A constant. Replace the column’s current value with the value provided in the
SET clause.
A null value. Replace the column’s current value with the null value, using the
keyword NULL. The column must be defined as capable of containing a null
value when the table was created, or an error occurs.
A host variable. Replace the column’s current value with the contents of a host
variable.
A special register. Replace the column’s current value with a special register
value; for example, USER.
An expression. Replace the column’s current value with the value that results
from an expression. The expression can contain any of the values in this list.
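Several of these kinds of values can appear in one SET clause. For example (the
5 percent increase and the host variable names are illustrative, and it is assumed
that the BONUS column of the sample employee table allows the null value):

UPDATE CORPDATA.EMPLOYEE
  SET SALARY = SALARY * 1.05,
      BONUS = NULL,
      WORKDEPT = :PGM-DEPT
  WHERE EMPNO = :PGM-SERIAL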
You can omit the WHERE clause. If you do, SQL updates each row in the table or
view with the values you supply.
If the database manager finds an error while running your UPDATE statement, it
stops updating and returns a negative SQLCODE. If you specify COMMIT(*ALL),
COMMIT(*CS), COMMIT(*CHG), or COMMIT(*RR), no rows in the table are
changed (rows already changed by this statement, if any, are restored to their
previous values). If COMMIT(*NONE) is specified, any rows already changed are
not restored to previous values.
If the database manager cannot find any rows that satisfy the search condition, an
SQLCODE of +100 is returned.
Note: UPDATE with a WHERE clause may have updated more than one row. The
number of rows updated is reflected in SQLERRD(3).
For example, suppose department D11 was moved to another place. You want to
delete each row in the CORPDATA.EMPLOYEE table with a WORKDEPT value of
D11 as follows:
DELETE FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
The WHERE clause tells SQL which rows you want to delete from the table. SQL
deletes all the rows that satisfy the search condition from the base table. You can
If SQL finds an error while running your DELETE statement, it stops deleting data
and returns a negative SQLCODE. If you specify COMMIT(*ALL), COMMIT(*CS),
COMMIT(*CHG), or COMMIT(*RR), no rows in the table are deleted (rows already
deleted by this statement, if any, are restored to their previous values). If
COMMIT(*NONE) is specified, any rows already deleted are not restored to their
previous values.
If SQL cannot find any rows that satisfy the search condition, an SQLCODE of +100
is returned.
Note: DELETE with WHERE clause may have deleted more than one row. The
number of rows deleted is reflected in SQLERRD(3).
The format and syntax shown here are very basic. SELECT INTO statements can
be more varied than the examples presented in this chapter. A SELECT INTO
statement can include the following:
1. The name of each column you want
2. The name of each host variable used to contain retrieved data
3. The name of the table or view that contains the data
4. A search condition to uniquely identify the row that contains the information you
want
5. The name of each column used to group your data
6. A search condition that uniquely identifies a group that contains the information
you want
7. The order of the results so a specific row among duplicates can be returned.
The SELECT, INTO, and FROM clauses must be specified. The other clauses are
optional.
The result table for a SELECT INTO should contain just one row. For example,
each row in the CORPDATA.EMPLOYEE table has a unique EMPNO (employee
number) column. The result of a SELECT INTO statement for this table, if the
WHERE clause contains an equal comparison on the EMPNO column, would be
exactly one row (or no rows). Finding more than one row is an error, but one row is
still returned. You can control which row will be returned in this error condition by
specifying the ORDER BY clause. If you use the ORDER BY clause, the first row in
the result table is returned.
If you want more than one row to be the result of a select-statement, use a
DECLARE CURSOR statement to select the rows, followed by a FETCH statement
to move the column values into host variables one or many rows at a time. Using
cursors is described in “Chapter 4. Using a Cursor” on page 55.
The FROM clause names the table (or view) that contains the data you are
interested in.
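For example, the following statement retrieves the name and manager of a
department into host variables (a sketch consistent with the result values shown
below; the search condition shown is one that produces this result):

SELECT DEPTNAME, MGRNO
  INTO :PGM-DEPTNAME, :PGM-MGRNO
  FROM CORPDATA.DEPARTMENT
  WHERE DEPTNO = 'C01'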
PGM-DEPTNAME PGM-MGRNO
INFORMATION CENTER 000030
If SQL is unable to find a row that satisfies the search condition, an SQLCODE of
+100 is returned.
You can retrieve data from a view in exactly the same way you retrieve data from a
table. However, there are several restrictions when you attempt to update, insert, or
delete data in a view. These restrictions are described in “Creating and Using
Views” on page 95.
If SQL finds a data mapping error while running a statement, one of two things
occurs:
v If the error occurs on an expression in the SELECT list and an indicator variable
is provided for the expression in error:
– SQL returns a −2 for the indicator variable corresponding to the expression in
error.
– SQL returns all valid data for that row.
– SQL returns a positive SQLCODE.
v If an indicator variable is not provided, SQL returns the corresponding negative
SQLCODE in the SQLCA.
For data mapping errors, the SQLCA reports only the last error detected. The
indicator variable corresponding to each result column having an error is set to −2.
If the full-select contains DISTINCT in the select list and a column in the select list
contains numeric data that is not valid, the data is considered equal to a null value
if the query is completed as a sort. If an existing index is used, the data is not
considered equal to a null.
The impact of data mapping errors on the ORDER BY clause depends on the
situation:
v If the data mapping error occurs while data is being assigned to a host variable
in a SELECT INTO or FETCH statement, and that same expression is used in
the ORDER BY clause, the result record is ordered based on the value of the
expression. It is not ordered as if it were a null (higher than all other values). This
is because the expression was evaluated before the assignment to the host
variable is attempted.
v If the data mapping error occurs while an expression in the select-list is being
evaluated and the same expression is used in the ORDER BY clause, the result
column is normally ordered as if it were a null value (higher than all other
values). If the ORDER BY clause is implemented by using a sort, the result
column is ordered as if it were a null value. If the ORDER BY clause is
You can specify that only one column be retrieved, or as many as 8000 columns.
The value of each column you name is retrieved in the order specified in the
SELECT clause.
If you want to retrieve all columns (in the same order as they appear in the row),
use an asterisk (*) instead of naming the columns:
SELECT *
.
.
.
When using the select-statement in an application program, list the column names
to give your program more data independence. There are two reasons for this:
1. When you look at the source code statement, you can easily see the one-to-one
correspondence between the column names in the SELECT clause and the host
variables named in the INTO clause.
2. If a column is added to a table or view you access and you use “SELECT * ...,”
and you create the program again from source, the INTO clause does not have
a matching host variable named for the new column. The extra column causes
you to get a warning (not an error) in the SQLCA (SQLWARN4 will contain a
“W”).
If the search condition contains character or UCS-2 graphic column predicates, the
sort sequence that is in effect when the query is run is applied to those predicates.
See “Using Sort Sequence in SQL” on page 50 for more information on sort
sequence and selection.
However, you cannot compare character strings to numbers. You also cannot
perform arithmetic operations on character data (even though EMPNO is a
character string that appears to be a number). You can add and subtract
date/time values.
v An expression identifies two values that are added (+), subtracted (−), multiplied
(*), divided (/), have exponentiation (**), or concatenated (CONCAT or ||) to result
in a value. The operands of an expression can be:
A constant (that is, a literal value)
A column
A host variable
A value returned from a function
A special register
Another expression
For example:
... WHERE INTEGER(PRENDATE - PRSTDATE) > 100
Operators on the same precedence level are applied from left to right.
v A constant specifies a literal value for the expression. For example:
... WHERE 40000 < SALARY
A search condition need not be limited to two column names or constants separated
by arithmetic or comparison operators. You can develop a complex search condition
that specifies several predicates separated by AND and OR. No matter how
complex the search condition, it supplies a TRUE or FALSE value when evaluated
against a row. There is also an unknown truth value, which is effectively false. That
is, if the value of a row is null, this null value is not returned as a result of a search
because it is not less than, equal to, or greater than the value specified in the
search condition. More complex search conditions and predicates are described in
“Performing Complex Search Conditions” on page 72.
To fully understand the WHERE clause, you need to know how SQL evaluates
search conditions and predicates, and compares the values of expressions. This
topic is discussed in the DB2 UDB for AS/400 SQL Reference book.
Comparison Operators
SQL supports the following comparison operators:
= Equal to
<> or ¬= Not equal to
< Less than
> Greater than
<= or ¬> Less than or equal to (or not greater than)
>= or ¬= Greater than or equal to (or not less than)
| The GROUP BY clause allows you to find the characteristics of groups of rows
| rather than individual rows. When you specify a GROUP BY clause, SQL divides
| the selected rows into groups such that the rows of each group have matching
| values in one or more columns or expressions. Next, SQL processes each group to
| produce a single-row result for the group. You can specify one or more columns or
For example, the CORPDATA.EMPLOYEE table has several sets of rows, and each
set consists of rows describing members of a specific department. To find the
average salary of people in each department, you could issue:
SELECT WORKDEPT, AVG(SALARY)
  FROM CORPDATA.EMPLOYEE
  GROUP BY WORKDEPT
| When you use GROUP BY, you list the columns or expressions you want SQL to
| use to group the rows. For example, suppose you want a list of the number of
| people working on each major project described in the CORPDATA.PROJECT
| table. You could issue:
SELECT MAJPROJ, COUNT(*)
  FROM CORPDATA.PROJECT
  GROUP BY MAJPROJ
| You can also specify that you want the rows grouped by more than one column or
| expression. For example, you could issue a select-statement to find the average
| salary for men and women in each department, using the CORPDATA.EMPLOYEE
| table. To do this, you could issue:
SELECT WORKDEPT, SEX, AVG(SALARY)
  FROM CORPDATA.EMPLOYEE
  GROUP BY WORKDEPT, SEX
Because you did not include a WHERE clause in this example, SQL examines and
processes all rows in the CORPDATA.EMPLOYEE table. The rows are grouped first
by department number and next (within each department) by sex before SQL
derives the average SALARY value for each group.
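Grouping by more than one column can be sketched outside of DB2 UDB for AS/400. The following Python fragment uses the standard sqlite3 module and a hypothetical miniature version of the EMPLOYEE table (the rows shown are invented for illustration); SQLite stands in for the AS/400 database only to show that GROUP BY with two columns produces one result row per (WORKDEPT, SEX) group.

```python
import sqlite3

# Hypothetical miniature EMPLOYEE table; rows are invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE EMPLOYEE (EMPNO TEXT, WORKDEPT TEXT, SEX TEXT, SALARY REAL)")
con.executemany("INSERT INTO EMPLOYEE VALUES (?, ?, ?, ?)", [
    ("000010", "A00", "F", 52750.0),
    ("000020", "A00", "M", 41250.0),
    ("000030", "C01", "F", 38250.0),
    ("000060", "C01", "M", 32250.0),
])

# Rows are grouped first by department, then by sex within each department;
# AVG is computed once per (WORKDEPT, SEX) group.
rows = con.execute(
    "SELECT WORKDEPT, SEX, AVG(SALARY) "
    "FROM EMPLOYEE GROUP BY WORKDEPT, SEX ORDER BY WORKDEPT, SEX"
).fetchall()
for dept, sex, avg_sal in rows:
    print(dept, sex, avg_sal)
```

Each printed line is one group; with these invented rows every group happens to contain a single employee, so each average equals that employee's salary.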
The HAVING clause follows the GROUP BY clause and can contain the same kind
of search condition you can specify in a WHERE clause. In addition, you can
specify column functions in a HAVING clause. For example, suppose you wanted to
retrieve the average salary of women in each department. To do this, you would
use the AVG column function and group the resulting rows by WORKDEPT and
specify a WHERE clause of SEX = 'F'.
To specify that you want this data only when all the female employees in the
selected department have an education level equal to or greater than 16 (a college
graduate), use the HAVING clause. The HAVING clause tests a property of the
group. In this case, the test is on MIN(EDLEVEL), which is a group property:
SELECT WORKDEPT, AVG(SALARY)
  FROM CORPDATA.EMPLOYEE
  WHERE SEX = 'F'
  GROUP BY WORKDEPT
  HAVING MIN(EDLEVEL) >= 16
You can use multiple predicates in a HAVING clause by connecting them with AND
and OR, and you can use NOT for any predicate of a search condition.
Note: If you intend to update a column or delete a row, you cannot include a
GROUP BY or HAVING clause in the SELECT statement within a DECLARE
CURSOR statement. (The DECLARE CURSOR statement is described in
“Chapter 4. Using a Cursor” on page 55.)
Predicates with arguments that are not column functions can be coded in either
WHERE or HAVING clauses. It is usually more efficient to code the selection criteria
in the WHERE clause. It is processed during the initial phase of the query
processing. The HAVING selection is performed in post processing of the result
table.
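The division of labor between WHERE (row selection before grouping) and HAVING (group selection after grouping) can be sketched with Python's sqlite3 module. The table contents below are invented for illustration; SQLite is used as a stand-in for the AS/400 database.

```python
import sqlite3

# Hypothetical rows: every woman in A00 has EDLEVEL >= 16, but C01 does not.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE EMPLOYEE (WORKDEPT TEXT, SEX TEXT, SALARY REAL, EDLEVEL INTEGER)")
con.executemany("INSERT INTO EMPLOYEE VALUES (?, ?, ?, ?)", [
    ("A00", "F", 52750.0, 18),
    ("A00", "F", 46500.0, 16),
    ("C01", "F", 38250.0, 14),   # disqualifies the whole C01 group
    ("C01", "F", 29750.0, 17),
])

# WHERE filters individual rows before grouping;
# HAVING tests a property of each group (here MIN(EDLEVEL)) afterward.
rows = con.execute(
    "SELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE "
    "WHERE SEX = 'F' "
    "GROUP BY WORKDEPT "
    "HAVING MIN(EDLEVEL) >= 16"
).fetchall()
print(rows)  # only department A00 qualifies
```

Only A00 survives the HAVING test, because one C01 row has EDLEVEL 14, which pulls MIN(EDLEVEL) for that group below 16.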
For example, to retrieve the names and department numbers of female employees
listed in the alphanumeric order of their department numbers, you could use this
select-statement:
SELECT LASTNAME, WORKDEPT
  FROM CORPDATA.EMPLOYEE
  WHERE SEX = 'F'
  ORDER BY WORKDEPT
Notes:
1. All columns named in the ORDER BY clause must also be named in the
SELECT list.
2. Null values are ordered as the highest value.
| To order by a column function, or something other than a column name, you can
| specify an AS clause in the select-list. To order by an expression, you can either
| specify the exact same expression in the ORDER BY clause, or you can specify an
| AS clause in the select-list.
The AS clause names the result column. This name can be specified in the ORDER
BY clause. To order by a name specified in the AS clause:
v The name must be unique in the select-list.
v The name must not be qualified.
For example, to retrieve the full name of employees listed in alphabetic order, you
could use this select-statement:
SELECT LASTNAME CONCAT FIRSTNAME AS FULLNAME ...
ORDER BY FULLNAME
| Instead of naming the columns to order the results, you can use a number. For
example, ORDER BY 3 specifies that you want the results ordered by the third
column of the results table, as specified by the select-statement. Use a number to
order the rows of the results table when the sequencing value is not a named
column.
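Ordering by a name given in an AS clause, and ordering by column position, can be sketched with Python's sqlite3 module. Note that SQLite spells string concatenation with the || operator (the manual's examples use CONCAT); the table contents are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE EMPLOYEE (LASTNAME TEXT, FIRSTNME TEXT)")
con.executemany("INSERT INTO EMPLOYEE VALUES (?, ?)",
                [("STERN", "IRVING"), ("HAAS", "CHRISTINE")])

# Name the concatenated expression with AS, then order by that name.
by_alias = con.execute(
    "SELECT LASTNAME || ' ' || FIRSTNME AS FULLNAME "
    "FROM EMPLOYEE ORDER BY FULLNAME"
).fetchall()

# ORDER BY 1 orders by the first column of the result table,
# which is the same expression, so the sequence is identical.
by_position = con.execute(
    "SELECT LASTNAME || ' ' || FIRSTNME FROM EMPLOYEE ORDER BY 1"
).fetchall()

assert by_alias == by_position
print(by_alias)
```

Both queries return the full names in ascending alphabetic order, the default collating sequence.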
You can also specify whether you want SQL to collate the rows in ascending (ASC)
or descending (DESC) sequence. An ascending collating sequence is the default. In
the above select-statement, SQL first returns the row with the lowest department
As with GROUP BY, you can specify a secondary ordering sequence (or several
levels of ordering sequences) as well as a primary one. In the example above, you
might want the rows ordered first by department number, and within each
department, ordered by employee name. To do this, specify:
... ORDER BY WORKDEPT, LASTNAME
If character columns or UCS-2 graphic columns are used in the ORDER BY clause,
ordering for these columns is based on the sort sequence in effect when the query
is run. See “Using Sort Sequence in SQL” on page 50 for more information on sort
sequence and its effect on ordering.
To get the rows that do not have a null value for the manager number, you could
change the WHERE clause like this:
WHERE MGRNO IS NOT NULL
For more information on the use of null values, see the DB2 UDB for AS/400 SQL
Reference book.
If a single statement contains more than one reference to any of CURRENT DATE,
CURRENT TIME, or CURRENT TIMESTAMP special registers, or the CURDATE,
CURTIME, or NOW scalar functions, all values are based on a single clock reading.
For remotely run SQL statements, the special registers and their contents are
shown in the following table:
When a query over a distributed table references a special register, the contents of
the special register on the system that requests the query are used. For more
information on distributed tables, see the DB2 Multisystem for AS/400 book.
Date/Time Arithmetic
Addition and subtraction are the only arithmetic operators applicable to date, time,
and timestamp values. You can increment and decrement a date, time, or
timestamp by a duration; or subtract a date from a date, a time from a time, or a
timestamp from a timestamp. For a detailed description of date and time arithmetic,
see Chapter 2 of the DB2 UDB for AS/400 SQL Reference book.
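The idea of subtracting one date from another can be sketched with Python's sqlite3 module. DB2 date subtraction yields a date duration (years, months, days); SQLite has no duration type, so the closest analogue shown here is the difference of julianday() values, which is simply a count of days. The PROJECT row below is invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE PROJECT (PROJNO TEXT, PRSTDATE TEXT, PRENDATE TEXT)")
con.execute("INSERT INTO PROJECT VALUES ('AD3100', '1982-01-01', '1983-02-01')")

# julianday() converts a date string to a Julian day number;
# subtracting two of them gives the elapsed time in days.
(days,) = con.execute(
    "SELECT julianday(PRENDATE) - julianday(PRSTDATE) FROM PROJECT"
).fetchone()
print(days)  # 396.0 days between the two dates
```

This is only an approximation of DB2 semantics: a DB2 duration distinguishes years, months, and days, while the SQLite expression collapses everything to days.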
| A table alias defines a name for the file, including the specific member name. You
| can use this alias name in an SQL statement in the same way that you would use a
| table name. Unlike overrides, alias names are objects that exist until they are
| dropped.
For example, if there is a multiple member file MYLIB.MYFILE with members MBR1
and MBR2, an alias can be created for the second member so that SQL can easily
refer to it.
CREATE ALIAS MYLIB.MYMBR2_ALIAS FOR MYLIB.MYFILE (MBR2)
Alias names can also be specified on DDL statements. Assume that alias
MYLIB.MYALIAS exists and is an alias for table MYLIB.MYTABLE. The following
DROP statement will drop table MYLIB.MYTABLE.
DROP TABLE MYLIB.MYALIAS
If you really want to drop the alias name instead, specify the ALIAS keyword on the
drop statement:
DROP ALIAS MYLIB.MYALIAS
Using LABEL ON
Sometimes the table name, column name, view name, alias name, or SQL package
name does not clearly define data that is shown on an interactive display of the
table. By using the LABEL ON statement, you can create a more descriptive label
for the table name, column name, view name, alias name, or SQL package name.
These labels can be seen in the SQL catalog in the LABEL column.
LABEL ON TABLE CORPDATA.DEPARTMENT IS
'Department Structure Table'

LABEL ON
COLUMN CORPDATA.DEPARTMENT.ADMRDEPT IS 'Reports to Dept.'
After these statements are run, the table named DEPARTMENT displays the text
description as Department Structure Table and the column named ADMRDEPT
displays the heading Reports to Dept. The label for tables, views, SQL packages,
This LABEL ON statement provides 3 levels of column headings for the SALARY
column.
*...+....1....+....2....+....3....+....4....+....5....+....6..*
LABEL ON COLUMN CORPDATA.EMPLOYEE.SALARY IS
'Yearly Salary (in dollars)'
This LABEL ON statement provides column text for the EDLEVEL column.
*...+....1....+....2....+....3....+....4....+....5....+....6..*
LABEL ON COLUMN CORPDATA.EMPLOYEE.EDLEVEL TEXT IS
'Number of years of formal education'
For more information about the LABEL ON statement, see the DB2 UDB for AS/400
SQL Reference book.
Using COMMENT ON
| After you create an SQL object such as a table, view, index, package, procedure,
| parameter, user-defined type, or function, you can supply information about it for
| future referral, such as the purpose of the object, who uses it, and anything unusual
| or special about it. You can also include similar information about each column of a
| table or view. Your comment must not be more than 2000 bytes.
A comment is especially useful if your names do not clearly indicate the contents of
the columns or objects. In that case, use a comment to describe the specific
contents of the column or objects.
The sort sequence is used for all character and UCS-2 graphic comparisons
performed in SQL statements. There are sort sequence tables for both single byte
and double byte character data. Each single byte sort sequence table has an
associated double byte sort sequence table, and vice versa. Conversion between
the two tables is performed when necessary to implement a query. In addition, the
CREATE INDEX statement has the sort sequence (in effect at the time the
statement was run) applied to the character columns referred to in the index.
In the following examples, the results are shown for each statement using:
v *HEX sort sequence
v Shared-weight sort sequence using the language identifier ENU
v Unique-weight sort sequence using the language identifier ENU
ORDER BY
The following SQL statement causes the result table to be sorted using the values
in the JOB column:
SELECT * FROM STAFF ORDER BY JOB
Table 3 shows the result table using a *HEX sort sequence. The rows are sorted
based on the EBCDIC value in the JOB column. In this case, all lowercase letters
sort before the uppercase letters.
Table 3. ″SELECT * FROM STAFF ORDER BY JOB″ Using the *HEX Sort Sequence.
ID NAME DEPT JOB YEARS SALARY COMM
100 Plotz 42 mgr 6 18352.80 0
90 Koonitz 42 sales 6 18001.75 1386.70
80 James 20 Clerk 0 13504.60 128.20
10 Sanders 20 Mgr 7 18357.50 0
50 Hanes 15 Mgr 10 20659.80 0
30 Merenghi 38 MGR 5 17506.75 0
20 Pernal 20 Sales 8 18171.25 612.45
40 OBrien 38 Sales 6 18006.00 846.55
70 Rothman 15 Sales 7 16502.83 1152.00
60 Quigley 38 SALES 0 16808.30 650.25
Table 4 shows how sorting is done for a unique-weight sort sequence. After the sort
sequence is applied to the values in the JOB column, the rows are sorted. Notice
that after the sort, lowercase letters are before the same uppercase letters, and the
values 'mgr', 'Mgr', and 'MGR' are adjacent to each other.
Table 4. ″SELECT * FROM STAFF ORDER BY JOB″ Using the Unique-Weight Sort
Sequence for the ENU Language Identifier.
ID NAME DEPT JOB YEARS SALARY COMM
80 James 20 Clerk 0 13504.60 128.20
100 Plotz 42 mgr 6 18352.80 0
10 Sanders 20 Mgr 7 18357.50 0
50 Hanes 15 Mgr 10 20659.80 0
30 Merenghi 38 MGR 5 17506.75 0
90 Koonitz 42 sales 6 18001.75 1386.70
20 Pernal 20 Sales 8 18171.25 612.45
40 OBrien 38 Sales 6 18006.00 846.55
70 Rothman 15 Sales 7 16502.83 1152.00
60 Quigley 38 SALES 0 16808.30 650.25
Table 5 on page 52 shows how sorting is done for a shared-weight sort sequence.
After the sort sequence is applied to the values in the JOB column, the rows are
sorted. For the
sort comparison, each lowercase letter is treated the same as the corresponding
uppercase letter. In Table 5, notice that all the values 'MGR', 'mgr' and 'Mgr' are
mixed together.
Table 5. ″SELECT * FROM STAFF ORDER BY JOB″ Using the Shared-Weight Sort
Sequence for the ENU Language Identifier.
ID NAME DEPT JOB YEARS SALARY COMM
80 James 20 Clerk 0 13504.60 128.20
10 Sanders 20 Mgr 7 18357.50 0
30 Merenghi 38 MGR 5 17506.75 0
50 Hanes 15 Mgr 10 20659.80 0
100 Plotz 42 mgr 6 18352.80 0
20 Pernal 20 Sales 8 18171.25 612.45
40 OBrien 38 Sales 6 18006.00 846.55
60 Quigley 38 SALES 0 16808.30 650.25
70 Rothman 15 Sales 7 16502.83 1152.00
90 Koonitz 42 sales 6 18001.75 1386.70
Record selection
The following SQL statement selects records with the value 'MGR' in the JOB
column:
SELECT * FROM STAFF WHERE JOB='MGR'
Table 6 shows how record selection is done with a *HEX sort sequence. In Table 6,
the rows that match the record selection criteria for the column 'JOB' are selected
exactly as specified in the select statement. Only the uppercase 'MGR' is selected.
Table 6. ″SELECT * FROM STAFF WHERE JOB='MGR'″ Using the *HEX Sort Sequence.
ID NAME DEPT JOB YEARS SALARY COMM
30 Merenghi 38 MGR 5 17506.75 0
Table 7 shows how record selection is done with a unique-weight sort sequence. In
Table 7, the lowercase and uppercase letters are treated as unique. The lowercase
'mgr' is not treated the same as uppercase 'MGR'. Therefore, the lower case 'mgr'
is not selected.
Table 7. ″SELECT * FROM STAFF WHERE JOB = ’MGR’ ″ Using Unique-Weight Sort
Sequence for the ENU Language Identifier.
ID NAME DEPT JOB YEARS SALARY COMM
30 Merenghi 38 MGR 5 17506.75 0
Table 8 shows how record selection is done with a shared-weight sort sequence. In
Table 8, the rows that match the record selection criteria for the column 'JOB' are
selected by treating uppercase letters the same as lowercase letters. Notice that in
Table 8 all the values 'mgr', 'Mgr' and 'MGR' are selected.
Table 8. ″SELECT * FROM STAFF WHERE JOB = ’MGR’ ″ Using the Shared-Weight Sort
Sequence for the ENU Language Identifier.
ID NAME DEPT JOB YEARS SALARY COMM
10 Sanders 20 Mgr 7 18357.50 0
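The contrast between the *HEX and shared-weight behavior can be sketched with Python's sqlite3 module, using collating sequences as a rough analogue of sort sequence tables. SQLite's default BINARY collation compares raw bytes, like *HEX; NOCASE treats upper- and lowercase letters alike, similar in spirit to a shared-weight table (though a real shared-weight table covers far more than case). The STAFF rows below are a subset invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE STAFF (ID INTEGER, NAME TEXT, JOB TEXT)")
con.executemany("INSERT INTO STAFF VALUES (?, ?, ?)", [
    (10, "Sanders", "Mgr"), (30, "Merenghi", "MGR"),
    (100, "Plotz", "mgr"), (20, "Pernal", "Sales"),
])

# Default BINARY collation compares raw bytes, like the *HEX sort
# sequence: only the exact string 'MGR' matches.
hex_like = con.execute(
    "SELECT ID FROM STAFF WHERE JOB = 'MGR'").fetchall()

# NOCASE ignores case, so 'Mgr', 'MGR', and 'mgr' all match, as they
# do under a shared-weight sort sequence.
shared_like = con.execute(
    "SELECT ID FROM STAFF WHERE JOB = 'MGR' COLLATE NOCASE ORDER BY ID"
).fetchall()

print(hex_like, shared_like)
```

The byte-for-byte comparison selects one row; the case-insensitive comparison selects all three manager rows.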
The following SQL statements and tables show how views and sort sequences
work. View V1, used in the following examples, was created with a shared-weight
sort sequence of SRTSEQ(*LANGIDSHR) and LANGID(ENU). The CREATE VIEW
statement would be as follows:
CREATE VIEW V1 AS SELECT *
FROM STAFF
WHERE JOB = 'MGR' AND ID < 100
Any queries run against view V1 are run against the result table shown in Table 9.
The query shown below is run with a sort sequence of SRTSEQ(*LANGIDUNQ)
and LANGID(ENU).
Table 10. ″SELECT * FROM V1 WHERE JOB = ’MGR’″ Using the Unique-Weight Sort
Sequence for Language Identifier ENU
ID NAME DEPT JOB YEARS SALARY COMM
30 Merenghi 38 MGR 5 17506.75 0
If defining a referential constraint, the sort sequence between the parent and
dependent table must match. For more information on sort sequence and
constraints, see the DB2 UDB for AS/400 Database Programming book.
The sort sequence used at the time a check constraint is defined is the same sort
sequence the system uses to validate adherence to the constraint at the time of an
INSERT or UPDATE.
Types of cursors
SQL supports serial and scrollable cursors. The type of cursor determines the
positioning methods which can be used with the cursor.
Serial cursor
A serial cursor is one defined without the SCROLL keyword.
For a serial cursor, each row of the result table can be fetched only once per OPEN
of the cursor. When the cursor is opened, it is positioned before the first row in the
result table. When a FETCH is issued, the cursor is moved to the next row in the
result table. That row is then the current row. If host variables are specified (with
the INTO clause on the FETCH statement), SQL moves the current row’s contents
into your program’s host variables.
This sequence is repeated each time a FETCH statement is issued until the
end-of-data (SQLCODE = 100) is reached. When you reach the end-of-data, close
the cursor. You cannot access any rows in the result table after you reach the
end-of-data. To use the cursor again, you must first close the cursor and then
re-issue the OPEN statement. You can never back up.
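The serial-cursor pattern described above can be sketched with Python's sqlite3 module, whose cursors are likewise forward-only. Here fetchone() returning None plays the role of the SQLCODE +100 end-of-data condition; the table rows are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE EMPLOYEE (EMPNO TEXT, LASTNAME TEXT, WORKDEPT TEXT)")
con.executemany("INSERT INTO EMPLOYEE VALUES (?, ?, ?)", [
    ("000060", "STERN", "D11"), ("000150", "ADAMSON", "D11")])

# Executing the query "opens" the cursor, positioned before the first
# row; each fetchone() advances one row, and you can never back up.
cur = con.execute("SELECT EMPNO, LASTNAME FROM EMPLOYEE WHERE WORKDEPT = 'D11'")
fetched = []
while True:
    row = cur.fetchone()
    if row is None:          # end-of-data: re-execute the query to start over
        break
    fetched.append(row)
print(fetched)
```

As with a serial SQL cursor, once end-of-data is reached the only way to see the rows again is to open (re-execute) the cursor from the beginning.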
Scrollable cursor
For a scrollable cursor, the rows of the result table can be fetched many times. The
cursor is moved through the result table based on the position option specified on
the FETCH statement. When the cursor is opened, it is positioned before the first
row in the result table. When a FETCH is issued, the cursor is positioned to the row
in the result table that is specified by the position option. That row is then the
current row. If host variables are specified (with the INTO clause on the FETCH
This sequence is repeated each time a FETCH statement is issued. The cursor
does not need to be closed when an end-of-data or beginning-of-data condition
occurs. The position options enable the program to continue fetching rows from the
table.
The following scroll options are used to position the cursor when issuing a FETCH
statement. These positions are relative to the current cursor location in the result
table.
NEXT Positions the cursor on the next row. This is the default if no
position is specified.
PRIOR Positions the cursor on the previous row.
FIRST Positions the cursor on the first row.
LAST Positions the cursor on the last row.
BEFORE Positions the cursor before the first row.
AFTER Positions the cursor after the last row.
CURRENT Does not change the cursor position.
RELATIVE n Evaluates a host variable or integer n in relationship to the
cursor’s current position. For example, if n is -1, the cursor is
positioned on the previous row of the result table. If n is +3,
the cursor is positioned three rows after the current row.
For the serial cursor example, the program processes all of the rows from the table,
updating the job for all members of department D11 and deleting the records of
employees from the other departments.
Table 11. A Serial Cursor Example
Serial Cursor SQL Statement Described in Section
EXEC SQL “Step 1: Define the Cursor” on page 58.
DECLARE THISEMP CURSOR FOR
SELECT EMPNO, LASTNAME,
WORKDEPT, JOB
FROM CORPDATA.EMPLOYEE
FOR UPDATE OF JOB
END-EXEC.
EXEC SQL “Step 2: Open the Cursor” on page 59.
OPEN THISEMP
END-EXEC.
EXEC SQL “Step 3: Specify What to Do When
WHENEVER NOT FOUND End-of-Data Is Reached” on page 59.
GO TO CLOSE-THISEMP
END-EXEC.
EXEC SQL
UPDATE CORPDATA.EMPLOYEE
SET JOB = :NEW-CODE
WHERE CURRENT OF THISEMP
END-EXEC.
EXEC SQL
DELETE FROM CORPDATA.EMPLOYEE
WHERE CURRENT OF THISEMP
END-EXEC.
Branch back to fetch and process the next
row.
CLOSE-THISEMP. “Step 6: Close the Cursor” on page 62.
EXEC SQL
CLOSE THISEMP
END-EXEC.
For the scrollable cursor example, the program uses the RELATIVE position option
to obtain a representative sample of salaries from department D11.
Table 12. Scrollable Cursor Example
Scrollable Cursor SQL Statement Described in Section
EXEC SQL “Step 1: Define the Cursor” on page 58.
DECLARE THISEMP DYNAMIC SCROLL CURSOR FOR
SELECT EMPNO, LASTNAME,
SALARY
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
END-EXEC.
EXEC SQL “Step 2: Open the Cursor” on page 59.
OPEN THISEMP
END-EXEC.
EXEC SQL “Step 3: Specify What to Do When
WHENEVER NOT FOUND End-of-Data Is Reached” on page 59.
GO TO CLOSE-THISEMP
END-EXEC.
For a scrollable cursor, the statement looks like this (the WHERE clause is
optional):
EXEC SQL
DECLARE cursor-name DYNAMIC SCROLL CURSOR FOR
SELECT column-1, column-2 ,...
FROM table-name ,...
WHERE column-1 = expression ...
END-EXEC.
The select-statements shown here are rather simple. However, you can code
several other types of clauses in a select-statement within a DECLARE CURSOR
statement for a serial and a scrollable cursor.
If you intend to update any columns in any or all of the rows of the identified table
(the table named in the FROM clause), include the FOR UPDATE OF clause. It
names each column you intend to update. If you do not specify the names of
columns, and you specify either the ORDER BY clause or FOR READ ONLY
clause, a negative SQLCODE is returned if an update is attempted. If you do not
You can update a column of the identified table even though it is not part of the
result table. In this case, you do not need to name the column in the SELECT
statement. When the cursor retrieves a row (using FETCH) that contains a column
value you want to update, you can use UPDATE ... WHERE CURRENT OF to
update the row.
For example, assume that each row of the result table includes the EMPNO,
LASTNAME, and WORKDEPT columns from the CORPDATA.EMPLOYEE table. If
you want to update the JOB column (one of the columns in each row of the
CORPDATA.EMPLOYEE table), the DECLARE CURSOR statement should include
FOR UPDATE OF JOB ... even though JOB is omitted from the SELECT statement.
The result table and cursor are read-only if any of the following are true:
v The first FROM clause identifies more than one table or view.
v The first FROM clause identifies a read-only view.
v The first SELECT clause specifies the keyword DISTINCT.
v The outer subselect contains a GROUP BY clause.
v The outer subselect contains a HAVING clause.
v The first SELECT clause contains a column function.
v The select-statement contains a subquery such that the base object of the outer
subselect and of the subquery is the same table.
v The select-statement contains a UNION or UNION ALL operator.
v The select-statement contains an ORDER BY clause, and the FOR UPDATE OF
clause and DYNAMIC SCROLL are not specified.
v The select-statement includes a FOR READ ONLY clause.
v The SCROLL keyword is specified without DYNAMIC.
| v The select-list includes a DataLink column and a FOR UPDATE OF clause is not
| specified.
3. A result table can contain zero, one, or many rows, depending on the extent to which the search condition is satisfied.
When you are using a serial cursor and the end-of-data is reached, every
subsequent FETCH statement returns the end-of-data condition. You cannot
position the cursor on rows that are already processed. The CLOSE statement is
the only operation that can be performed on the cursor.
When you are using a scrollable cursor and the end-of-data is reached, you can
still process rows in the result table. You can position the cursor anywhere in the result
table using a combination of the position options. You do not need to CLOSE the
cursor when the end-of-data is reached.
When your program issues the FETCH statement, SQL uses the current cursor
position as a starting point to locate the requested row in the result table. This
changes that row to the current row. If an INTO clause was specified, SQL moves
the current row’s contents into your program’s host variables. This sequence is
repeated each time the FETCH statement is issued.
SQL maintains the position of the current row (that is, the cursor points to the
current row) until the next FETCH statement for the cursor is issued. The UPDATE
statement does not change the position of the current row within the result table,
although the DELETE statement does.
After you update a row, the cursor’s position remains on that row (that is, the
current row of the cursor does not change) until you issue a FETCH statement for
the next row.
After you delete a row, you cannot update or delete another row using that cursor
until you issue a FETCH statement to position the cursor.
“The DELETE Statement” on page 34 shows you how to use the DELETE
statement to delete all rows that meet a specific search condition. You can also use
the FETCH and DELETE ... WHERE CURRENT OF statements when you want to
obtain a copy of the row, examine it, then delete it.
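The fetch-examine-delete pattern can be sketched with Python's sqlite3 module. SQLite has no DELETE ... WHERE CURRENT OF, so this sketch uses the built-in rowid as a stand-in for the cursor position; the table rows are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE EMPLOYEE (EMPNO TEXT, WORKDEPT TEXT)")
con.executemany("INSERT INTO EMPLOYEE VALUES (?, ?)",
                [("000010", "A00"), ("000060", "D11"), ("000090", "E11")])

# Fetch each row, examine it, then delete it if it fails the test --
# the rowid stands in for the cursor position that WHERE CURRENT OF
# identifies in DB2.
cur = con.execute("SELECT rowid, EMPNO, WORKDEPT FROM EMPLOYEE")
for rowid, empno, dept in cur.fetchall():
    if dept != "D11":                       # keep department D11 only
        con.execute("DELETE FROM EMPLOYEE WHERE rowid = ?", (rowid,))

remaining = con.execute("SELECT EMPNO FROM EMPLOYEE").fetchall()
print(remaining)
```

Materializing the result with fetchall() before deleting avoids modifying the table while the cursor is still being read.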
If you processed the rows of a result table and you do not want to use the cursor
again, you can let the system close the cursor. The system automatically closes the
cursor when:
v A COMMIT without HOLD statement is issued and the cursor is not declared
using the WITH HOLD clause.
v A ROLLBACK without HOLD statement is issued.
v The job ends.
v The activation group ends and CLOSQLCSR(*ENDACTGRP) was specified on
the precompile.
v The first SQL program in the call stack ends and neither
CLOSQLCSR(*ENDJOB) nor CLOSQLCSR(*ENDACTGRP) was specified when
the program was precompiled.
v The connection to the application server is ended using the DISCONNECT
statement.
v The connection to the application server was released and a successful COMMIT
occurred.
v An *RUW CONNECT occurred.
Because an open cursor still holds locks on referred-to tables or views, you
should explicitly close any open cursors as soon as they are no longer needed.
There are two ways to define the storage where fetched rows are placed: a host
structure array or a row storage area with an associated descriptor. Both methods
can be coded in all of the languages supported by the SQL precompilers, with the
exception of the host structure array in REXX. Refer to Chapter 12. Coding SQL
Statements in C and C++ Applications, through Chapter 17. Coding SQL
Statements in REXX Applications, for more information on the programming
languages. Both forms of the multiple-row FETCH statement allow the application to
code a separate indicator array. The indicator array should contain one indicator for
each host variable that is null capable.
The multiple-row FETCH statement can be used with both serial and scrollable
cursors. The operations used to define, open, and close a cursor for a multiple-row
FETCH remain the same. Only the FETCH statement changes to specify the
number of rows to retrieve and the storage where the rows are placed.
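The blocked-fetch idea can be sketched with Python's sqlite3 module: Cursor.fetchmany(n) retrieves up to n rows in one call, much as FETCH ... FOR n ROWS fills a host structure array. The eight invented rows below mirror the eight-row result table used in the examples that follow.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE EMPLOYEE (EMPNO TEXT, WORKDEPT TEXT)")
con.executemany("INSERT INTO EMPLOYEE VALUES (?, ?)",
                [(str(n).zfill(6), "D11") for n in range(1, 9)])  # 8 rows

# Ask for up to 10 rows in one call; with only 8 rows available,
# the whole result table comes back in a single block.
cur = con.execute("SELECT EMPNO FROM EMPLOYEE WHERE WORKDEPT = 'D11'")
block = cur.fetchmany(10)
print(len(block))  # 8
```

Unlike DB2, sqlite3 gives no SQLERRD-style end-of-data flag with the block; a subsequent fetchmany() call simply returns an empty list.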
...
01 TABLE-1.
02 DEPT OCCURS 10 TIMES.
05 EMPNO PIC X(6).
05 LASTNAME.
49 LASTNAME-LEN PIC S9(4) BINARY.
49 LASTNAME-TEXT PIC X(15).
05 WORKDEPT PIC X(3).
05 JOB PIC X(8).
01 TABLE-2.
02 IND-ARRAY OCCURS 10 TIMES.
05 INDS PIC S9(4) BINARY OCCURS 4 TIMES.
...
EXEC SQL
DECLARE D11 CURSOR FOR
SELECT EMPNO, LASTNAME, WORKDEPT, JOB
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
END-EXEC.
...
EXEC SQL
OPEN D11
...
FETCH-PARA.
EXEC SQL WHENEVER NOT FOUND GO TO ALL-DONE END-EXEC.
EXEC SQL FETCH D11 FOR 10 ROWS INTO :DEPT :IND-ARRAY
END-EXEC.
...
The host structure array DEPT and the associated indicator array IND-ARRAY are
defined in the application. Both arrays have a dimension of ten. The indicator array
has an entry for each column in the result table.
The attributes of type and length of the DEPT host structure array elementary items
match the columns that are being retrieved.
When the multiple-row FETCH statement has successfully completed, the host
structure array contains the data for all eight rows. The indicator array, IND-ARRAY,
contains zeros for every column in every row because no NULL values were
returned.
The SQLCA that is returned to the application contains the following information:
v SQLCODE contains 0
v SQLSTATE contains '00000'
v SQLERRD3 contains 8, the number of rows fetched
v SQLERRD4 contains 34, the length of each row
v SQLERRD5 contains +100, indicating the last row in the result table is in the
block
See Appendix B of the DB2 UDB for AS/400 SQL Reference book for a description
of the SQLCA.
See Appendix C of the DB2 UDB for AS/400 SQL Reference book for a description
of the SQLDA.
...
...
...
EXEC SQL
OPEN D11;
/* SET UP THE DESCRIPTOR FOR THE MULTIPLE-ROW FETCH */
/* 4 COLUMNS ARE BEING FETCHED */
SQLD = 4;
SQLN = 4;
SQLDABC = 366;
SQLTYPE(1) = 452; /* FIXED LENGTH CHARACTER - */
/* NOT NULLABLE */
SQLLEN(1) = 6;
SQLTYPE(2) = 456; /* VARYING LENGTH CHARACTER - */
/* NOT NULLABLE */
SQLLEN(2) = 15;
SQLTYPE(3) = 452; /* FIXED LENGTH CHARACTER - */
/* NOT NULLABLE */
SQLLEN(3) = 3;
SQLTYPE(4) = 452; /* FIXED LENGTH CHARACTER - */
/* NOT NULLABLE */
SQLLEN(4) = 8;
/*ISSUE THE MULTIPLE-ROW FETCH STATEMENT TO RETRIEVE*/
/*THE DATA INTO THE DEPT ROW STORAGE AREA */
/*USE A HOST VARIABLE TO CONTAIN THE COUNT OF */
/*ROWS TO BE RETURNED ON THE MULTIPLE-ROW FETCH */
In this example, a cursor has been defined for the CORPDATA.EMPLOYEE table to
select all rows where the WORKDEPT column equals 'D11'. The sample EMPLOYEE
table in Appendix A, DB2 UDB for AS/400 Sample Tables, shows that the result
table contains eight rows. The DECLARE CURSOR and OPEN statements do not have
special syntax when they are used with a multiple-row FETCH statement. Another
FETCH statement that returns a single row against the same cursor can be coded
elsewhere in the program. The multiple-row FETCH statement is used to retrieve all
rows in the result table. Following the FETCH, the cursor position remains on the
eighth record in the block.
The row area, ROWAREA, is defined as a character array. The data from the result
table is placed in the host variable. In this example, a pointer variable is assigned to
the address of ROWAREA. Each item in the rows that are returned is examined
and used with the based structure DEPT.
The attributes (type and length) of the items in the descriptor match the columns
that are retrieved. In this case, no indicator area is provided.
After the FETCH statement is completed, the ROWAREA contains eight rows. The
SQLCA that is returned to the application contains the following:
v SQLCODE contains 0
v SQLSTATE contains '00000'
v SQLERRD3 contains 8, the number of rows returned
v SQLERRD4 contains 34, for the length of the row fetched
v SQLERRD5 contains +100, indicating the last row in the result table was fetched
In this example, the application has taken advantage of the fact that SQLERRD5
contains an indication that the end of the file has been reached. As a result, the
application does not need to call SQL again to attempt to retrieve more rows.
If you want to continue processing from the current cursor position after a COMMIT
or ROLLBACK, you must specify COMMIT HOLD or ROLLBACK HOLD. When
HOLD is specified, any open cursors are left open and keep their cursor position so
processing can resume. On a COMMIT statement, the cursor position is
maintained. On a ROLLBACK statement, the cursor position is restored to just after
the last row retrieved from the previous unit of work. All record locks are still
released.
After issuing a COMMIT or ROLLBACK statement without HOLD, all locks are
released and all cursors are closed. You can open the cursor again, but you will
begin processing at the first row of the result table.
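For example, an application might run statements like these to commit work while keeping the cursor open and positioned (a minimal sketch; the cursor and host variable names are illustrative):

EXEC SQL FETCH C1 INTO :EMP-NUMBER
EXEC SQL COMMIT HOLD
EXEC SQL FETCH C1 INTO :EMP-NUMBER

The second FETCH continues from the cursor position held across the COMMIT.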
One use for this kind of INSERT statement is to move data into a table you created
for summary data. For example, suppose you want a table that shows each
employee’s time commitments to projects. You could create a table called
EMPTIME with the columns EMPNUMBER, PROJNUMBER, STARTDATE,
ENDDATE, and TTIME, and then use the following INSERT statement to fill the
table:
INSERT INTO CORPDATA.EMPTIME
(EMPNUMBER, PROJNUMBER, STARTDATE, ENDDATE)
SELECT EMPNO, PROJNO, EMSTDATE, EMENDATE
FROM CORPDATA.EMP_ACT
DSTRUCT is a host structure array with five elements that is declared in the
program. The five elements correspond to EMPNO, FIRSTNME, MIDINIT,
LASTNAME, and WORKDEPT. DSTRUCT has a dimension of at least ten to
accommodate inserting ten rows. ISTRUCT is a host structure array that is declared
in the program. ISTRUCT has a dimension of at least ten small integer fields for the
indicators.
Blocked INSERT statements are supported for non-distributed SQL applications and
for distributed applications where both the application server and the application
requester are AS/400 systems.
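A blocked INSERT statement using these structures might look like the following sketch (illustrative; it assumes the host structure arrays described above):

EXEC SQL
  INSERT INTO CORPDATA.EMPLOYEE
    (EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT)
    10 ROWS VALUES(:DSTRUCT:ISTRUCT)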
The previous update can also be written by specifying all of the columns and then
all of the values:
UPDATE EMPLOYEE
SET (WORKDEPT, PHONENO, JOB)
= ('D11', '7213', 'DESIGNER')
WHERE EMPNO = '000270'
Another way to select a value (or multiple values) for an update is to use a
scalar-subselect. The scalar-subselect allows you to update one or more columns
by setting them to one or more values selected from another table. In the following
example, an employee moves to a different department but continues working on
the same projects. The employee table has already been updated to contain the
new department number. Now the project table needs to be updated to reflect the
new department number of this employee (employee number is ’000030’).
UPDATE PROJECT
SET DEPTNO =
(SELECT WORKDEPT FROM EMPLOYEE
WHERE PROJECT.RESPEMP = EMPLOYEE.EMPNO)
WHERE RESPEMP='000030'
This same technique could be used to update a list of columns with multiple values
returned from a single select.
It is also possible to update an entire row in one table with values from a row in
another table.
Suppose there is a master class schedule table that needs to be updated with
changes that have been made in a copy of the table. The changes are made to the
work copy and merged into the master table every night. The two tables have
exactly the same columns and one column, CLASS_CODE, is a unique key
column.
UPDATE CL_SCHED
SET ROW =
(SELECT * FROM MYCOPY
WHERE CL_SCHED.CLASS_CODE = MYCOPY.CLASS_CODE)
This update will update all of the rows in CL_SCHED with the values from
MYCOPY.
DISTINCT means you want to select only the unique rows. If a selected row
duplicates another row in the result table, the duplicate row is ignored (it is not put
into the result table).
The result is two rows (in this example, JOB-DEPT is set to D11).
fetch JOB
1 Designer
If you do not include DISTINCT in a SELECT clause, you might find duplicate rows
in your result, because SQL retrieves the JOB column’s value for each row that
satisfies the search condition. For DISTINCT, null values are treated as duplicates
of each other.
If you include DISTINCT in a SELECT clause and you also include a shared-weight
sort sequence, fewer values are returned. The sort sequence causes values that
contain the same characters to be weighted the same. If 'MGR', 'Mgr', and 'mgr'
were all in the same table, only one of these values would be returned.
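For example, to list the distinct job codes in department D11, you might specify the following (an illustrative query against the sample tables):

SELECT DISTINCT JOB
  FROM CORPDATA.EMPLOYEE
  WHERE WORKDEPT = 'D11'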
Note: Constants are shown in the following examples to keep the examples simple.
However, you could just as easily code host variables instead. Remember to
precede each host variable with a colon.
For character and UCS-2 graphic column predicates, the sort sequence is applied
to the operands before evaluation of the predicates for BETWEEN, IN, EXISTS, and
LIKE clauses. See “Using Sort Sequence in SQL” on page 50 for more information
on using sort sequences with selection.
v BETWEEN ... AND ... is used to specify a search condition that is satisfied by
any value that falls on or between two other values. For example, to find all
employees who were hired in 1987, you could use this:
... WHERE HIREDATE BETWEEN '1987-01-01' AND '1987-12-31'
Note: If you are operating on MIXED data, the following distinction applies: an
SBCS underline character refers to one SBCS character. No such
restriction applies to the percent sign; that is, a percent sign refers to any
number of SBCS or DBCS characters. See the DB2 UDB for AS/400 SQL
Reference book for more information on the LIKE predicate and MIXED
data.
Use the underline character or percent sign either when you do not know or do
not care about all the characters of the column’s value. For example, to find out
which employees live in Minneapolis, you could specify:
... WHERE ADDRESS LIKE '%MINNEAPOLIS%'
In this case, you should be sure that MINNEAPOLIS was not part of a street
address or part of another city name. SQL returns any row with the string
MINNEAPOLIS in the ADDRESS column, no matter where the string occurs.
In another example, to list the towns whose names begin with 'SAN', you could
specify:
... WHERE TOWN LIKE 'SAN%'
If you want to search for a character string that contains either the underscore or
percent character, use the ESCAPE clause to specify an escape character. For
example, to see all businesses that have a percent in their name, you could
specify:
... WHERE BUSINESS_NAME LIKE '%@%%' ESCAPE '@'
The first and last percent characters are interpreted as usual. The combination
’@%’ is taken as the actual percent character.
For example, if you did a search using the search pattern ’ABC%’ contained in a
host variable with a fixed length of 10, these are some of the values that could be
returned, assuming the column has a length of 12:
'ABCDE ' 'ABCD ' 'ABCxxx ' 'ABC '
Note that all returned values start with ’ABC’ and end with at least six blanks.
This is because the last six characters in the host variable were not assigned a
specific value so blanks were used.
You can join any two predicates with the connectors AND and OR. In addition, you
can use the NOT keyword to specify that the desired search condition is the
negated value of the specified search condition. A WHERE clause can have as
many predicates as you want.
v AND says that, for a row to qualify, the row must satisfy both predicates of the
search condition. For example, to find out which employees in department D21
were hired after December 31, 1987, you would specify:
...
WHERE WORKDEPT = 'D21' AND HIREDATE > '1987-12-31'
v OR says that, for a row to qualify, the row can satisfy the condition set by either
or both predicates of the search condition. For example, to find out which
employees are in either department C01 or D11, you could specify 4:
...
WHERE WORKDEPT = 'C01' OR WORKDEPT = 'D11'
v NOT says that, to qualify, a row must not meet the criteria set by the search
condition or predicate that follows the NOT. For example, to find all employees in
department E11 except those with a job code equal to analyst, you could specify:
...
WHERE WORKDEPT = 'E11' AND NOT JOB = 'ANALYST'
When SQL evaluates search conditions that contain these connectors, it does so in
a specific order. SQL first evaluates the NOT clauses, next evaluates the AND
clauses, and then the OR clauses.
4. You could also use IN to specify this request: WHERE WORKDEPT IN ('C01', 'D11').
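The two WHERE clauses contrasted in the following paragraphs are not reproduced in this extract; based on the descriptions, they might look like this (a hypothetical reconstruction):

... WHERE (WORKDEPT = 'E11' OR WORKDEPT = 'E21') AND EDLEVEL > 12

and, without the parentheses:

... WHERE WORKDEPT = 'E11' AND EDLEVEL > 12 OR WORKDEPT = 'E21'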
The parentheses determine the meaning of the search condition. In this example,
you want all rows that have a:
WORKDEPT value of E11 or E21, and
EDLEVEL value greater than 12
Your result is different. The selected rows are rows that have:
WORKDEPT = E11 and EDLEVEL > 12, or
WORKDEPT = E21, regardless of the EDLEVEL value
Four different types of joins are supported by DB2 UDB for AS/400: inner join, left
outer join, exception join, and cross join.
v An “Inner Join” returns only the rows from each table that have matching values
in the join columns. Any rows that do not have a match between the tables will
not appear in the result table.
v A “Left Outer Join” on page 76 returns values for all of the rows from the first
table (the table on the left) and the values from the second table for the rows that
match. Any rows that do not have a match in the second table will return the null
value for all columns from the second table.
v An “Exception Join” on page 77 returns only the rows from the left table that do
not have a match in the right table. Columns in the result table that come from
the right table have the null value.
v A “Cross Join” on page 78 returns a row in the result table for each combination
of rows from the tables being joined (a Cartesian Product).
Inner Join
With an inner join, column values from one row of a table are combined with
column values from another row of another (or the same) table to form a single row
of data. SQL examines both tables specified for the join to retrieve data from all the
rows that meet the search condition for the join. There are two ways of specifying
an inner join: using the JOIN syntax, and using the WHERE clause.
In this example, the join is done on the two tables using the EMPNO and
RESPEMP columns from the tables. Since only employees that have last names
starting with at least ’S’ are to be returned, this additional condition is provided in
the WHERE clause.
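A query matching this description might be written in either of the two forms (an illustrative reconstruction using the sample tables):

SELECT EMPNO, LASTNAME, PROJNO
  FROM CORPDATA.EMPLOYEE INNER JOIN CORPDATA.PROJECT
    ON EMPNO = RESPEMP
  WHERE LASTNAME > 'S'

SELECT EMPNO, LASTNAME, PROJNO
  FROM CORPDATA.EMPLOYEE, CORPDATA.PROJECT
  WHERE EMPNO = RESPEMP
    AND LASTNAME > 'S'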
Suppose you want to find all employees and the projects they are currently
responsible for, including those employees that are not currently in charge of a
project.
The result of this query contains some employees that do not have a project
number. They are listed in the query, but have the null value returned for their
project number.
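A left outer join returning all such employees might look like this (illustrative; assumes the sample tables):

SELECT EMPNO, LASTNAME, PROJNO
  FROM CORPDATA.EMPLOYEE LEFT OUTER JOIN CORPDATA.PROJECT
    ON EMPNO = RESPEMP
  WHERE LASTNAME > 'S'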
Notes
Using the RRN scalar function to return the relative record number for a column in
the table on the right in a left outer join or exception join will return a value of 0 for
the unmatched rows.
Exception Join
An exception join returns only the records from the first table that do NOT have a
match in the second table. Using the same tables as before, return those
employees that are not responsible for any projects.
SELECT EMPNO, LASTNAME, PROJNO
FROM CORPDATA.EMPLOYEE EXCEPTION JOIN CORPDATA.PROJECT
ON EMPNO = RESPEMP
WHERE LASTNAME > 'S'
An exception join can also be written as a subquery using the NOT EXISTS
predicate. The previous query could be rewritten in the following way:
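The rewritten query itself does not appear in this extract; it might look like the following (a hypothetical reconstruction):

SELECT EMPNO, LASTNAME
  FROM CORPDATA.EMPLOYEE
  WHERE NOT EXISTS
        (SELECT * FROM CORPDATA.PROJECT
           WHERE EMPNO = RESPEMP)
    AND LASTNAME > 'S'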
The only difference in this query is that it cannot return values from the PROJECT
table.
Cross Join
A cross join (or Cartesian Product join) will return a result table where each row
from the first table is combined with each row from the second table. The number of
rows in the result table is the product of the number of rows in each table. If the
tables involved are large, this join can take a very long time.
A cross join can be specified in two ways: using the JOIN syntax or by listing the
tables in the FROM clause separated by commas without using a WHERE clause
to supply join criteria.
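For example, with illustrative table names A and B, the two forms are:

SELECT * FROM A CROSS JOIN B

SELECT * FROM A, B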
The result table for either of these select statements looks like this:
Notes on Joins
When you join two or more tables:
v If there are common column names, you must qualify each common name with
the name of the table (or a correlation name). Column names that are unique do
not need to be qualified.
v If you do not list the column names you want, but instead use SELECT *, SQL
returns rows that consist of all the columns of the first table, followed by all the
columns of the second table, and so on.
v You must be authorized to select rows from each table or view specified in the
FROM clause.
v The sort sequence is applied to all character and UCS-2 graphic columns being
joined.
|
| Using Table Expressions
| You can use table expressions to specify an intermediate result table. They can be
| used in place of a view to avoid creating the view when general use of the view is
| not required. Table expressions consist of nested table expressions and common
| table expressions.
| Nested table expressions are specified within parentheses in the FROM clause. For
| example, suppose you want a result table that shows the manager number,
| department number, and maximum salary for each department. The manager
| number is in the DEPARTMENT table, the department number is in both the
| DEPARTMENT and EMPLOYEE tables, and the salaries are in the EMPLOYEE
| table. You can use a table expression in the from clause to select the maximum
| salary for each department. You add a correlation name, T2, following the nested
| table expression to name the derived table. The outer select then uses T2 to qualify
| columns that are selected from the derived table, in this case MAXSAL and
| WORKDEPT.
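| The query described above might look like this (a hypothetical reconstruction
| using the sample tables):
|
| SELECT T1.MGRNO, T1.DEPTNO, T2.MAXSAL
|   FROM CORPDATA.DEPARTMENT T1,
|        (SELECT WORKDEPT, MAX(SALARY) AS MAXSAL
|           FROM CORPDATA.EMPLOYEE
|           GROUP BY WORKDEPT) T2
|   WHERE T1.DEPTNO = T2.WORKDEPT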
| For example, suppose you want a table that shows the minimum and maximum of
| the average salary of a certain set of departments. The first character of the
| department number has some meaning and you want to get the minimum and
| maximum for those departments that start with the letter ’D’ and those that start
| with the letter ’E’. You can use a common table expression to select the average
| salary for each department. Again, you must name the derived table; in this case,
| the name is DT. You can then specify a SELECT statement using a WHERE clause
| to restrict the selection to only the departments that begin with a certain letter.
| Specify the minimum and maximum of column AVGSAL from the derived table DT.
| Specify a UNION to get the results for the letter ’E’ and the results for the letter ’D’.
| WITH DT AS (SELECT E.WORKDEPT AS DEPTNO, AVG(SALARY) AS AVGSAL
| FROM CORPDATA.DEPARTMENT D , CORPDATA.EMPLOYEE E
| WHERE D.DEPTNO = E.WORKDEPT
| GROUP BY E.WORKDEPT)
| SELECT 'E', MAX(AVGSAL), MIN(AVGSAL) FROM DT
| WHERE DEPTNO LIKE 'E%'
| UNION
| SELECT 'D', MAX(AVGSAL), MIN(AVGSAL) FROM DT
| WHERE DEPTNO LIKE 'D%'
You can use UNION to eliminate duplicates when merging lists of values obtained
from several tables. For example, you can obtain a combined list of employee
numbers that includes:
v People in department D11
v People whose assignments include projects MA2112, MA2113, and AD3111
The combined list is derived from two tables and contains no duplicates. To do this,
specify:
MOVE 'D11' TO WORK-DEPT.
...
EXEC SQL
DECLARE XMP6 CURSOR FOR
SELECT EMPNO
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = :WORK-DEPT
UNION
SELECT EMPNO
FROM CORPDATA.EMP_ACT
WHERE PROJNO = 'MA2112' OR
PROJNO = 'MA2113' OR
PROJNO = 'AD3111'
ORDER BY EMPNO
END-EXEC.
...
EXEC SQL
FETCH XMP6
INTO :EMP-NUMBER
END-EXEC.
To better understand what results from these SQL statements, imagine that SQL
goes through the following process:
fetch EMP-NUMBER
1 000060
2 000150
3 000160
4 000170
5 000180
... ...
If the result columns are unnamed, use numbers to order the result. The number
refers to the position of the expression in the list of expressions you include in
your subselects.
SELECT A + B ...
UNION SELECT X ... ORDER BY 1
For information on compatibility of the length and data type for columns in a
UNION, see chapter 4 of the DB2 UDB for AS/400 SQL Reference book.
Note: Sort sequence is applied after the fields across the UNION pieces are made
compatible. The sort sequence is used for the distinct processing that
implicitly occurs during UNION processing.
fetch EMP-NUMBER
1 000060
2 000150
3 000150
4 000150
5 000160
6 000160
7 000170
8 000170
... ...
Using Subqueries
In the WHERE and HAVING clauses you have seen so far, you specified a search
condition by using a literal value, a column name, an expression, or the registers. In
those search conditions, you know that you are searching for a specific value, but
sometimes you cannot supply that value until you have retrieved other data from a
table. For example, suppose you want a list of the employee numbers, names, and
job codes of all employees working on a particular project, say project number
MA2100. The first part of the statement is easy to write:
DECLARE XMP CURSOR FOR
SELECT EMPNO, LASTNAME, JOB
FROM CORPDATA.EMPLOYEE
WHERE EMPNO ...
But you cannot go further because the CORPDATA.EMPLOYEE table does not
include project number data. You do not know which employees are working on
project MA2100 without issuing another SELECT statement against the
CORPDATA.EMP_ACT table.
With SQL, you can nest one SELECT statement within another to solve this
problem. The inner SELECT statement is called a subquery. The SELECT
statement surrounding the subquery is called the outer-level SELECT. Using a
subquery, you could issue just one SQL statement to retrieve the employee
numbers, names, and job codes for employees who work on project MA2100:
DECLARE XMP CURSOR FOR
SELECT EMPNO, LASTNAME, JOB
FROM CORPDATA.EMPLOYEE
WHERE EMPNO IN
(SELECT EMPNO
FROM CORPDATA.EMP_ACT
WHERE PROJNO = 'MA2100')
To better understand what will result from this SQL statement, imagine that SQL
goes through the following process:
Step 1: SQL evaluates the subquery, producing an interim results table of
employee numbers (such as 000010 and 000110). Step 2: The interim results
table then serves as a list in the search condition of the outer-level SELECT.
Essentially, this is what is executed to produce the final result table.
Correlation
The purpose of a subquery is to supply information needed to qualify a row
(WHERE clause) or a group of rows (HAVING clause). This is done through the
result table that the subquery produces. Conceptually, the subquery is evaluated
whenever a new row or group of rows must be qualified. In fact, if the subquery is
the same for every row or group, it is evaluated only once. For example, the
previous subquery has the same content for every row of the table
CORPDATA.EMPLOYEE. Subqueries like this are said to be uncorrelated.
Some subqueries vary in content from row to row or group to group. The
mechanism that allows this is called correlation, and the subqueries are said to be
correlated. More information on correlated subqueries can be found in “Correlated
Subqueries” on page 88. Even so, what is said before that point applies equally to
correlated and uncorrelated subqueries.
Subqueries can also appear in the search conditions of other subqueries. Such
subqueries are said to be nested at some level of nesting. For example, a
subquery within a subquery within an outer-level SELECT is nested at a nesting
level of two. SQL allows nesting down to a nesting level of 32, but few queries
require a nesting level greater than 1.
Basic Comparisons
You can use a subquery immediately after any of the comparison operators. If you
do, the subquery can return at most one value. The value can be the result of a
column function or an arithmetic expression. SQL then compares the value that
results from the subquery with the value to the left of the comparison operator. For
example, suppose you want to find the employee numbers, names, and salaries for
employees whose education level is higher than the average education level
throughout the company.
DECLARE XMP CURSOR FOR
SELECT EMPNO, LASTNAME, SALARY
FROM CORPDATA.EMPLOYEE
WHERE EDLEVEL >
(SELECT AVG(EDLEVEL)
FROM CORPDATA.EMPLOYEE)
SQL first evaluates the subquery and then substitutes the result in the WHERE
clause of the SELECT statement. In this example, the result is (as it should be) the
company-wide average educational level. Besides returning a single value, a
subquery could return no value at all. If it does, the result of the compare is
unknown. Consider, for example, the first query shown in this section, and assume
that there are not any employees currently working on project MA2100. Then the
subquery would return no value, and the search condition would be unknown for
every row. In this case, then, the result produced by the query would be an empty
table.
To satisfy this WHERE clause, the value in the expression must be greater than
all the values (that is, greater than the highest value) returned by the subquery. If
the subquery returns an empty set (that is, no values were selected), the
condition is satisfied.
To satisfy this WHERE clause, the value in the expression must be greater than
at least one of the values (that is, greater than the lowest value) returned by the
subquery. If what the subquery returns is empty, the condition is not satisfied.
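The two WHERE clauses being discussed might look like the following sketches (illustrative; the column and subquery shown here are assumptions):

... WHERE SALARY > ALL
          (SELECT SALARY FROM CORPDATA.EMPLOYEE
             WHERE WORKDEPT = 'E11')

... WHERE SALARY > ANY
          (SELECT SALARY FROM CORPDATA.EMPLOYEE
             WHERE WORKDEPT = 'E11')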
Note: The results when a subquery returns one or more null values may surprise
you, unless you are familiar with formal logic. For applicable rules, read the
discussion of quantified predicates in the DB2 UDB for AS/400 SQL
Reference .
In the example, the search condition holds if any project represented in the
CORPDATA.PROJECT table has an estimated start date that is later than January
1, 1982. Please note that this example does not show the full power of EXISTS,
because the result is always the same for every row examined for the outer-level
SELECT. As a consequence, either every row appears in the results, or none
appear. In a more powerful example, the subquery itself would be correlated, and
would change from row to row. See “Correlated Subqueries” on page 88 for more
information on correlated subqueries.
As shown in the example, you do not need to specify column names in the
subquery of an EXISTS clause. Instead, you can code SELECT *.
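The EXISTS example being described might look like this (a hypothetical reconstruction; PRSTDATE is the estimated start date column in the sample PROJECT table):

... WHERE EXISTS
          (SELECT *
             FROM CORPDATA.PROJECT
             WHERE PRSTDATE > '1982-01-01')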
You could also use the EXISTS keyword with the NOT keyword in order to select
rows when the data or condition you specify does not exist. That is, you could use:
... WHERE NOT EXISTS (SELECT ...)
For all general types of usage for subqueries but one (using a subquery with the
EXISTS keyword), the subquery must produce a one-column result table. This
The result table produced by a subquery can have zero or more rows. For some
usages, no more than one row is allowed.
Correlated Subqueries
In the subqueries previously discussed, SQL evaluates the subquery once,
substitutes the result of the subquery in the right side of the search condition, and
evaluates the outer-level SELECT based on the value of the search condition. You
can also write a subquery that SQL may have to re-evaluate as it examines each
new row (in a WHERE clause) or each group of rows (in a HAVING clause) in the
outer-level SELECT. Such subqueries are called correlated subqueries.
A correlated subquery looks like an uncorrelated one, except for the presence of
one or more correlated references. In the example, the single correlated reference
is the occurrence of X.WORKDEPT in the subselect’s WHERE clause. Here, the
qualifier X is the correlation name defined in the FROM clause of the outer SELECT
statement. In that clause, X is introduced as the correlation name of the table
CORPDATA.EMPLOYEE.
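The example being discussed might look like this (a hypothetical reconstruction consistent with the surrounding text):

SELECT EMPNO, LASTNAME, WORKDEPT, EDLEVEL
  FROM CORPDATA.EMPLOYEE X
  WHERE EDLEVEL >
        (SELECT AVG(EDLEVEL)
           FROM CORPDATA.EMPLOYEE
           WHERE WORKDEPT = X.WORKDEPT)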
Now, consider what happens when the subquery is executed for a given row of
CORPDATA.EMPLOYEE. Before it is executed, the occurrence of X.WORKDEPT is
replaced with the value of the WORKDEPT column for that row. Suppose, for
example, that the row is for CHRISTINE I HAAS. Her work department is A00,
which is the value of WORKDEPT for this row. The subquery executed for this row
is:
(SELECT AVG(EDLEVEL)
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'A00')
Thus, for the row considered, the subquery produces the average education level of
Christine’s department. This is then compared in the outer statement to Christine’s
own education level. For some other row for which WORKDEPT has a different
value, that value appears in the subquery in place of A00. For example, for the row
for MICHAEL L THOMPSON, this value would be B01, and the subquery for his row
would deliver the average education level for department B01.
The result table produced by the query would have the following values:
Consider what happens when the subquery is executed for a given department of
CORPDATA.EMPLOYEE. Before it is executed, the occurrence of X.WORKDEPT is
replaced with the value of the WORKDEPT column for that group. Suppose, for
example, that the first group selected has A00 for the value of WORKDEPT. The
subquery executed for this group is:
(SELECT AVG(SALARY)
FROM CORPDATA.EMPLOYEE
WHERE SUBSTR('A00',1,1) = SUBSTR(WORKDEPT,1,1))
Thus, for the group considered, the subquery produces the average salary for the
area. This is then compared in the outer statement to the average salary for
department 'A00'. For some other group for which WORKDEPT is ’B01’, the
subquery would result in the average salary for the area where department B01
belongs.
The result table produced by the query would have the following values:
The correlation name is defined in the FROM clause of some query. This query
could be the outer-level SELECT, or any of the subqueries that contain the one with
the reference. Suppose, for example, that a query contains subqueries A, B, and C,
and that A contains B and B contains C. Then a correlation name used in C could
be defined in B, A, or the outer-level SELECT.
You can define a correlation name for each table name appearing in a FROM
clause. Simply include the correlation names after the table names. Leave one or
more blanks between a table name and its correlation name, and place a comma
after the correlation name if it is followed by another table name. The following
FROM clause, for example, defines the correlation names TA and TB for the tables
TABLEA and TABLEB, and no correlation name for the table TABLEC.
FROM TABLEA TA, TABLEC, TABLEB TB
Before the subquery is executed, a value from the referenced column is always
substituted for the correlated reference. The value is determined as follows:
Note: Use D to designate the query in which the correlation name is defined. Then
the subquery is either in the WHERE clause of D, or in its HAVING clause.
v If the subquery is in the WHERE clause, its results are used by D to qualify a
row. The substituted value is then taken from this row. This is the case for the
example, where the defining query is the outer one and the subquery appears in
the outer query’s WHERE clause.
v If the subquery is in the HAVING clause, its results are used by D to qualify a
group of rows. The substituted value is then taken from this group. Note that in
this case, the column specified must be identified in the GROUP BY clause in D.
If it is not, the specified column could have more than one value for the group.
SQL determines, for each row in the CORPDATA.EMP_ACT table, whether a row
with the same project number exists in the CORPDATA.PROJECT table. If not, the
CORPDATA.EMP_ACT row is deleted.
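Such a DELETE statement might be written as follows (an illustrative reconstruction):

DELETE FROM CORPDATA.EMP_ACT
  WHERE NOT EXISTS
        (SELECT * FROM CORPDATA.PROJECT
           WHERE EMP_ACT.PROJNO = PROJECT.PROJNO)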
You can add, change, or drop columns and add or remove constraints all with one
ALTER TABLE statement. However, a single column can be referenced only once in
the ADD COLUMN, ALTER COLUMN, and DROP COLUMN clauses. That is, you
cannot add a column and then alter that column in the same ALTER TABLE
statement.
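For example, the following statement adds one column, alters another, and drops a third in a single operation (the table and column names are illustrative):

ALTER TABLE EXAMPLETBL
  ADD COLUMN NEWCOL CHAR(10)
  ALTER COLUMN OLDCOL SET DATA TYPE CHAR(20)
  DROP COLUMN UNUSEDCOL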
Adding a column
| You can add a column to a table using Operations Navigator. Or use the ADD
| COLUMN clause of the SQL ALTER TABLE statement.
When you add a new column to a table, the column is initialized with its default
value for all existing rows. If NOT NULL is specified, a default value must also be
specified.
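For example, the following statement adds a column that does not allow the null value (the column name is illustrative):

ALTER TABLE CORPDATA.EMPLOYEE
  ADD COLUMN BIRTHPLACE VARCHAR(30) NOT NULL WITH DEFAULT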
| The altered table may consist of up to 8000 columns. The sum of the byte counts of
| the columns must not be greater than 32766 or, if a VARCHAR or VARGRAPHIC
| column is specified, 32740. If a LOB column is specified, the sum of record data
| byte counts of the columns must not be greater than 15 728 640.
Changing a column
| You can change a column in a table using Operations Navigator. Or, you can use
| the ALTER COLUMN clause of the ALTER TABLE statement. When you change the
| data type of an existing column, the old and new attributes must be compatible.
| “Allowable Conversions” shows the conversions with compatible attributes.
| When you convert to a data type with a longer length, data will be padded with the
| appropriate pad character. When you convert to a data type with a shorter length,
| data may be lost due to truncation. An inquiry message prompts you to confirm the
| request.
| If you have a column that does not allow the null value and you want to change it to
| now allow the null value, use the DROP NOT NULL clause. If you have a column
| that allows the null value and you want to prevent the use of null values, use the
| SET NOT NULL clause. If any of the existing values in that column are the null
| value, the ALTER TABLE will not be performed and an SQLCODE of -190 will
| result.
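For example (the table and column names are illustrative):

ALTER TABLE MYTABLE
  ALTER COLUMN MYCOL DROP NOT NULL

ALTER TABLE MYTABLE
  ALTER COLUMN MYCOL SET NOT NULL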
Allowable Conversions
Table 15. Allowable Conversions
FROM data type          TO data type
Decimal                 Numeric
| When modifying an existing column, only the attributes that you specify will be
| changed. All other attributes will remain unchanged. For example, given the
| following table definition:
| CREATE TABLE EX1 (COL1 CHAR(10) DEFAULT 'COL1',
| COL2 VARCHAR(20) ALLOCATE(10) CCSID 937,
| COL3 VARGRAPHIC(20) ALLOCATE(10)
| NOT NULL WITH DEFAULT)
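| The ALTER statement under discussion is not shown in this extract; a statement
| like the following (hypothetical) would change only the data types, leaving the
| other attributes intact:
|
| ALTER TABLE EX1
|   ALTER COLUMN COL2 SET DATA TYPE VARCHAR(30)
|   ALTER COLUMN COL3 SET DATA TYPE VARGRAPHIC(30)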
COL2 would still have an allocated length of 10 and CCSID 937, and COL3 would
still have an allocated length of 10.
Dropping a column deletes that column from the table definition. If CASCADE is
specified, any views, indexes, and constraints dependent on that column will also
be dropped. If RESTRICT is specified, and any views, indexes, or constraints are
dependent on the column, the column will not be dropped and SQLCODE of -196
will be issued.
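For example, the following statement drops a column along with any dependent views, indexes, and constraints (reusing the EMPTIME table described earlier as an illustration):

ALTER TABLE CORPDATA.EMPTIME
  DROP COLUMN TTIME CASCADE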
| Within each of these steps, the order in which you specify the clauses is the order
in which they are performed, with one exception. If any columns are being dropped,
that operation is logically done before any column definitions are added or altered,
in case record length is increased as a result of the ALTER TABLE statement.
For example, to create a view that selects only the last name and the department of
all the managers, specify:
CREATE VIEW CORPDATA.EMP_MANAGERS AS
SELECT LASTNAME, WORKDEPT FROM CORPDATA.EMPLOYEE
WHERE JOB = 'MANAGER'
If the select list contains elements other than columns such as expressions,
functions, constants, or special registers, and the AS clause was not used to name
the columns, a column list must be specified for the view. In the following example,
the columns of the view are LASTNAME and YEARSOFSERVICE.
CREATE VIEW CORPDATA.EMP_YEARSOFSERVICE
(LASTNAME, YEARSOFSERVICE) AS
SELECT LASTNAME, YEARS (CURRENT DATE - HIREDATE)
FROM CORPDATA.EMPLOYEE
The previous view can also be defined by using the AS clause in the select list to
name the columns in the view. For example:
CREATE VIEW CORPDATA.EMP_YEARSOFSERVICE AS
SELECT LASTNAME,
YEARS (CURRENT_DATE - HIREDATE) AS YEARSOFSERVICE
FROM CORPDATA.EMPLOYEE
Views are created with the sort sequence in effect at the time the CREATE VIEW
statement is run. The sort sequence applies to all character and UCS-2 graphic
comparisons in the CREATE VIEW statement subselect. See “Using Sort Sequence
in SQL” on page 50 for more information on sort sequences.
Views can also be created using the WITH CHECK OPTION to specify the level of
checking that should be done when data is inserted or updated through the view.
See “WITH CHECK OPTION on a View” on page 108 for more information.
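For example, a view defined with checking might look like this (illustrative):

CREATE VIEW CORPDATA.EMP_MGRS AS
  SELECT * FROM CORPDATA.EMPLOYEE
  WHERE JOB = 'MANAGER'
  WITH CHECK OPTION

An INSERT or UPDATE through this view that would produce a row with a JOB value other than 'MANAGER' is rejected.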
Adding Indexes
| You can use indexes to sort and select data. In addition, indexes help the system
| retrieve data faster for better query performance.
| You can create an index when creating a table using Operations Navigator. Or use
| the SQL CREATE INDEX statement. The following example creates an index over
| the column LASTNAME in the CORPDATA.EMPLOYEE table:
| CREATE INDEX CORPDATA.INX1 ON CORPDATA.EMPLOYEE (LASTNAME)
| A new type of index, the encoded vector index, allows for faster scans that can be
| more easily processed in parallel. You create encoded vector indexes by using the
| SQL CREATE INDEX statement. For more information about accelerating your
| queries with encoded vector indexes, go to the DB2 for AS/400 Web pages.
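For illustration, an encoded vector index over the WORKDEPT column might be created as follows. The index name EVI1 and the estimate of 64 distinct values are illustrative only:

```sql
-- Encoded vector index over the department column; the WITH clause
-- gives the system an estimate of the number of distinct key values.
CREATE ENCODED VECTOR INDEX CORPDATA.EVI1
  ON CORPDATA.EMPLOYEE (WORKDEPT)
  WITH 64 DISTINCT VALUES
```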
| If an index is created that has exactly the same attributes as an existing index, the
| new index shares the existing index’s binary tree. Otherwise, another binary tree is
| created. If the attributes of the new index are exactly the same as another index,
| except that the new index has fewer columns, another binary tree is still created. It
| is still created because the extra columns would prevent the index from being used
| by cursors or UPDATE statements that update those extra columns.
Indexes are created with the sort sequence in effect at the time the CREATE
INDEX statement is run. The sort sequence applies to all SBCS character fields
and UCS-2 graphic fields of the index. See “Using Sort Sequence in SQL” on page
50 for more information on sort sequences.
As the following examples show, you can display catalog information. You cannot
INSERT, DELETE, or UPDATE catalog information. You must have SELECT
privileges on the catalog views to run the following examples.
Attention: Operations that normally update the SQL catalog for a collection can no
longer update the catalog if the collection is saved, restored, and given a different
name. Saving in one collection and restoring to another is not supported by the
product.
The following sample statement displays all the column names in the
CORPDATA.DEPARTMENT table:
SELECT *
FROM CORPDATA.SYSCOLUMNS
WHERE TBNAME = 'DEPARTMENT'
The result of the previous sample statement is a row of information for each column
in the table. Some of the information is not visible because the width of the
information is wider than the display screen.
For more information about each column, specify a select-statement like this:
SELECT NAME, TBNAME, COLTYPE, LENGTH, DEFAULT
FROM CORPDATA.SYSCOLUMNS
WHERE TBNAME = 'DEPARTMENT'
In addition to the column name for each column, the select-statement shows:
v The name of the table that contains the column
v The data type of the column
v The length attribute of the column
v If the column allows default values
This chapter describes the different ways the system automatically enforces these
kinds of relationships. Referential integrity, check constraints, and triggers are all
ways of accomplishing data integrity. Additionally, the WITH CHECK OPTION
clause on a CREATE VIEW constrains the inserting or updating of data through a
view.
For comprehensive information about data integrity, see the DB2 UDB for AS/400
Database Programming book.
In this example, the following statement creates a table with three columns and a
check constraint over COL2 which limits the values allowed in that column to
positive integers:
CREATE TABLE T1 (COL1 INT, COL2 INT CHECK (COL2>0), COL3 INT)
An INSERT statement that places a negative value in COL2, such as one inserting
-1, would fail because the value to be inserted into COL2 does not meet the check
constraint; that is, -1 is not greater than 0.
This ALTER TABLE statement attempts to add a second check constraint which
limits the value allowed in COL1 to 1 and also effectively requires that values in
COL2 be greater than 1. This constraint would not be allowed because the second
part of the constraint is not met by the existing data (the value of ’1’ in COL2 is not
greater than the value of ’1’ in COL1).
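A sketch of such an ALTER TABLE statement follows; the constraint name C1 is illustrative, and the check expression is inferred from the description above:

```sql
-- Adding this constraint fails if T1 already contains a row such as
-- (1, 1, 1), because COL1 < COL2 does not hold for that row.
ALTER TABLE T1
  ADD CONSTRAINT C1 CHECK (COL1 = 1 AND COL1 < COL2)
```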
Referential Integrity
| Referential integrity is the condition of a set of tables in a database in which all
| references from one table to another are valid.
| Stated another way, referential integrity is the state of a database in which all
| values of all foreign keys are valid. Each value of the foreign key must also exist in
| the parent key or be null. This definition of referential integrity requires an
| understanding of the following terms:
| v A unique key is a column or set of columns in a table that uniquely identifies a
| row. Although a table can have several unique keys, no two rows in a table can
| have the same unique key value.
| v A primary key is a unique key that does not allow nulls. A table cannot have
| more than one primary key.
| v A parent key is either a unique key or a primary key which is referenced in a
| referential constraint.
| v A foreign key is a column or set of columns whose values must match those of a
| parent key. If any column value used to build the foreign key is null, then the rule
| does not apply.
| v A parent table is a table that contains the parent key.
| v A dependent table is the table that contains the foreign key.
| v A descendent table is a table that is a dependent table or a descendent of a
| dependent table.
| Enforcement of referential integrity prevents the violation of the rule which states
that every non-null foreign key must have a matching parent key.
SQL supports the referential integrity concept with the CREATE TABLE and ALTER
TABLE statements. For detailed descriptions of these commands, see the DB2 UDB
for AS/400 SQL Reference book.
| You can add referential constraints using Operations Navigator when creating a
| table. Or, use the SQL CREATE TABLE and ALTER TABLE statements to add or
| change referential constraints.
| With a referential constraint, non-null values of the foreign key are valid only if they
| also appear as values of a parent key. When you define a referential constraint, you
| specify:
| v A primary or unique key
| v A foreign key
| v Delete and update rules
| Optionally, you can specify a name for the constraint. If a name is not specified,
| one is automatically generated.
| Once a referential constraint is defined, the system enforces the constraint on every
| INSERT, DELETE, and UPDATE operation performed through SQL or any other
| interface including Operations Navigator, CL commands, utilities, or high-level
| language statements.
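For example, the WORKDEPT_EXISTS relationship between the sample EMPLOYEE and DEPARTMENT tables could be defined with an ALTER TABLE statement along the following lines. This is a sketch: the rule clause shown reflects the SET DEFAULT behavior described later in this chapter.

```sql
-- EMPLOYEE is the dependent table; its WORKDEPT column must match
-- a DEPTNO value in the parent table DEPARTMENT, or be null.
ALTER TABLE CORPDATA.EMPLOYEE
  ADD CONSTRAINT WORKDEPT_EXISTS
  FOREIGN KEY (WORKDEPT)
  REFERENCES CORPDATA.DEPARTMENT (DEPTNO)
  ON DELETE SET DEFAULT
```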
In this case, the DEPARTMENT table has a column of unique department numbers
(DEPTNO) which functions as a primary key, and is a parent table in two constraint
relationships:
REPORTS_TO_EXISTS
is a self-referencing constraint in which the DEPARTMENT table is both the
parent and the dependent in the same relationship. Every non-null value of
ADMRDEPT must match a value of DEPTNO. A department must report to
an existing department in the database. The DELETE CASCADE rule
indicates that if a row with a DEPTNO value n is deleted, every row in the
table for which the ADMRDEPT is n is also deleted.
WORKDEPT_EXISTS
establishes the EMPLOYEE table as a dependent table, and the column of
work department numbers (WORKDEPT) as a foreign key.
DROP TABLE and DROP COLLECTION statements also remove any constraints
on the table or collection being dropped.
If you are inserting data into a dependent table with foreign keys:
v Each non-null value you insert into a foreign key column must be equal to some
value in the corresponding parent key of the parent table.
v If any column in the foreign key is null, the entire foreign key is considered null. If
all foreign keys that contain the column are null, the INSERT succeeds (as long
as there are no unique index violations).
Notice that the parent table columns are not specified in the REFERENCES clause.
The columns are not required to be specified as long as the referenced table has a
primary key or eligible unique key which can be used as the parent key.
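For instance, a foreign key clause that omits the parent columns relies on the primary key of the referenced table. In this sketch (the constraint name is illustrative), the parent key defaults to DEPTNO, the primary key of the sample DEPARTMENT table:

```sql
-- No column list after REFERENCES: the parent key defaults to the
-- primary key of CORPDATA.DEPARTMENT, which is DEPTNO.
ALTER TABLE CORPDATA.PROJECT
  ADD CONSTRAINT PROJECT_DEPT_EXISTS
  FOREIGN KEY (DEPTNO)
  REFERENCES CORPDATA.DEPARTMENT
```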
Every row inserted into the PROJECT table must have a value of DEPTNO that is
equal to some value of DEPTNO in the department table. (The null value is not
allowed because DEPTNO in the project table is defined as NOT NULL.) The row
must also have a value of RESPEMP that is either equal to some value of EMPNO
in the employee table or is null.
The tables with the sample data as they appear in Appendix A. DB2 UDB for
AS/400 Sample Tables conform to these constraints. The following INSERT
statement fails because there is no matching DEPTNO value (’A01’) in the
DEPARTMENT table.
INSERT INTO CORPDATA.PROJECT (PROJNO, PROJNAME, DEPTNO, RESPEMP)
VALUES ('AD3120', 'BENEFITS ADMIN', 'A01', '000010')
Update Rules
The action taken on dependent tables when an UPDATE is performed on a parent
table depends on the update rule specified for the referential constraint. If no
update rule was defined for a referential constraint, the UPDATE NO ACTION rule
is used.
v UPDATE NO ACTION
The subtle difference between the RESTRICT and NO ACTION rules is most easily seen
when looking at the interaction of triggers and referential constraints. Triggers can
be defined to fire either before or after an operation (an UPDATE statement, in this
case). A before trigger fires before the UPDATE is performed and therefore before
any checking of constraints. An after trigger is fired after the UPDATE is performed,
and after a constraint rule of RESTRICT (where checking is performed
immediately), but before a constraint rule of NO ACTION (where checking is
performed at the end of the statement). The triggers and rules would occur in the
following order:
1. A before trigger would be fired before the UPDATE and before a constraint rule
of RESTRICT or NO ACTION.
2. An after trigger would be fired after a constraint rule of RESTRICT, but before a
NO ACTION rule.
If you are updating a dependent table, any non-null foreign key values that you
change must match the primary key for each relationship in which the table is a
dependent. For example, department numbers in the employee table depend on the
department numbers in the department table. You can assign an employee to no
department (the null value), but not to a department that does not exist.
If an UPDATE against a table with a referential constraint fails, all changes made
during the update operation are undone. For more information on the implications of
commitment control and journaling when working with constraints, see “Journaling”
on page 368 and “Commitment Control” on page 369.
The following UPDATE fails because the PROJECT table has rows which are
dependent on DEPARTMENT.DEPTNO having a value of ’D01’ (the row targeted by
the WHERE clause). If this UPDATE were allowed, the referential constraint
between the PROJECT and DEPARTMENT tables would be broken.
UPDATE CORPDATA.DEPARTMENT
SET DEPTNO = 'D99'
WHERE DEPTNAME = 'DEVELOPMENT CENTER'
The following statement fails because it violates the referential constraint that exists
between the primary key DEPTNO in DEPARTMENT and the foreign key DEPTNO
in PROJECT:
UPDATE CORPDATA.PROJECT
SET DEPTNO = 'D00'
WHERE DEPTNO = 'D01';
When running this statement with a program, the number of rows deleted is
returned in SQLERRD(3) in the SQLCA. This number includes only the number of
rows deleted in the table specified in the DELETE statement. It does not include
those rows deleted according to the CASCADE rule. SQLERRD(5) in the SQLCA
contains the number of rows that were affected by referential constraints in all
tables.
The subtle difference between the RESTRICT and NO ACTION rules is most easily seen
when looking at the interaction of triggers and referential constraints. Triggers can
be defined to fire either before or after an operation (a DELETE statement, in this
case). A before trigger fires before the DELETE is performed and therefore before
any checking of constraints. An after trigger is fired after the DELETE is performed,
and after a constraint rule of RESTRICT (where checking is performed
immediately), but before a constraint rule of NO ACTION (where checking is
performed at the end of the statement). The triggers and rules would occur in the
following order:
1. A before trigger would be fired before the DELETE and before a constraint rule
of RESTRICT or NO ACTION.
2. An after trigger would be fired after a constraint rule of RESTRICT, but before a
NO ACTION rule.
Given the tables and the data as they appear in Appendix A. DB2 UDB for AS/400
Sample Tables, one row is deleted from table DEPARTMENT, and table
EMPLOYEE is updated to set the value of WORKDEPT to its default wherever the
value was ’E11’. A question mark (’?’) in the sample data below reflects the null
value. The results would appear as follows:
Table 16. DEPARTMENT Table. Contents of the table after the DELETE statement is
complete.
DEPTNO DEPTNAME MGRNO ADMRDEPT
A00 SPIFFY COMPUTER SERVICE DIV. 000010 A00
B01 PLANNING 000020 A00
C01 INFORMATION CENTER 000030 A00
D01 DEVELOPMENT CENTER ? A00
D11 MANUFACTURING SYSTEMS 000060 D01
D21 ADMINISTRATION SYSTEMS 000070 D01
E01 SUPPORT SERVICES 000050 A00
E21 SOFTWARE SUPPORT 000100 E01
Note that there were no cascaded deletes in the DEPARTMENT table because no
department reported to department ’E11’.
Check Pending
Referential constraints and check constraints can be in a state known as check
pending, where potential violations of the constraint exist. For referential constraints,
a violation occurs when potential mismatches exist between parent and foreign
keys. For check constraints, a violation occurs when a column potentially contains
values that are not allowed by the check constraint. When the system determines
that the constraint may have been violated (such as after a restore operation), the
constraint is marked as check pending. When this happens, restrictions are placed
on the use of tables involved in the constraint. For referential constraints, the
following restrictions apply:
v No input or output operations are allowed on the dependent file.
v Only read and insert operations are allowed on the parent file.
For more information on working with tables in check pending, see the DB2 UDB
for AS/400 Database Programming book.
WITH CHECK OPTION cannot be specified if the view is read-only. The definition of
the view must not include a subquery.
If the view is created without a WITH CHECK OPTION clause, insert and update
operations that are performed on the view are not checked for conformance to the
view definition. Some checking might still occur if the view is directly or indirectly
dependent on another view that includes WITH CHECK OPTION. Because the
definition of the view is not used, rows might be inserted or updated through the
view that do not conform to the definition of the view. This means that the rows
could not be selected again using the view.
The checking can either be CASCADED or LOCAL. See the DB2 UDB for AS/400
SQL Reference book for additional discussion of WITH CHECK OPTION.
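The INSERT examples that follow assume view definitions along these lines. V1 is shown again later in this section; V2 and V3 are sketched here for context, consistent with the surrounding text:

```sql
CREATE VIEW V1 AS SELECT COL1
  FROM T1 WHERE COL1 > 10

-- V2 adds a cascaded check option, so inserts through V2 (and through
-- any view defined over V2) are checked against V1's search condition.
CREATE VIEW V2 AS SELECT COL1
  FROM V1 WITH CASCADED CHECK OPTION

-- V3 has no check option of its own.
CREATE VIEW V3 AS SELECT COL1
  FROM V2
```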
The following INSERT statement fails because it would produce a row that does not
conform to the definition of V2:
INSERT INTO V2 VALUES (5)
The following INSERT statement fails only because V3 is dependent on V2, and V2
has a WITH CASCADED CHECK OPTION.
INSERT INTO V3 VALUES (5)
For example, consider the same updateable view used in the previous example:
CREATE VIEW V1 AS SELECT COL1
FROM T1 WHERE COL1 > 10
Create a second view over V1, this time specifying WITH LOCAL CHECK OPTION:
CREATE VIEW V2 AS SELECT COL1
FROM V1 WITH LOCAL CHECK OPTION
The same INSERT that failed in the previous CASCADED CHECK OPTION
example would succeed now because V2 does not have any search conditions, and
the search conditions of V1 do not need to be checked since V1 does not specify a
check option.
INSERT INTO V2 VALUES (5)
The difference between LOCAL and CASCADED CHECK OPTION lies in how
many of the dependent views’ search conditions are checked when a row is
inserted or updated.
Example
Use the following table and views:
CREATE TABLE T1 (COL1 CHAR(10))
In DB2 UDB for AS/400, the program containing the set of trigger actions can be
defined in any supported high level language. The trigger program can have SQL
embedded in it. To use trigger support, you must create a trigger program and add
it to a physical file using the ADDPFTRG CL command. To add a trigger to a file,
you must:
v Identify the physical file
v Identify the kind of operation
v Identify the program that performs the desired actions.
There is no SQL statement to associate a physical file with a trigger program. SQL
is only involved in that the trigger program can contain embedded SQL statements,
and that it could be an SQL INSERT, UPDATE, or DELETE that causes the trigger
to be fired.
Once a trigger program is associated with a physical file, the system trigger support
calls the trigger program when a change operation is initiated against the physical
file or table, or any logical file or view created over the physical file.
Each change operation can call a trigger before or after the change operation
occurs. Thus, a physical file can be associated with a maximum of six triggers:
v Before delete trigger
v Before insert trigger
v Before update trigger
v After delete trigger
v After insert trigger
v After update trigger
Trigger Sample
A sample trigger program follows. It is written in ILE C, with embedded SQL.
See the DB2 UDB for AS/400 Database Programming book for a full discussion and
more examples of trigger usage in DB2 UDB for AS/400.
Qdb_Trigger_Buffer_t *hstruct;
char *datapt;
/*******************************************************/
/* Structure of the EMPLOYEE record which is used to */
/* store the old or the new record that is passed to */
/* this trigger program. */
/* */
/* Note : You must ensure that all the numeric fields */
/* are aligned on a 4-byte boundary in C. */
/* Use either a packed struct or filler to reach */
/* the byte boundary alignment. */
/*******************************************************/
struct {
char empno[6];
char name[30];
decimal(9,2) salary;
decimal(9,2) new_salary;
} rpt1;
/*******************************************************/
/* Start to monitor any exception. */
/*******************************************************/
_FEEDBACK fc;
_HDLR_ENTRY hdlr = main_handler;
/****************************************/
/* Make the exception handler active. */
/****************************************/
CEEHDLR(&hdlr, NULL, &fc);
/****************************************/
/* Ensure exception handler OK */
/****************************************/
if (fc.MsgNo != CEE0000)
{
printf("Failed to register exception handler.\n");
exit(99);
};
/*******************************************************/
/* Move the data from the trigger buffer to the local */
/* structure for reference. */
/*******************************************************/
/*******************************************************/
/* Set the transaction isolation level to the same as */
/* the application based on the input parameter in the */
/* trigger buffer. */
/*******************************************************/
if(strcmp(hstruct->Commit_Lock_Level,"0") == 0)
EXEC SQL SET TRANSACTION ISOLATION LEVEL NONE;
else{
if(strcmp(hstruct->Commit_Lock_Level,"1") == 0)
EXEC SQL SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED, READ
WRITE;
else {
if(strcmp(hstruct->Commit_Lock_Level,"2") == 0)
EXEC SQL SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
else
if(strcmp(hstruct->Commit_Lock_Level,"3") == 0)
EXEC SQL SET TRANSACTION ISOLATION LEVEL ALL;
}
}
/********************************************************/
/* If the employee's commission is greater than maximum */
/* commission, then increase the employee's salary */
/* by 1.04 percent and insert into the RAISE table. */
/********************************************************/
if (sqlca.sqlcode == 0)
{
rpt1.new_salary = salary * percentage;
EXEC SQL INSERT INTO TRGPERF/RAISE VALUES(:rpt1);
}
goto finished;
}
err_exit:
exit(1);
/* All done */
finished:
return;
} /* end of main line */
error_code.bytes_provided = 15;
/****************************************/
/* Set the error handler to resume and */
/* mark the last escape message as */
/* handled. */
/****************************************/
*rc = CEE_HDLR_RESUME;
/****************************************/
/* Send my own *ESCAPE message. */
/****************************************/
QMHSNDPM(message_id,
&message_file,
&message_data,
message_len,
message_type,
message_q,
pgm_stack_cnt,
&message_key,
&error_code );
/****************************************/
/* Check that the call to QMHSNDPM */
/* finished correctly. */
/****************************************/
if (error_code.bytes_available != 0)
{
printf("Error in QMHSNDPM : %s\n", error_code.message_id);
}
}
Coding stored procedures requires that the user understand the following:
v Stored procedure definition through the CREATE PROCEDURE statement
v Stored procedure invocation through the CALL statement
v Parameter passing conventions
v Methods for returning a completion status to the program invoking the procedure.
| You may define stored procedures by using the CREATE PROCEDURE statement.
| The CREATE PROCEDURE statement adds procedure and parameter definitions to
| the catalog tables SYSROUTINES and SYSPARMS. These definitions are then
| accessible by any SQL CALL statement on the system.
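For example, a definition added by CREATE PROCEDURE can be displayed by querying the catalog. The column name shown is the one commonly used in the QSYS2 catalog views; verify it against your release:

```sql
-- Display the catalog entry for a procedure named UPDATE_SALARY_1.
SELECT *
  FROM QSYS2.SYSROUTINES
  WHERE ROUTINE_NAME = 'UPDATE_SALARY_1'
```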
The following sections describe the SQL statements used to define and invoke the
stored procedure, information on passing parameters to the stored procedure, and
examples of stored procedure usage.
Creating a Procedure
A procedure (often called a stored procedure) is a program that can be called to
perform operations that can include both host language statements and SQL
statements. Procedures in SQL provide the same benefits as procedures in a host
language. That is, a common piece of code need only be written and maintained
once and can be called from several programs.
To create an external procedure or an SQL procedure, you can use the SQL
CREATE PROCEDURE statement. Or, you can use Operations Navigator.
Consider the following simple example that takes as input an employee number and
a rate and updates the employee’s salary:
EXEC SQL CREATE PROCEDURE UPDATE_SALARY_1
(IN EMPLOYEE_NUMBER CHAR(10),
IN RATE DECIMAL(6,2))
LANGUAGE SQL MODIFIES SQL DATA
UPDATE CORPDATA.EMPLOYEE
SET SALARY = SALARY * RATE
WHERE EMPNO = EMPLOYEE_NUMBER;
Instead of a single UPDATE statement, logic can be added to the SQL procedure
using SQL control statements. SQL control statements consist of the following:
v an assignment statement
v a CALL statement
v a CASE statement
v a compound statement
v a FOR statement
v an IF statement
v a LOOP statement
v a REPEAT statement
v a WHILE statement
| The following example takes as input the employee number and a rating that was
| received on the last evaluation. The procedure uses a CASE statement to
| determine the appropriate increase and bonus for the update:
| EXEC SQL CREATE PROCEDURE UPDATE_SALARY_2
| (IN EMPLOYEE_NUMBER CHAR(6),
| IN RATING INT)
| LANGUAGE SQL MODIFIES SQL DATA
| CASE RATING
| WHEN 1 THEN
| UPDATE CORPDATA.EMPLOYEE
| SET SALARY = SALARY * 1.10,
| BONUS = 1000
| WHERE EMPNO = EMPLOYEE_NUMBER;
| WHEN 2 THEN
| UPDATE CORPDATA.EMPLOYEE
| SET SALARY = SALARY * 1.05,
| BONUS = 500
| WHERE EMPNO = EMPLOYEE_NUMBER;
| ELSE
| UPDATE CORPDATA.EMPLOYEE
| SET SALARY = SALARY * 1.03,
| BONUS = 0
| WHERE EMPNO = EMPLOYEE_NUMBER;
| END CASE;
The following example takes as input the department number. It returns the total
salary of all the employees in that department and the number of employees in that
department who get a bonus.
EXEC SQL
CREATE PROCEDURE RETURN_DEPT_SALARY
(IN DEPT_NUMBER CHAR(3),
OUT DEPT_SALARY DECIMAL(15,2),
OUT DEPT_BONUS_CNT INT)
LANGUAGE SQL READS SQL DATA
P1: BEGIN
DECLARE EMPLOYEE_SALARY DECIMAL(9,2);
DECLARE EMPLOYEE_BONUS DECIMAL(9,2);
DECLARE TOTAL_SALARY DECIMAL(15,2) DEFAULT 0;
DECLARE BONUS_CNT INT DEFAULT 0;
DECLARE END_TABLE INT DEFAULT 0;
DECLARE C1 CURSOR FOR
SELECT SALARY, BONUS FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = DEPT_NUMBER;
DECLARE CONTINUE HANDLER FOR NOT FOUND
SET END_TABLE = 1;
DECLARE EXIT HANDLER FOR SQLEXCEPTION
SET DEPT_SALARY = NULL;
OPEN C1;
FETCH C1 INTO EMPLOYEE_SALARY, EMPLOYEE_BONUS;
WHILE END_TABLE = 0 DO
SET TOTAL_SALARY = TOTAL_SALARY + EMPLOYEE_SALARY + EMPLOYEE_BONUS;
IF EMPLOYEE_BONUS > 0 THEN
SET BONUS_CNT = BONUS_CNT + 1;
END IF;
FETCH C1 INTO EMPLOYEE_SALARY, EMPLOYEE_BONUS;
END WHILE;
CLOSE C1;
SET DEPT_SALARY = TOTAL_SALARY;
SET DEPT_BONUS_CNT = BONUS_CNT;
END P1;
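A call to this procedure from embedded SQL might look like the following; the host variable names are illustrative:

```sql
EXEC SQL CALL RETURN_DEPT_SALARY ('D11', :DEPT_SALARY, :DEPT_BONUS_CNT);
```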
The following example takes as input the department number. It ensures the
EMPLOYEE_BONUS table exists, and inserts the name of all employees in the
department who get a bonus. The procedure returns the total count of all
employees who get a bonus.
EXEC SQL
CREATE PROCEDURE CREATE_BONUS_TABLE
(IN DEPT_NUMBER CHAR(3),
INOUT CNT INT)
LANGUAGE SQL MODIFIES SQL DATA
CS1: BEGIN ATOMIC
DECLARE NAME VARCHAR(30) DEFAULT NULL;
DECLARE CONTINUE HANDLER FOR 42710
SELECT COUNT(*) INTO CNT
FROM DATALIB.EMPLOYEE_BONUS;
DECLARE CONTINUE HANDLER FOR 23505
SET CNT = CNT + 1;
DECLARE UNDO HANDLER FOR SQLEXCEPTION
SET CNT = NULL;
IF DEPT_NUMBER IS NOT NULL THEN
CREATE TABLE DATALIB.EMPLOYEE_BONUS
(FULLNAME VARCHAR(30),
BONUS DECIMAL(10,2),
PRIMARY KEY (FULLNAME));
FOR_1:FOR V1 AS C1 CURSOR FOR
SELECT FIRSTNME, MIDINIT, LASTNAME, BONUS
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = CREATE_BONUS_TABLE.DEPT_NUMBER;
IF BONUS > 0 THEN
You can also use dynamic SQL in an SQL procedure. The following example
creates a table that contains all employees in a specific department. The
department number is passed as input to the procedure and is concatenated to the
table name.
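A sketch of such a procedure follows, assuming illustrative names and a simple generated statement; it builds the statement string with CONCAT and runs it with EXECUTE IMMEDIATE:

```sql
CREATE PROCEDURE CREATE_DEPT_TABLE (IN P_DEPT CHAR(3))
LANGUAGE SQL
BEGIN
  DECLARE STMT VARCHAR(1000);
  -- Build the CREATE TABLE statement with the department number
  -- concatenated to the table name, then run it dynamically.
  SET STMT = 'CREATE TABLE DATALIB.DEPT_' CONCAT P_DEPT CONCAT
             ' (EMPNO CHAR(6) NOT NULL, LASTNAME VARCHAR(15))';
  EXECUTE IMMEDIATE STMT;
END
```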
Since the first parameter is declared as INOUT, SQL updates the host variable HV1
and the indicator variable IND1 with the values returned from MYLIB.PROC1 before
returning to the user program.
When the CALL statement is invoked, DB2 SQL for AS/400 attempts to find the
program based on standard SQL naming conventions. For the above example,
assume that the naming option of *SYS (system naming) is used and that a
DFTRDBCOL parameter was not specified on the CRTSQLPLI command. In this
case, the library list is searched for a program named P2. Since the call type is
GENERAL, no additional argument is passed to the program for indicator variables.
Note: If an indicator variable is specified on the CALL statement and its value is
less than zero when the CALL statement is executed, an error results
because there is no way to pass the indicator to the procedure.
Assuming program P2 is found in the library list, the contents of host variable HV2
are passed in to the program on the CALL and the argument returned from P2 is
mapped back to the host variable after P2 has completed execution.
Note that the name of the called procedure can also be stored in a
host variable, and the host variable used in the CALL statement, instead of the
hard-coded procedure name. For example:
...
main()
{
char proc_name[15];
...
strcpy (proc_name, "MYLIB.P3");
...
EXEC SQL CALL :proc_name ...;
...
}
When a host variable containing the procedure name is used in the CALL statement
and a CREATE PROCEDURE catalog definition exists, it will be used. The
procedure name cannot be specified as a parameter marker.
More examples for calling stored procedures may be found later in this chapter and
also in the DATABASE 2 Advanced Database Functions book.
Note: For this reason, it is always safer to use host variables on the CALL
statement so that the attributes of the procedure can be matched exactly and
so that characters are not lost. For dynamic SQL, host variables can be
specified for CALL statement arguments if the PREPARE and EXECUTE
statements are used to process the CALL statement.
For numeric constants passed on a CALL statement, the following rules apply:
v All integer constants are passed as fullword binary integers.
v All decimal constants are passed as packed decimal values. Precision and scale
are determined based on the constant value. For instance, a value of 123.45 is
passed as a packed decimal(5,2). Likewise, a value of 001.01 is also passed
with a precision and scale of 5 and 2, respectively.
v All floating point constants are passed as double-precision floating point.
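Applying these rules to a hypothetical procedure MYPROC, the constants in the following CALL would be passed as a fullword binary integer, a packed DECIMAL(5,2) value, and a double-precision floating-point value, respectively:

```sql
CALL MYPROC (123, 123.45, 1.2E1)
```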
To indicate that an associated host variable contains the null value, the indicator
variable, which is a two-byte integer, is set to a negative value. A CALL statement
with indicator variables is processed as follows:
v If the indicator variable is negative, this denotes the null value. A default value is
passed for the associated host variable on the CALL and the indicator variable is
passed unchanged.
v If the indicator variable is not negative, this denotes that the host variable
contains a non-null value. In this case, the host variable and the indicator
variable are passed unchanged.
Note that these rules of processing are the same for input parameters to the
procedure as well as output parameters returned from the procedure. When
indicator variables are used with stored procedures, the correct method of coding
their handling is to check the value of the indicator variable first before using the
associated host variable.
Another method of returning a status to the SQL program issuing the CALL
statement is to send an escape message to the calling program (operating system
program QSQCALL), which invokes the procedure. Each language has methods for
signalling conditions and sending messages. Refer to the respective language
reference to determine the proper way to signal a message. When the message is
signalled, QSQCALL turns the error into SQLCODE/SQLSTATE -443/38501.
The first example shows the calling ILE C program that uses the CREATE
PROCEDURE definitions to call the P1 and P2 procedures. Procedure P1 is written
in C and has 10 parameters. Procedure P2 is written in PL/I and also has 10
parameters.
#include <stdio.h>
#include <string.h>
#include <decimal.h>
main()
{
EXEC SQL INCLUDE SQLCA;
char PARM1[10];
signed long int PARM2;
signed short int PARM3;
float PARM4;
double PARM5;
decimal(10,5) PARM6;
struct { signed short int parm7l;
char parm7c[10];
} PARM7;
char PARM8[10]; /* FOR DATE */
char PARM9[8]; /* FOR TIME */
char PARM10[26]; /* FOR TIMESTAMP */
/***********************************************/
/* Call the PLI procedure */
/* */
/* */
/***********************************************/
/* Reset the host variables prior to making the CALL */
/* */
:
EXEC SQL CALL P2 (:PARM1, :PARM2, :PARM3,
:PARM4, :PARM5, :PARM6,
:PARM7, :PARM8, :PARM9,
:PARM10 );
if (strncmp(SQLSTATE,"00000",5))
{
/* Handle error or warning returned on CALL statement */
}
/* Process return values from the CALL. */
:
}
#include <stdio.h>
#include <string.h>
#include <decimal.h>
main(argc,argv)
int argc;
char *argv[];
{
char parm1[11];
long int parm2;
short int parm3,i,j,*ind,ind1,ind2,ind3,ind4,ind5,ind6,ind7,
ind8,ind9,ind10;
float parm4;
double parm5;
decimal(10,5) parm6;
char parm7[11];
char parm8[10];
char parm9[8];
char parm10[26];
/* *********************************************************/
/* Receive the parameters into the local variables - */
/* Character, date, time, and timestamp are passed as */
/* NUL terminated strings - cast the argument vector to */
/* the proper data type for each variable. Note that */
/* the argument vector could be used directly instead of */
/* copying the parameters into local variables - the copy */
/* is done here just to illustrate the method. */
/* *********************************************************/
/**********************************************************/
/* Copy date into local variable. */
/* Note that date and time variables are always passed in */
/* ISO format so that the lengths of the strings are */
/* known. strcpy would work here just as well. */
/**********************************************************/
strncpy(parm8,argv[8],10);
/**********************************************************/
/* Copy timestamp into local variable. */
/* IBM SQL timestamp format is always passed so the length*/
/* of the string is known. */
/**********************************************************/
strncpy(parm10,argv[10],26);
/**********************************************************/
/* The indicator array is passed as an array of short */
/* integers. There is one entry for each parameter passed */
/* on the CREATE PROCEDURE (10 for this example). */
/* Below is one way to set each indicator into separate */
/* variables. */
/**********************************************************/
ind = (short int *) argv[11];
ind1 = *(ind++);
ind2 = *(ind++);
ind3 = *(ind++);
ind4 = *(ind++);
ind5 = *(ind++);
ind6 = *(ind++);
ind7 = *(ind++);
ind8 = *(ind++);
ind9 = *(ind++);
ind10 = *(ind++);
:
/* Perform any additional processing here */
:
return;
}
/******** END OF C PROCEDURE P1 *******************************/
END CALLPROC;
The next example shows a REXX procedure called from an ILE C program.
#include <decimal.h>
#include <stdio.h>
#include <string.h>
#include <wcstr.h>
/*-----------------------------------------------------------*/
exec sql include sqlca;
exec sql include sqlda;
/* ***********************************************************/
/* Declare host variable for the CALL statement */
/* ***********************************************************/
char parm1[20];
signed long int parm2;
decimal(10,5) parm3;
double parm4;
struct { short dlen;
char dat[10];
} parm5;
wchar_t parm6[4] = { 0xC1C1, 0xC2C2, 0xC3C3, 0x0000 };
struct { short dlen;
wchar_t dat[10];
} parm7 = {0x0009, 0xE2E2,0xE3E3,0xE4E4, 0xE5E5, 0xE6E6,
0xE7E7, 0xE8E8, 0xE9E9, 0xC1C1, 0x0000 };
char parm8[10];
char parm9[8];
char parm10[26];
main()
{
if (strncmp(SQLSTATE,"00000",5))
{
/* handle error or warning returned on CALL */
:
}
:
}
/**********************************************************/
/* Parse the arguments into individual parameters */
/**********************************************************/
parse arg ar1 ar2 ar3 ar4 ar5 ar6 ar7 ar8 ar9 ar10 ar11
/**********************************************************/
/* Verify that the values are as expected */
/**********************************************************/
if ar1<>"'TestingREXX'" then signal ar1tag
if ar2<>12345 then signal ar2tag
if ar3<>5.5 then signal ar3tag
if ar4<>3e3 then signal ar4tag
if ar5<>"'parm6'" then signal ar5tag
if ar6 <>"G'AABBCC'" then signal ar6tag
if ar7 <>"G'SSTTUUVVWWXXYYZZAA'" then ,
signal ar7tag
if ar8 <> "'1994-01-01'" then signal ar8tag
if ar9 <> "'13.01.00'" then signal ar9tag
if ar10 <> "'1994-01-01-13.01.00.000000'" then signal ar10tag
if ar11 <> "+0+0+0+0+0+0+0+0+0+0" then signal ar11tag
ar1tag:
say "ar1 did not match" ar1
exit(1)
ar2tag:
say "ar2 did not match" ar2
exit(1)
:
:
Note: The use of the DB2 object-oriented mechanisms (UDTs, UDFs, and LOBs) is
not restricted to the support of object-oriented applications. Just as the C++
programming language implements all sorts of non-object-oriented
applications, the object-oriented mechanisms provided by DB2 can also
support all kinds of non-object-oriented applications. UDTs, UDFs, and LOBs
are general-purpose mechanisms that can be used to model any database
application.
| The DB2 approach to support object extensions fits exactly into the relational
| paradigm. UDTs are data types that you define. UDTs, like built-in types, can be
| used to describe the data that is stored in columns of tables. UDFs are functions
| that you define. UDFs, like built-in functions or operators, support the manipulation
| of UDT instances. Thus, UDT instances are stored in columns of tables and
| manipulated by UDFs in SQL queries. UDTs can be internally represented in
| different ways. LOBs are just one example of this.
Along with storing large objects (LOBs), you will also need a method to refer to,
use, and modify each LOB in the database. Each DB2 table may have a large
amount of associated LOB data. Although a single row containing one or more LOB
values cannot exceed 15 megabytes, a table may contain nearly 256 gigabytes of
LOB data. The content of the LOB column of a particular row at any point in time
has a large object value.
You can refer to and manipulate LOBs using host variables just as you would any
other data type. However, host variables use the client memory buffer which may
not be large enough to hold LOB values. Other means are necessary to manipulate
these large values. Locators are useful to identify and manipulate a large object
value at the database server and for extracting pieces of the LOB value. File
reference variables are useful for physically moving a large object value (or a large
part of it) to and from the client.
The subsections that follow discuss the topics that are introduced above in more
detail.
The three large object data types have the following definitions:
v Character Large OBjects (CLOBs) — A character string made up of single-byte
characters with an associated code page. This data type is best for holding
text-oriented information where the amount of information could grow beyond the
limits of a regular VARCHAR data type (upper limit of 32K bytes). Code page
conversion of the information is supported as well as compatibility with the other
character types.
| v Double-Byte Character Large OBjects (DBCLOBs) — A character string made up
| of double-byte characters with an associated code page. This data type is best
| for holding text-oriented information where double-byte character sets are used.
| Again, code page conversion of the information is supported as well as
| compatibility with the other double-byte character types.
v Binary Large OBjects (BLOBs) — A binary string made up of bytes with no
associated code page. This data type may be the most useful because it can
store binary data. Therefore, it is a perfect source type for use by User-defined
Distinct Types (UDTs). UDTs using BLOBs as the source type are created to
store image, voice, graphical, and other types of business or application-specific
data. For more information on UDTs, see “User-defined Distinct Types (UDT)” on
page 169.
The LOB locator is associated with a LOB value or LOB expression, not a row or
physical storage location in the database. Therefore, after selecting a LOB value
into a locator, you cannot perform an operation on the original row(s) or table(s)
that would have any effect on the value referenced by the locator. The value
associated with the locator is valid until the unit of work ends, or the locator is
explicitly freed, whichever comes first. The FREE LOCATOR statement releases a
locator from its associated value. In a similar way, a commit or roll-back operation
frees all LOB locators associated with the transaction.
LOB locators can also be passed between DB2 and UDFs. Within the UDF, those
functions that work on LOB data are available to manipulate the LOB values using
LOB locators.
The use of the LOB value within the program can help the programmer determine
which method is best. If the LOB value is very large and is needed only as an input
value for one or more subsequent SQL statements, keep the value in a locator.
| If the program needs the entire LOB value regardless of the size, then there is no
| choice but to transfer the LOB. Even in this case, there are still three options
| available to you. You can select the entire value into a regular or file reference host
| variable. You may also select the LOB value into a locator and read it piecemeal
| from the locator into a regular host variable, as suggested in the following example.
#ifdef DB2MAC
char * bufptr;
#endif
do {
EXEC SQL FETCH c1 INTO :number, :resume :lobind; 2
if (SQLCODE != 0) break;
if (lobind < 0) {
printf ("NULL LOB indicated\n");
} else {
/* EVALUATE the LOB LOCATOR */
/* Locate the beginning of "Department Information" section */
EXEC SQL VALUES (POSSTR(:resume, 'Department Information'))
INTO :deptInfoBeginLoc;
CHECKERR ("VALUES1");
#ifdef DB2MAC
/* Need to convert the newline character for the Mac */
bufptr = &(buffer[0]);
while ( *bufptr != '\0' ) {
if ( *bufptr == 0x0A ) *bufptr = 0x0D;
bufptr++;
}
#endif
printf ("%s\n",buffer);
Data Division.
Working-Storage Section.
copy "sqlenv.cbl".
copy "sql.cbl".
copy "sqlca.cbl".
EXEC SQL BEGIN DECLARE SECTION END-EXEC. 1
01 userid pic x(8).
01 passwd.
49 passwd-length pic s9(4) comp-5 value 0.
49 passwd-name pic x(18).
01 empnum pic x(6).
01 di-begin-loc pic s9(9) comp-5.
01 di-end-loc pic s9(9) comp-5.
01 resume USAGE IS SQL TYPE IS CLOB-LOCATOR.
01 di-buffer USAGE IS SQL TYPE IS CLOB-LOCATOR.
01 lobind pic s9(4) comp-5.
01 buffer USAGE IS SQL TYPE IS CLOB(1K).
EXEC SQL END DECLARE SECTION END-EXEC.
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
Move 0 to buffer-length.
perform Fetch-Loop thru End-Fetch-Loop
until SQLCODE not equal 0.
* display contents of the buffer.
display buffer-data(1:buffer-length).
Fetch-Loop Section.
EXEC SQL FETCH c1 INTO :empnum, :resume :lobind 2
END-EXEC.
if SQLCODE not equal 0
go to End-Fetch-Loop.
* check to see if the host variable indicator returns NULL.
if lobind less than 0 go to NULL-lob-indicated.
* Value exists. Evaluate the LOB locator.
* Locate the beginning of "Department Information" section.
EXEC SQL VALUES (POSSTR(:resume, 'Department Information'))
INTO :di-begin-loc END-EXEC.
move "VALUES1" to errloc.
call "checkerr" using SQLCA errloc.
go to End-Fetch-Loop.
NULL-lob-indicated.
display "NULL LOB indicated".
End-Fetch-Loop. exit.
End-Prog.
stop run.
For very large objects, files are natural containers. It is likely that most LOBs begin
as data stored in files on the client before they are moved to the database on the
server. The use of file reference variables assists in moving LOB data. Programs
use file reference variables to transfer LOB data from the IFS file directly to the
database engine. To carry out the movement of LOB data, the application does not
have to write utility routines to read and write files using host variables.
A file reference variable has a data type of BLOB, CLOB, or DBCLOB. It is used
either as the source of data (input) or as the target of data (output). The file
reference variable may have a relative file name or a complete path name of the file
(the latter is advised). The file name length is specified within the application
program. The data length portion of the file reference variable is unused during
input. During output, the data length is set by the application requestor code to the
length of the new data that is written to the file.
| When using file reference variables there are different options on both input and
| output. You must choose an action for the file by setting the file_options field in
| the file reference variable structure. Choices for assignment to the field covering
| both input and output values are shown below.
Values (shown for C) and options when using input file reference variables are as
follows:
| v SQL_FILE_READ (Regular file) — This option has a value of 2. This is a file that
| can be opened, read, and closed. DB2 determines the length of the data in the file
| (in bytes) when opening the file. DB2 then returns the length through the
| data_length field of the file reference variable structure. (The value for COBOL is
| SQL-FILE-READ.)
Values and options when using output file reference variables are as follows:
| v SQL_FILE_CREATE (New file) — This option has a value of 8. This option
| creates a new file. Should the file already exist, an error message is returned.
| (The value for COBOL is SQL-FILE-CREATE.)
| v SQL_FILE_OVERWRITE (Overwrite file) — This option has a value of 16. This
| option creates a new file if none already exists. If the file already exists, the new
| data overwrites the data in the file. (The value for COBOL is
| SQL-FILE-OVERWRITE.)
| v SQL_FILE_APPEND (Append file) — This option has a value of 32. This option
| has the output appended to the file, if it exists. Otherwise, it creates a new file.
| (The value for COBOL is SQL-FILE-APPEND.)
Note: If a LOB file reference variable is used in an OPEN statement, do not delete
the file associated with the LOB file reference variable until the cursor is
closed.
For more information about the integrated file system, see Integrated File System
| Introduction.
C Sample: LOBFILE.SQC
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sql.h>
#include "util.h"
EXEC SQL SELECT resume INTO :resume :lobind FROM emp_resume 3
WHERE resume_format='ascii' AND empno='000130';
if (lobind < 0) {
printf ("NULL LOB indicated \n");
} else {
printf ("Resume for EMPNO 000130 is in file : RESUME.TXT\n");
} /* endif */
EXEC SQL CONNECT RESET;
CHECKERR ("CONNECT RESET");
return 0;
}
/* end of program : LOBFILE.SQC */
Data Division.
Working-Storage Section.
copy "sqlenv.cbl".
copy "sql.cbl".
copy "sqlca.cbl".
EXEC SQL BEGIN DECLARE SECTION END-EXEC. 1
Procedure Division.
Main Section.
display "Sample COBOL program: LOBFILE".
* Get database connection information.
display "Enter your user id (default none): "
with no advancing.
accept userid.
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
* Passwords in a CONNECT statement must be entered in a VARCHAR
* format with the length of the input string.
inspect passwd-name tallying passwd-length for characters
before initial " ".
NULL-LOB-indicated.
display "NULL LOB indicated".
End-Main.
EXEC SQL CONNECT RESET END-EXEC.
move "CONNECT RESET" to errloc.
call "checkerr" using SQLCA errloc.
End-Prog.
stop run.
| The following example shows how to insert data from a regular file referenced by
| :hv_text_file into a CLOB column:
| strcpy(hv_text_file.name, "/home/userid/dirname/filnam.1");
| hv_text_file.name_length = strlen("/home/userid/dirname/filnam.1");
| hv_text_file.file_options = SQL_FILE_READ; /* this is a 'regular' file */
|
| EXEC SQL INSERT INTO CLOBTAB
| VALUES(:hv_text_file);
| For example, say you have a table that holds three columns: ColumnOne Char(10),
| ColumnTwo CLOB(40K), and ColumnThree BLOB(10M). If you were to issue a
| DSPPFM of this table, each row of data would look as follows.
| v For ColumnOne: 10 bytes filled with character data.
| v For ColumnTwo: 22 bytes filled with hex zeros and 16 bytes filled with
| ’*POINTER ’.
| v For ColumnThree: 16 bytes filled with hex zeros and 16 bytes filled with
| ’*POINTER ’.
| The full set of commands that display LOB columns in this way is:
| v Display Physical File Member (DSPPFM)
| v Copy File (CPYF) when the value *PRINT is specified for the TOFILE keyword
| v Display Journal (DSPJRN)
| v Retrieve Journal Entry (RTVJRNE)
| v Receive Journal Entry (RCVJRNE) when the values *TYPE1, *TYPE2, *TYPE3
| and *TYPE4 are specified for the ENTFMT keyword.
| For more information on the Journal handling of LOB columns, refer to the
| "Working with Journal Entries, Journals and Journal Receivers" chapter of the
| Backup and Recovery book.
| In addition, SQL UDFs provide support for manipulating large object and
| DataLink types. While the database provides several built-in functions that are
| useful for working with these data types, SQL UDFs give users a way to extend
| the capabilities of the database in this area to whatever specialization is
| required.
| When the application receives each row, it runs SELECTION_CRITERIA against the data to decide
| if it is interested in processing the data further. Here, every row of table T must
| be passed back to the application. But, if SELECTION_CRITERIA() is implemented
| as a UDF, your application can issue the following statement:
| SELECT C FROM T WHERE SELECTION_CRITERIA(A,B)=1
| In this case, only the rows and columns of interest are passed across the interface
| between the application and the database.
| Another case where a UDF can offer a performance benefit is when dealing with
| Large Objects (LOB). Suppose you have a function that extracts some
| information from a value of one of the LOB types. You can perform this extraction
| right on the database server and pass only the extracted value back to the
| application. This is more efficient than passing the entire LOB value back to the
| application and then performing the extraction. The performance value of
| packaging this function as a UDF could be enormous, depending on the
| particular situation. (Note that you can also extract a portion of a LOB by using a
| LOB locator. See “Indicator Variables and LOB Locators” on page 151 for an
| example of a similar scenario.)
v Object Orientation.
| You can implement the behavior of a user-defined distinct type (UDT), also called
| distinct type, using a UDF. For more information on UDTs, see “User-defined
| Distinct Types (UDT)” on page 169. For additional details on UDTs and the
| important concept of castability discussed herein, see the CREATE DISTINCT
| TYPE statement in the DB2 UDB for AS/400 SQL Reference. When you create a
| distinct type, you are automatically provided cast functions between the distinct
| type and its source type. You may also be provided comparison operators such
| as =, >, <, and so on, depending on the source type. You have to provide any
| additional behavior yourself. It is best to keep the behavior of a distinct type in
| the database where all of the users of the distinct type can easily access it. You
| can use UDFs, therefore, as the implementation mechanism.
| For example, suppose that you have a BOAT distinct type, defined over a one
| megabyte BLOB. The type is created with the following statement:
| CREATE DISTINCT TYPE BOAT AS BLOB(1MEG)
| This simple example returns all the boats from BOATS_INVENTORY from the
| same designer that are bigger than a particular boat in MY_BOATS. Note that
| the example only passes the rows of interest back to the application because the
| comparison occurs in the database server. In fact, it completely avoids passing
| any values of data type BOAT. This is a significant improvement in storage and
| performance as BOAT is based on a one megabyte BLOB data type.
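The query itself is not reproduced in this excerpt. A hypothetical version with the shape described might look like the following; the column names (BOAT_ID, DESIGNER, NAME, BOAT) and the comparison UDF BIGGER are assumptions for illustration only:

```sql
SELECT I.BOAT_ID
  FROM BOATS_INVENTORY I, MY_BOATS M
 WHERE M.NAME = 'Pinta'
   AND I.DESIGNER = M.DESIGNER
   AND BIGGER(I.BOAT, M.BOAT) = 1
```

Because BIGGER runs at the database server, only the BOAT_ID values of qualifying rows cross the interface; no one-megabyte BOAT value is ever passed back to the application.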
UDF Concepts
The following is a discussion of the important concepts you need to know prior to
coding UDFs:
Function Name
| v Full name of a function.
| The full name of a function is <schema-name>.<function-name>. You can use
| this full name anywhere a function reference is permitted. However, you may
| also omit the <schema-name>., in which case DB2 must determine the function
| to which you are referring. For example:
| SNOWBLOWER_SIZE FOO SUBSTR FLOOR
| v Path
| The concept of path is central to DB2’s resolution of unqualified references that
| occur when schema-name is not specified. For the use of path in DDL statements
| that refer to functions, see the description of the corresponding CREATE
| FUNCTION statement in the DB2 UDB for AS/400 SQL Reference. The path is
| an ordered list of schema names. It provides a set of schemas for resolving
| unqualified references to UDFs as well as UDTs. In cases where a function
| reference matches functions in more than one schema in the path, the order of
| the schemas in the path is used to resolve this match. The path is established by
| means of the SQLPATH option on the precompile and bind commands for static
| SQL. The path is set by the SET PATH statement for dynamic SQL. When the
| first SQL statement that runs in an activation group runs with SQL naming, the
| path has the following default value:
| "QSYS","QSYS2","<ID>"
| This applies to both static and dynamic SQL, where <ID> represents the current
| statement authorization ID.
| When the first SQL statement in an activation group runs with system naming,
| the default path is *LIBL.
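For dynamic SQL, the SET PATH statement mentioned above changes the path for the activation group. A sketch follows; the MATH schema name is illustrative, and the exact syntax should be checked against the SET PATH statement in the DB2 UDB for AS/400 SQL Reference:

```sql
SET PATH = "QSYS","QSYS2","MATH"
```

After this statement, an unqualified function reference is resolved by searching QSYS, QSYS2, and MATH, in that order.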
| v Overloaded function names.
| Function names can be overloaded, which means that multiple functions, even in
| the same schema, can have the same name. Two functions cannot, however,
| have the same signature, which can be defined to be the qualified function name
| concatenated with the defined data types of all the function parameters in the
| order in which they are defined. For an example of an overloaded function, see
| “Example: BLOB String Search” on page 162. See the DB2 UDB for AS/400 SQL
| Reference book for more information on signature and function resolution.
| v Function resolution.
| The function resolution algorithm takes overloading and the function path into
| account to choose the best fit for every function reference, whether it is a
| qualified or an unqualified reference. All functions, even built-in functions,
| are processed through the function resolution algorithm.
| v Types of function.
| There are several types of functions:
| – Built-in. These are functions provided by and shipped with the database.
| SUBSTR() is an example.
| – System-generated. These are functions implicitly generated by the database
| engine when a DISTINCT TYPE is created. These functions provide casting
| operations between the DISTINCT TYPE and its base type.
| – User-defined. These are functions created by users and registered to the
| database.
| A column function receives a set of like values (a column of data) and returns a
| single value answer from this set of values. These are also called aggregating
| functions in DB2. Some built-in functions are column functions. An example of a
| column function is the built-in function AVG(). An external UDF cannot be defined
| as a column function. However, a sourced UDF is defined to be a column
| function if it is sourced on one of the built-in column functions. The latter is useful
| for distinct types. For example, if a distinct type SHOESIZE exists that is defined
| with base type INTEGER, you could define a UDF, AVG(SHOESIZE), as a column
| function sourced on the existing built-in column function, AVG(INTEGER).
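The sourced column function just described could be registered as follows. This is a sketch: the qualifying schema for the built-in AVG and the exact clause forms should be checked against the CREATE FUNCTION statement in the DB2 UDB for AS/400 SQL Reference:

```sql
CREATE FUNCTION AVG (SHOESIZE)
  RETURNS SHOESIZE
  SOURCE "QSYS2".AVG(INTEGER)
```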
| The concept of path, the SET PATH statement, and the function resolution algorithm
| are discussed in detail in the DB2 UDB for AS/400 SQL Reference. The SQLPATH
| precompile option is discussed in the command appendix.
Implementing UDFs
There are three types of UDFs: sourced, external, and SQL. The implementation of
each type is considerably different.
v Sourced UDFs. These are simply functions registered to the database that
themselves reference another function. They, in effect, map the sourced function.
As such, nothing more is required in implementing these functions than
registering them to the database using the CREATE FUNCTION statement.
v External functions. These are references to programs and service programs
written in a high level language such as C, COBOL, or RPG. Once the function is
registered to the database, the database will invoke the program or service
program whenever the function is referenced in a DML statement. As such,
external UDFs require that the UDF writer, besides knowing the high level
language and how to develop code in it, must understand the interface between
the program and the database. See “Chapter 9. Writing User-Defined Functions
(UDFs)” on page 185 for more information on writing external functions.
v SQL UDFs. SQL UDFs are functions written entirely in the SQL language. Their
’code’ is actually SQL statements embedded within the CREATE FUNCTION
statement itself. SQL UDFs provide several advantages:
– They are written in SQL, making them quite portable.
– Defining the interface between the database and the function is by use of SQL
declares, with no need to worry about details of actual parameter passing.
– They allow the passing of large objects, datalinks, and UDTs as parameters,
and subsequent manipulation of them in the function itself. More information
about SQL functions can be found in “Chapter 9. Writing User-Defined
Functions (UDFs)” on page 185.
1. Registering the UDF with DB2. Regardless of which type of UDF is being
created, they all need to be registered to the database using the CREATE
FUNCTION statement. In the case of source functions, this registration step
does everything necessary to define the function to the database. For SQL
UDFs, the CREATE FUNCTION statement contains everything necessary to
define the function, including its body.
| After these steps are successfully completed, your UDF is ready for use in data
| manipulation language (DML) or data definition language (DDL) statements such as
| CREATE VIEW.
| Registering UDFs
| A UDF must be registered in the database before the function can be recognized
| and used by the database. You can register a UDF using the CREATE FUNCTION
| statement. You can find detailed explanations for this statement and its options in
| the DB2 UDB for AS/400 SQL Reference.
| The statement allows you to specify the language and name of the program, along
| with options such as DETERMINISTIC, ALLOW PARALLEL, and RETURN NULLS
| ON NULL INPUT. These options help to more specifically identify to the database
| the intention of the function and how calls to the database can be optimized.
| You should register the UDF to DB2 after you have written and completely tested
| the actual code. It is possible to define the UDF prior to actually writing it. However,
| to avoid any problems with running your UDF, you are encouraged to write and test
| it extensively before registering it. For information on testing your UDF, see
| “Chapter 9. Writing User-Defined Functions (UDFs)” on page 185.
| Example: Exponentiation
| Suppose you have written an external UDF to perform exponentiation of floating
| point values, and wish to register it in the MATH schema.
| CREATE FUNCTION MATH.EXPON (DOUBLE, DOUBLE)
| RETURNS DOUBLE
| EXTERNAL NAME 'MYLIB/MYPGM(MYENTRY)'
| LANGUAGE C
| PARAMETER STYLE DB2SQL
| NO SQL
| In this example, the system uses the RETURNS NULL ON NULL INPUT default
| value. This is desirable since you want the result to be NULL if either argument is
| NULL. Since you do not require a scratchpad and no final call is necessary, the NO
| SCRATCHPAD and NO FINAL CALL default values are used. As there is no reason
| why EXPON cannot be parallel, the ALLOW PARALLEL value is specified.
Additionally, Willie has written the function to return a FLOAT result. Suppose you
know that when it is used in SQL, it should always return an INTEGER. You can
create the following function:
CREATE FUNCTION FINDSTRING (CLOB(500K), VARCHAR(200))
RETURNS INTEGER
CAST FROM FLOAT
SPECIFIC "willie_find_feb95"
EXTERNAL NAME 'MYLIB/MYPGM(FINDSTR)'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION
RETURNS NULL ON NULL INPUT
| Note that a CAST FROM clause is used to specify that the UDF body really returns
| a FLOAT value but you want to cast this to INTEGER before returning the value to
| the statement which used the UDF. As discussed in the DB2 UDB for AS/400 SQL
| Reference, the INTEGER built-in function can perform this cast for you. Also, you
| wish to provide your own specific name for the function and later reference it in
| DDL (see “Example: String Search over UDT” on page 163). Because the UDF was
| not written to handle NULL values, you use the RETURNS NULL ON NULL INPUT.
| And because there is no scratchpad, you use the NO SCRATCHPAD and NO
| FINAL CALL default values. As there is no reason why FINDSTRING cannot be
| parallel, the ALLOW PARALLEL default value is used.
Note that this FINDSTRING function has a different signature from the
FINDSTRING functions in “Example: BLOB String Search” on page 162, so there is
no problem overloading the name. You wish to provide your own specific name for
possible later reference in DDL. Because you are using the SOURCE clause, you
cannot use the EXTERNAL NAME clause or any of the related keywords specifying
function attributes. These attributes are taken from the source function. Finally,
observe that in identifying the source function you are using the specific function
name explicitly provided in “Example: BLOB String Search” on page 162. Because
this is an unqualified reference, the schema in which this source function resides
must be in the function path, or the reference will not be resolved.
| Observe that CAST FROM and SPECIFIC are not specified, but that NOT
| DETERMINISTIC is specified.
| Note that in the SOURCE clause you have qualified the function name, just in case
| there might be some other AVG function lurking in your SQL path.
| Example: Counting
| Your simple counting function returns a 1 the first time and increments the result by
| one each time it is called. This function takes no SQL arguments, and by definition
| it is a NOT DETERMINISTIC function since its answer varies from call to call. It
| uses the scratchpad to save the last value returned, and each time it is invoked it
| increments this value and returns it.
| CREATE FUNCTION COUNTER ()
| RETURNS INT
| EXTERNAL NAME 'MYLIB/MYFUNCS(CTR)'
| LANGUAGE C
| PARAMETER STYLE DB2SQL
| NO SQL
| NOT DETERMINISTIC
| NOT FENCED
| SCRATCHPAD 4
| DISALLOW PARALLEL
| Note that no parameter definitions are provided, just empty parentheses. The above
| function specifies SCRATCHPAD, and uses the default specification of NO FINAL
| CALL. In this case, the size of the scratchpad is set to only 4 bytes, which is
| sufficient for a counter. Since the COUNTER function requires that a single
| scratchpad be used to operate properly, DISALLOW PARALLEL is added to prevent
| DB2 from operating it in parallel.
Using UDFs
Scalar and column UDFs can be invoked within an SQL statement almost
everywhere that an expression is valid. There are, however, a few restrictions on
UDF usage:
v UDFs and system generated functions cannot be specified in check constraints.
Check constraints also cannot contain references to the built-in functions
DLVALUE, DLURLPATH, DLURLPATHONLY, DLURLSCHEME,
DLURLCOMPLETE, or DLURLSERVER.
v External UDFs, SQL UDFs, and the built-in functions DLVALUE, DLURLPATH,
DLURLPATHONLY, DLURLSCHEME, DLURLCOMPLETE, and DLURLSERVER
cannot be referenced in an ORDER BY or GROUP BY clause, unless the SQL
statement is read-only and allows temporary processing (ALWCPYDTA(*YES) or
ALWCPYDTA(*OPTIMIZE)).
Refer to “UDF Concepts” on page 158 for a summary of the use and importance of
the path and the function resolution algorithm. You can find the details for both of
these concepts in the DB2 UDB for AS/400 SQL Reference. The resolution of any
Data Manipulation Language (DML) reference to a function uses the function
resolution algorithm, so it is important to understand how it works.
Referring to Functions
Each reference to a function, whether it is a UDF, or a built-in function, contains the
following syntax:
function_name ( [ [ ALL | DISTINCT ] expression [ , expression ]... ] )
| The position of the arguments is important and must conform to the function
| definition for the semantics to be correct. Both the position of the arguments and
| the function definition must conform to the function body itself. DB2 does not
| attempt to shuffle arguments to better match a function definition, and DB2 does not
| attempt to determine the semantics of the individual function parameters.
As the function selection logic does not know what data type the argument may turn
out to be, it cannot resolve the reference. You can use the CAST specification to
provide a type for the parameter marker, for example INTEGER, and then the
function selection logic can proceed:
BLOOP(CAST(? AS INTEGER))
Only the BLOOP functions in schema PABLO are considered. It does not matter
that user SERGE has defined a BLOOP function, or whether or not there is a
built-in BLOOP function. Now suppose that user PABLO has defined two BLOOP
functions in his schema:
CREATE FUNCTION BLOOP (INTEGER) RETURNS ...
CREATE FUNCTION BLOOP (DOUBLE) RETURNS ...
BLOOP is thus overloaded within the PABLO schema, and the function selection
algorithm would choose the best BLOOP, depending on the data type of the
argument, column1. In this case, both of the PABLO.BLOOPs take numeric
arguments, and if column1 is not one of the numeric types, the statement will fail.
On the other hand if column1 is either SMALLINT or INTEGER, function selection
will resolve to the first BLOOP, while if column1 is DECIMAL or DOUBLE, the
second BLOOP will be chosen.
| You should investigate these other functions in the DB2 UDB for AS/400 SQL
| Reference. The INTEGER function is a built-in function in the QSYS2 schema.
| You have created the two BLOOP functions cited in “Using Qualified Function
| Reference” on page 166, and you want and expect one of them to be chosen. If the
| following default function path is used, the first BLOOP is chosen (since column1 is
| INTEGER), if there is no conflicting BLOOP in QSYS or QSYS2:
| "QSYS","QSYS2","PABLO"
| However, suppose you have forgotten that you are using a script for precompiling
and binding which you previously wrote for another purpose. In this script, you
explicitly coded your SQLPATH parameter to specify the following function path for
another reason that does not apply to your current work:
"KATHY","QSYS","QSYS2","PABLO"
If Kathy has written a BLOOP function for her own purposes, the function selection
could very well resolve to Kathy’s function, and your statement would execute
without error. You are not notified because DB2 assumes that you know what you
are doing. It becomes your responsibility to identify the incorrect output from your
statement and make the required correction.
Note that you are not permitted to overload the built-in conditional operators such
as >, =, LIKE, IN, and so on, in this way.
v The function selection algorithm does not consider the context of the reference in
resolving to a particular function. Look at these BLOOP functions, modified a bit
from before:
CREATE FUNCTION BLOOP (INTEGER) RETURNS INTEGER ...
CREATE FUNCTION BLOOP (DOUBLE) RETURNS CHAR(10)...
Because the best match, resolved using the SMALLINT argument, is the first
BLOOP defined above, the second operand of the CONCAT resolves to data
type INTEGER. The statement fails because CONCAT demands string
arguments. If the first BLOOP was not present, the other BLOOP would be
chosen and the statement execution would be successful.
| v UDFs can be defined with parameters or results having any of the LOB types:
| BLOB, CLOB, or DBCLOB. DB2 will materialize the entire LOB value in storage
| before invoking such a function, even if the source of the value is a LOB locator
| host variable. For example, consider the following fragment of a C language
| application:
| EXEC SQL BEGIN DECLARE SECTION;
| SQL TYPE IS CLOB(150K) clob150K; /* LOB host var */
| SQL TYPE IS CLOB_LOCATOR clob_locator1; /* LOB locator host var */
| char string[40]; /* string host var */
| EXEC SQL END DECLARE SECTION;
If there are multiple BOAT distinct types in the database, or BOAT UDFs in other
schemas, you must exercise care with your function path. Otherwise your results
may be ambiguous.
Defining a UDT
UDTs, like other objects such as tables, indexes, and UDFs, need to be defined
with a CREATE statement.
Use the CREATE DISTINCT TYPE statement to define your new UDT. Detailed
explanations for the statement syntax and all its options are found in the DB2 UDB
for AS/400 SQL Reference.
Example: Money
Suppose you are writing applications that need to handle different currencies and
wish to ensure that DB2 does not allow these currencies to be compared or
manipulated directly with one another in queries. Remember that conversions are
necessary whenever you want to compare values of different currencies. So you
define as many UDTs as you need; one for each currency that you may need to
represent:
| CREATE DISTINCT TYPE US_DOLLAR AS DECIMAL (9,2)
| CREATE DISTINCT TYPE CANADIAN_DOLLAR AS DECIMAL (9,2)
| CREATE DISTINCT TYPE GERMAN_MARK AS DECIMAL (9,2)
| Example: Resume
| Suppose you would like to keep the forms filled out by applicants to your
| company in a DB2 table, and you are going to use functions to extract information
| from these forms. Because these functions cannot be applied to regular character
| strings (they would not be able to find the information they are supposed to
| return), you define a UDT to represent the filled forms:
| CREATE DISTINCT TYPE PERSONAL.APPLICATION_FORM AS CLOB(32K)
Example: Sales
Suppose you want to define tables to keep your company’s sales in different
countries as follows:
CREATE TABLE US_SALES
(PRODUCT_ITEM INTEGER,
MONTH INTEGER CHECK (MONTH BETWEEN 1 AND 12),
YEAR INTEGER CHECK (YEAR > 1985),
TOTAL US_DOLLAR)
The UDTs in the above examples are created using the same CREATE DISTINCT
TYPE statements as in “Example: Money” on page 171. Note that the above examples
use check constraints. For information on check constraints, see the DB2 UDB for
AS/400 SQL Reference.
| You have fully qualified the UDT name because its qualifier is not the same as your
| authorization ID and you have not changed the default function path. Remember
| that whenever type and function names are not fully qualified, DB2 searches
| through the schemas listed in the current function path and looks for a type or
| function name matching the given unqualified name.
Manipulating UDTs
One of the most important concepts associated with UDTs is strong typing. Strong
typing guarantees that only functions and operators defined on the UDT can be
applied to its instances.
Strong typing is important to ensure that the instances of your UDTs are correct.
For example, if you have defined a function to convert US dollars to Canadian
dollars according to the current exchange rate, you do not want this same function
to be used to convert German marks to Canadian dollars because it will certainly
return the wrong amount.
As a consequence of strong typing, DB2 does not allow you to write queries that
compare, for example, UDT instances with instances of the UDT source type. For
the same reason, DB2 will not let you apply functions defined on other types to
UDTs. If you want to compare instances of UDTs with instances of another type,
you have to cast the instances of one or the other type. In the same sense, you
have to cast the UDT instance to the type of the parameter of a function that is not
defined on a UDT if you want to apply this function to a UDT instance.
Because you cannot compare US dollars with instances of the source type of US
dollars (that is, DECIMAL) directly, you have used the cast function provided by
DB2 to cast from DECIMAL to US dollars. You can also use the other cast function
provided by DB2 (that is, the one to cast from US dollars to DECIMAL) and cast the
column total to DECIMAL. Either way you decide to cast, from or to the UDT, you
can use the cast specification notation to perform the casting, or the functional
notation. That is, you could have written the above query as:
SELECT PRODUCT_ITEM
FROM US_SALES
WHERE TOTAL > CAST (100000 AS us_dollar)
AND MONTH = 7
AND YEAR = 1992
At first glance, such a UDF may appear easy to write. However, not all C compilers
support DECIMAL values. The UDTs representing different currencies have been
defined as DECIMAL. Your UDF will need to receive and return DOUBLE values,
since this is the only data type provided by C that allows the representation of a
DECIMAL value without losing the decimal precision. Thus, your UDF should be
defined as follows:
| CREATE FUNCTION CDN_TO_US_DOUBLE(DOUBLE) RETURNS DOUBLE
| EXTERNAL NAME 'MYLIB/CURRENCIES(C_CDN_US)'
| LANGUAGE C
| PARAMETER STYLE DB2SQL
| NO SQL
| NOT DETERMINISTIC
The question now is, how do you pass Canadian dollars to this UDF and get U.S.
dollars from it? The Canadian dollars must be cast to DECIMAL values. The
DECIMAL values must be cast to DOUBLE. You also need to have the returned
DOUBLE value cast to DECIMAL and the DECIMAL value cast to U.S. dollars.
Such casts are performed automatically by DB2 anytime you define sourced UDFs,
whose parameter and return type do not exactly match the parameter and return
type of the source function. Therefore, you need to define two sourced UDFs. The
first brings the DOUBLE values to a DECIMAL representation. The second brings
the DECIMAL values to the UDT. That is, you define the following:
CREATE FUNCTION CDN_TO_US_DEC (DECIMAL(9,2)) RETURNS DECIMAL(9,2)
SOURCE CDN_TO_US_DOUBLE (DOUBLE)
That is, C1 (in Canadian dollars) is cast to DECIMAL, which in turn is cast to a
DOUBLE value that is passed to the CDN_TO_US_DOUBLE function. This function accesses
the exchange rate file and returns a DOUBLE value (representing the amount in U.S.
dollars) that is cast to DECIMAL, and then to U.S. dollars.
A function to convert German marks to U.S. dollars would be similar to the example
above:
| CREATE FUNCTION GERMAN_TO_US_DOUBLE(DOUBLE)
| RETURNS DOUBLE
| EXTERNAL NAME 'MYLIB/CURRENCIES(C_GER_US)'
| LANGUAGE C
| PARAMETER STYLE DB2SQL
| NO SQL
| NOT DETERMINISTIC
|
| CREATE FUNCTION GERMAN_TO_US_DEC (DECIMAL(9,2))
| RETURNS DECIMAL(9,2)
| SOURCE GERMAN_TO_US_DOUBLE(DOUBLE)
|
| CREATE FUNCTION US_DOLLAR(GERMAN_MARK) RETURNS US_DOLLAR
| SOURCE GERMAN_TO_US_DEC (DECIMAL())
Because you cannot directly compare US dollars with Canadian dollars or German
Marks, you use the UDF to cast the amount in Canadian dollars to US dollars, and
the UDF to cast the amount in German Marks to US dollars. You cannot cast them
all to DECIMAL and compare the converted DECIMAL values because the amounts
are not monetarily comparable. That is, the amounts are not in the same currency.
You want to know the total of sales in Germany for each product in the year of
1994. You would like to obtain the total sales in US dollars:
SELECT PRODUCT_ITEM, US_DOLLAR (SUM (TOTAL))
FROM GERMAN_SALES
WHERE YEAR = 1994
GROUP BY PRODUCT_ITEM
You could not write SUM (us_dollar (total)), unless you had defined a SUM
function on US dollar in a manner similar to the above.
You do not explicitly invoke the cast function to convert the character string to the
UDT personal.application_form. This is because DB2 lets you assign instances
of the source type of a UDT to targets having that UDT.
You made use of DB2’s cast specification to tell DB2 that the type of the parameter
marker is CLOB(32K), a type that is assignable to the UDT column. Remember that
you cannot declare a host variable of a UDT type, since host languages do not
support UDTs. Therefore, you cannot specify that the type of a parameter marker is
a UDT.
Now suppose your supervisor requests that you maintain the annual total sales in
US dollars of each product and in each country, in separate tables:
You cast Canadian dollars to US dollars and German Marks to US dollars because
UDTs are union compatible only with the same UDT. You must use the functional
notation to cast between UDTs since the cast specification only lets you cast
between UDTs and their source types.
| All the functions provided by DB2 LOB support are applicable to UDTs whose
| source types are LOBs. Therefore, you have used LOB file reference variables to
| assign the contents of the file to the UDT column. You have not used the cast
| function to convert values of BLOB type into your e-mail type, because DB2 lets
| you assign values of the source type of a distinct type to targets of that
| distinct type.
You have used the UDFs defined on the UDT in this SQL query since they are the
only means to manipulate the UDT. In this sense, your UDT e-mail is completely
encapsulated. That is, its internal representation and structure are hidden and can
only be manipulated by the defined UDFs. These UDFs know how to interpret the
data without the need to expose its representation.
Suppose you need to know the details of all the e-mail your company received in
1994 which had to do with the performance of your products in the marketplace.
SELECT SENDER (MESSAGE), SENDING_DATE (MESSAGE), SUBJECT (MESSAGE)
FROM ELECTRONIC_MAIL
You have used the contains UDF, which is capable of analyzing the contents of the
message, searching for relevant keywords or synonyms.
Because your host variable is of type BLOB locator (the source type of the UDT),
you have explicitly converted the BLOB locator to your UDT, whenever it was used
as an argument of a UDF defined on the UDT.
Using DataLinks
The DataLink data type is one of the basic building blocks for extending the types
of data that can be stored in database files. The idea of a DataLink is that the
actual data stored in the column is only a pointer to the object. This object can be
anything: an image file, a voice recording, a text file, and so on. The method used for
resolving to the object is to store a Uniform Resource Locator (URL). This means
that a row in a table can be used to contain information about the object in
traditional data types, and the object itself can be referenced using the DataLink
data type. The user can use new SQL scalar functions to get back the path to the
object and the server on which the object is stored. With the DataLink data type,
there is a fairly loose relationship between the row and the object. For instance,
deleting a row will sever the relationship to the object referenced by the DataLink,
but the object itself might not be deleted.
An SQL table created with a DataLink column can be used to hold information
about an object, without actually containing the object itself. This concept gives the
user much more flexibility in the types of data that can be managed using an SQL
table. If, for instance, the user has thousands of video clips stored in the integrated
file system of their AS/400, they may want to use an SQL table to contain
information about these video clips. But since the user already has the objects
stored in a directory, they simply want the SQL table to contain references to the
objects, not contain the actual bytes of storage. A good solution would be to use
DataLinks. The SQL table would use traditional SQL data types to contain
information about each clip, such as title, length, date, etc. But the clip itself would
be referenced using a DataLink column; each row in the table would store a URL
that locates the clip.
Using DataLinks also gives control over the objects while they are in "linked" status.
A DataLink column can be created such that the referenced object cannot be
deleted, moved, or renamed while there is a row in the SQL table that references
that object. This object would be considered linked. Once the row containing that
reference is deleted, the object is unlinked. To understand this concept fully, one
should know the levels of control that can be specified when creating a DataLink
column. Refer to the SQL Reference for the exact syntax used when creating
DataLink columns.
NO LINK CONTROL
When a column is created with NO LINK CONTROL, there is no linking that takes
place when rows are added to the SQL table. The URL is verified to be syntactically
correct, but there is no check to make sure that the server is accessible, or that the
file even exists.
The integrated file system is still responsible for managing permissions for the
linked object. The permissions are not modified during the link or unlink processes.
This option provides control of the object’s existence for the duration of time that it
is linked.
This option provides the control of preventing updates to the linked object for users
trying to access the object by direct means. Since the only access to the object is
by obtaining the access token from an SQL operation, an administrator can
effectively control access to the linked objects by using the database permissions to
the SQL table that contains the DataLink column.
When working with DataLinks, there are several steps that must be taken to
properly configure the system:
v TCP/IP must be configured on any systems that are going to be used when
working with DataLinks. This would include the systems on which the SQL tables
with DataLink columns are going to be created, as well as the systems that will
contain the objects to be linked. In most cases, this will be the same system.
Since the URL that is used to reference the object contains a TCP/IP server
name, this name must be recognized by the system that is going to contain the
DataLink. The command CFGTCP can be used to configure the TCP/IP names,
or to register a TCP/IP name server.
v The system that contains the SQL tables must have the Relational Database
Directory updated to reflect the local database system, and any optional remote
systems. The command WRKRDBDIRE can be used to add or modify entries in this
directory.
Once the DLFM has been started, there are some steps needed to configure the
DLFM. These DLFM functions are available via an executable script that can be
entered from the QShell interface. To get to the interactive shell interface, use the
CL command QSH. This will bring up a command entry screen from which you can
enter the DLFM script commands. The script command dfmadmin -help can be
used to display help text and syntax diagrams. For the most commonly used
functions, CL commands have also been provided. Using the CL commands, most
or all of the DLFM configuration can be accomplished without using the script
interface. Depending on your preferences, you can choose to use either the script
commands from the QSH command entry screen or the CL commands from the CL
command entry screen.
Adding a prefix - A prefix is a path or directory that will contain objects to be linked.
When setting up the DLFM on a system, the administrator must add any prefixes
that will be used for DataLinks. The script command dfmadmin -add_prefix is used
to add prefixes. The CL command to add prefixes is ADDPFXDLFM.
or
file://TESTSYS1/mydir/datalinks/text/story1.txt
For instance, on server TESTSYS1, where you have already added the
/mydir/datalinks/ prefix, you want SQL tables on the local system in either library
Once the DLFM has been started, and the prefixes and host database names have
been registered, you can begin linking objects in the file system.
| As a consequence of being invoked from such a low level, there are certain
| resources (locks and seizes) being held at the time the UDF is invoked and for the
| duration of the UDF execution. These resources are primarily locks on any tables
| and indexes involved in the SQL statement that is invoking the UDF. Due to these
| held resources, it is important that the UDF not perform operations that may take an
| extended period of time (minutes or hours). Because of the critical nature of holding
| resources for long periods of time, the database only waits for a certain period of
| time for the UDF to finish. If the UDF does not finish in the time allocated, the SQL
| statement will fail, which can be quite aggravating to the end user.
| The default UDF wait time used by the database should be more than sufficient to
| allow a normal UDF to run to completion. However, if you have a long running UDF
| and wish to increase the wait time, this can be done using the UDF_TIME_OUT
| option in the query INI file. See “Query Options File QAQQINI” on page 543 for
| more information on the INI file. Note, however, that there is a maximum time limit
| that the database will not exceed, regardless of the value specified for
| UDF_TIME_OUT.
| Since resources are held while the UDF is run, it is important that the UDF not
| operate on the same tables or indexes allocated for the original SQL statement or, if
| it does, that it does not perform an operation that conflicts with the one being
| performed in the SQL statement. Specifically, the UDF should not try to perform any
| insert, update or delete record operation on those tables.
| Because the UDF runs in the same job as the SQL statement, it shares much of the
| same environment as the SQL statement. However, because it runs under a
| separate thread, the following thread considerations apply:
| v the UDF can conflict with thread-level resources held by the SQL statement’s
| thread. Primarily, these are the table resources discussed above.
| v UDFs do not inherit any program adopted authority that may have been active
| at the time the SQL statement was invoked. UDF authority comes from either the
| authority associated with the UDF program itself or the authority of the user
| running the SQL statement.
| v the UDF cannot perform any operation that is blocked from being run in a
| secondary thread.
| v the UDF program must be created such that it either runs under a named
| activation group or in the activation group of its caller (ACTGRP parameter).
| Programs that specify ACTGRP(*NEW) will not be allowed to run as UDFs.
| Parallel processing
| A UDF can be defined to allow parallel processing. This means that the same UDF
| program can be running in multiple threads at the same time. Therefore, if ALLOW
| PARALLEL is specified for the UDF, ensure that it is thread safe. For more
| information about threads, see Database considerations for multithreaded
| programming.
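As a sketch of what thread safety means here (all names hypothetical), a UDF that keeps its state in a static variable is not safe under ALLOW PARALLEL, while one that keeps its state in the scratchpad DB2 passes to each invocation is:

```c
/* NOT thread-safe: every thread running the UDF shares this one counter,
   so parallel invocations race on the unsynchronized read-modify-write. */
static long shared_counter = 0;

long unsafe_counter(void) {
    return ++shared_counter;
}

/* Thread-safe: all state lives in the scratchpad that DB2 passes to each
   invocation, so concurrent threads never touch the same storage. */
struct scratchpad { long len; long counter; char not_used[96]; };

long safe_counter(struct scratchpad *scr) {
    return ++scr->counter;
}
```

A UDF written in the first style must be registered with DISALLOW PARALLEL; only the second style is a candidate for ALLOW PARALLEL.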
|
| Writing function code
| Writing function code involves knowing how to write the SQL or external function to
| perform the function. It also involves understanding the interface between the
| database and the function code to define it correctly, and determining packaging
| options when creating the executable program.
| The CREATE FUNCTION statement for SQL functions follows this general flow:
| CREATE FUNCTION function name(parameters) RETURNS return value
| LANGUAGE SQL
| BEGIN
| sql statements
| END
| The creation of an SQL function causes the registration of the UDF, generates the
| executable code for the function, and defines to the database the details of how
| parameters are actually passed. Therefore, writing these functions is quite clean
| and leaves less chance of introducing errors into the function.
| When defining and using the parameters in the UDF, care should be taken to
| ensure that no more storage is referenced for a given parameter than is defined for
| that parameter. The parameters are all stored in the same space and exceeding a
| given parameter’s storage space can overwrite another parameter’s value. This, in
| turn, can cause the function to see invalid input data or cause the value returned to
| the database to be invalid.
| There are four supported parameter styles available to external UDFs. For the most
| part, the styles differ in how many parameters are passed to the external program
| or service program.
| Parameter style SQL: The parameter style SQL conforms to the industry standard
| Structured Query Language (SQL). With parameter style SQL, the parameters are
| passed into the external program as follows (in the order specified):
|
| SQL-argument..., SQL-result, SQL-argument-ind..., SQL-result-ind,
| SQL-state, function-name, specific-name, diagnostic-message
|
| SQL-argument
| This argument is set by DB2 before calling the UDF. This value repeats n
| times, where n is the number of arguments specified in the function
| reference. The value of each of these arguments is taken from the
| expression specified in the function invocation. It is expressed in the data
| type of the defined parameter in the CREATE FUNCTION statement. Note: These
| parameters are treated as input only; any changes to the parameter values
| made by the UDF are ignored by DB2.
| SQL-result
| This argument is set by the UDF before returning to DB2. The database
| provides the storage for the return value. Since the parameter is passed by
| address, the address is of the storage where the return value should be
| placed. The database provides as much storage as needed for the return
| value as defined on the CREATE FUNCTION statement. If the CAST
| FROM clause is used in the CREATE FUNCTION statement, DB2 assumes
| the UDF returns the value as defined in the CAST FROM clause, otherwise
| DB2 assumes the UDF returns the value as defined in the RETURNS
| clause.
| SQL-argument-ind
| This argument is set by DB2 before calling the UDF. It can be used by the
| UDF to determine if the corresponding SQL-argument is null or not. The nth
| SQL-argument-ind corresponds to the nth SQL-argument, described
| previously. Each indicator is defined as a two-byte signed integer. It is set to
| one of the following values:
| 0 The argument is present and not null.
| -1 The argument is null.
| If the function is defined with RETURNS NULL ON NULL INPUT, the UDF
| does not need to check for a null value. However, if it is defined with
| CALLS ON NULL INPUT, any argument can be NULL and the UDF should
| check for null input. Note: these parameters are treated as input only; any
| changes to the parameter values made by the UDF are ignored by DB2.
| SQL-result-ind
| This argument is set by the UDF before returning to DB2. The database
| provides the storage for the return value. The argument is defined as a
| two-byte signed integer. If set to a negative value, the database interprets
| the result of the function as null. If set to zero or a positive value, the
| database uses the value returned in SQL-result. The database provides the
| storage for the return value indicator. Since the parameter is passed by
| address, the address is of the storage where the indicator value should be
| placed.
| SQL-state
| This argument is a CHAR(5) value that represents the SQLSTATE.
| This parameter is passed in from the database set to '00000' and can be
| set by the function as a result state for the function. While normally the
| SQLSTATE is not set by the function, it can be used to signal an error or
| warning to the database as follows:
| 01Hxx The function code detected a warning situation. This results in an
| SQL warning. Here, xx may be one of several possible strings.
| See Appendix B for more information on valid SQLSTATEs that the function
| may use.
| function-name
| This argument is set by DB2 before calling the UDF. It is a VARCHAR(139)
| value that contains the name of the function on whose behalf the function
| code is being invoked.
| The form of the function name that is passed is:
| <schema-name>.<function-name>
| This parameter is useful when the function code is being used by multiple
| UDF definitions so that the code can distinguish which definition is being
| invoked. Note: This parameter is treated as input only; any changes to the
| parameter value made by the UDF are ignored by DB2.
| specific-name
| This argument is set by DB2 before calling the UDF. It is a VARCHAR(128)
| value that contains the specific name of the function on whose behalf the
| function code is being invoked.
| Like function-name, this parameter is useful when the function code is
| being used by multiple UDF definitions so that the code can distinguish
| which definition is being invoked. See the CREATE FUNCTION for more
| information about specific-name. Note: This parameter is treated as input
| only; any changes to the parameter value made by the UDF are ignored by
| DB2.
| diagnostic-message
| This argument is set by DB2 before calling the UDF. It is a VARCHAR(70)
| value that can be used by the UDF to send message text back when an
| SQLSTATE warning or error is signaled by the UDF.
| It is initialized by the database on input to the UDF and may be set by the
| UDF with descriptive information. Message text is ignored by DB2 unless
| the SQL-state parameter is set by the UDF.
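Putting these parameters together, a minimal parameter style SQL UDF body might look like the following sketch (the function and all its names are hypothetical). It returns its integer argument doubled, and signals warning SQLSTATE '01H01' with message text when the input is null and the function was defined with CALLS ON NULL INPUT:

```c
#include <string.h>

/* Hypothetical parameter style SQL UDF: result = input * 2.
   Parameter order follows the style SQL calling sequence described above. */
void double_it(int *sql_argument,        /* input, set by DB2               */
               int *sql_result,          /* output, storage provided by DB2 */
               short *sql_argument_ind,  /* 0 = present, -1 = null          */
               short *sql_result_ind,    /* set negative to return null     */
               char *sql_state,          /* CHAR(5), arrives as "00000"     */
               char *function_name,      /* VARCHAR(139), input only        */
               char *specific_name,      /* VARCHAR(128), input only        */
               char *diagnostic_message) /* VARCHAR(70) message text        */
{
    if (*sql_argument_ind < 0) {         /* argument is null                */
        *sql_result_ind = -1;            /* return a null result            */
        memcpy(sql_state, "01H01", 5);   /* signal an SQL warning           */
        strcpy(diagnostic_message, "null input to double_it");
        return;
    }
    *sql_result = *sql_argument * 2;
    *sql_result_ind = 0;                 /* result is present and not null  */
}
```

Note that the message text is only delivered because the function also sets SQL-state; as described above, DB2 ignores diagnostic-message when SQL-state is left at '00000'.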
| Parameter style DB2SQL: With the DB2SQL parameter style, the same
| parameters and same order of parameters are passed into the external program or
| service program as are passed in for parameter style SQL. However, DB2SQL
| allows additional optional parameters to be passed along as well. If more than one
| of the optional parameters below is specified in the UDF definition, they are passed
| to the UDF in the order defined below. Refer to parameter style SQL for the
| common parameters.
| SQL-argument..., SQL-result, SQL-argument-ind..., SQL-result-ind,
| SQL-state, function-name, specific-name, diagnostic-message,
| [scratchpad,] [call-type,] [dbinfo]
|
| scratchpad
| This argument is set by DB2 before calling the UDF. It is only present if the
| CREATE FUNCTION statement for the UDF specified the SCRATCHPAD
| keyword. This argument is a structure with the following elements:
| v An INTEGER containing the length of the scratchpad.
| v The actual scratchpad, initialized to all binary 0’s by DB2 before the first
| call to the UDF.
| The scratchpad can be used by the UDF either as working storage or as
| persistent storage, since it is maintained across UDF invocations.
| call-type
| This argument is set by DB2 before calling the UDF. It is only present if the
| CREATE FUNCTION statement for the UDF specified the FINAL CALL
| keyword. It is an INTEGER value that contains one of the following values:
| -1 This is the first call to the UDF for this statement. A first call is a
| normal call in that all SQL argument values are passed.
| 0 This is a normal call. (All the normal input argument values are
| passed).
| 1 This is a final call. No SQL-argument or SQL-argument-ind values
| are passed. A UDF should not return any answer using the
| SQL-result or SQL-result-ind arguments. Both of these are ignored
| by DB2 upon return from the UDF. However, the UDF may set the
| SQL-state and diagnostic-message arguments. These arguments
| are handled in a way similar to other calls to the UDF.
| dbinfo This argument is set by DB2 before calling the UDF. It is only present if the
| CREATE FUNCTION statement for the UDF specifies the DBINFO keyword.
| The argument is a structure whose definition is contained in the sqludf
| include.
| Parameter Style GENERAL (or SIMPLE CALL): With parameter style GENERAL,
| the parameters are passed into the external service program just as they are
| specified in the CREATE FUNCTION statement. The format is:
SQL-result = func ( SQL-argument, ... )
|
| SQL-argument
| This argument is set by DB2 before calling the UDF. This value repeats n
| times, where n is the number of arguments specified in the function
| reference. The value of each of these arguments is taken from the
| expression specified in the function invocation. It is expressed in the data
| type of the defined parameter in the CREATE FUNCTION statement. Note:
| These parameters are treated as input only; any changes to the parameter
| values made by the UDF are ignored by DB2.
| Note: The return value must be defined as a simple structure. If, instead,
| the function was defined as simply int func1(), the return value would not be
| available for DB2 to use.
|
|
| Parameter style GENERAL WITH NULLS: With the GENERAL WITH NULLS parameter
| style, the parameters are passed into the external program or service program
| as follows:
|
| SQL-result = funcname ( SQL-argument, ... , SQL-argument-ind-array , SQL-result-ind )
|
| SQL-argument
| This argument is set by DB2 before calling the UDF. This value repeats n
| times, where n is the number of arguments specified in the function
| reference. The value of each of these arguments is taken from the
| expression specified in the function invocation. It is expressed in the data
| type of the defined parameter in the CREATE FUNCTION statement. Note:
| These parameters are treated as input only; any changes to the parameter
| values made by the UDF are ignored by DB2.
| SQL-argument-ind-array
| This argument is set by DB2 before calling the UDF. It can be used by the
| UDF to determine if one or more SQL-arguments are null or not. It is an
| array of two-byte signed integers, where the nth indicator corresponds to
| the nth SQL-argument. The UDF should check for null input. Note: This
| parameter is treated as input only; any changes to the parameter value
| made by the UDF are ignored by DB2.
| SQL-result-ind
| This argument is set by the UDF before returning to DB2. The database
| provides the storage for the return value. The argument is defined as a
| two-byte signed integer. If set to a negative value, the database interprets
| the result of the function as null. If set to zero or a positive value, the
| database uses the value returned in SQL-result. The database provides the
| storage for the return value indicator. Since the parameter is passed by
| address, the address is of the storage where the indicator value should be
| placed.
| SQL-result
| This value is returned by the UDF. DB2 copies the value into database
| storage. In order to return the value correctly, the function code must be a
| value-returning function. The database copies only as much of the value as
| defined for the return value as specified on the CREATE FUNCTION
| statement. If the CAST FROM clause is used in the CREATE FUNCTION
| statement, DB2 assumes the UDF returns the value as defined in the CAST
| FROM clause, otherwise DB2 assumes the UDF returns the value as
| defined in the RETURNS clause.
| In order to be returned correctly, the return value defined in the function
| code must be defined as a simple structure. For example, in C, to return an
| INTEGER, the function code would be defined as follows:
| typedef struct {
| int rtnint;
| } rtnval_t;
|
| rtnval_t func1(short *parm1, short parmind[], short *rtnind)
| {
| rtnval_t rtnval;
| .
| .
| .
| return(rtnval);
| }
| Note: The return value must be defined as a simple structure. If, instead,
| the function was defined as simply int func1(...), the return value would not
| be available for DB2 to use.
| The following examples show how to define the UDF several different ways.
| v Using an SQL function
| CREATE FUNCTION SQUARE( inval INT) RETURNS INT
| LANGUAGE SQL
| BEGIN
| RETURN(inval*inval);
| END
| v Using an external function, parameter style SQL:
| The CREATE FUNCTION statement:
| CREATE FUNCTION SQUARE(INT) RETURNS INT CAST FROM FLOAT
| LANGUAGE C
| EXTERNAL NAME 'MYLIB/MATH(SQUARE)'
| DETERMINISTIC
| NO SQL
| NO EXTERNAL ACTION
| PARAMETER STYLE SQL
| ALLOW PARALLEL
| The code:
| void SQUARE(int *inval,
| double *outval,
| short *inind,
| short *outind,
| char *sqlstate,
| char *funcname,
| char *specname,
| char *msgtext)
| {
| if (*inind<0)
| *outind=-1;
| else
| {
| *outval=*inval;
| *outval=(*outval)*(*outval);
| *outind=0;
| }
| return;
| }
| v Using an external function, parameter style GENERAL:
| The code:
| typedef struct {
| double outf;
| } outval_t;
|
| outval_t SQUARE(int *inval)
| {
| outval_t outval;
| outval.outf=*inval;
| outval.outf=(outval.outf)*(outval.outf);
| return(outval);
| }
| Example: Counter
| Suppose you want to simply number the rows in your SELECT statement. So you
| write a UDF which increments and returns a counter. This example uses an
| external function with DB2 SQL parameter style and a scratchpad.
| CREATE FUNCTION COUNTER()
| RETURNS INT
| SCRATCHPAD
| NOT DETERMINISTIC
| NO SQL
| NO EXTERNAL ACTION
| LANGUAGE C
| PARAMETER STYLE DB2SQL
| EXTERNAL NAME 'MYLIB/MATH(ctr)'
| DISALLOW PARALLELISM;
|
| /* structure scr defines the passed scratchpad for the function "ctr" */
| struct scr {
| long len;
| long countr;
| char not_used[96];
| };
|
| void ctr (
| long *out, /* output answer (counter) */
| short *outnull, /* output NULL indicator */
| char *sqlstate, /* SQL STATE */
| char *funcname, /* function name */
| char *specname, /* specific function name */
| char *mesgtext, /* message text insert */
| struct scr *scratchptr) { /* scratch pad */
|
| *out = ++scratchptr->countr; /* increment counter & copy out */
| *outnull = 0;
| return;
| }
| /* end of UDF : ctr */
Some dynamic SQL statements require use of address variables. RPG for AS/400
programs require the aid of PL/I, COBOL, C, or ILE RPG for AS/400 programs to
manage the address variables.
The examples in this chapter are PL/I examples. The following table shows all the
statements supported by DB2 UDB for AS/400 and indicates if they can be used in
a dynamic application.
Note: In the following table, the numbers in the Dynamic SQL column correspond
to the notes on the next page.
Table 22. List of SQL Statements Allowed in Dynamic Applications
SQL Statement Static SQL Dynamic SQL
ALTER TABLE Y Y
BEGIN DECLARE SECTION Y N
CALL Y Y
CLOSE Y N
COMMENT ON Y Y
COMMIT Y Y
CONNECT Y N
CREATE ALIAS Y Y
Notes:
1. Cannot be prepared, but is used to run prepared SQL statements. The SQL
statement must first be prepared with the PREPARE statement before it can
be run with the EXECUTE statement. See the example for PREPARE under
“Using the PREPARE and EXECUTE Statements” on page 200.
2. Cannot be prepared, but used with dynamic statement strings that do not have
any ? parameter markers. The EXECUTE IMMEDIATE statement causes the
statement strings to be prepared and run dynamically at program run time. See
example for EXECUTE IMMEDIATE under “Processing Non-SELECT
statements”.
3. Cannot be prepared, but used to parse, optimize, and set up dynamic SELECT
statements prior to running. See example for PREPARE under “Processing
Non-SELECT statements”.
4. Cannot be prepared, but used to define the cursor for the associated dynamic
SELECT statement prior to running.
5. A SELECT INTO statement cannot be prepared or used in EXECUTE
IMMEDIATE.
6. Cannot be used with EXECUTE or EXECUTE IMMEDIATE but can be prepared
and used with OPEN.
7. Cannot be prepared, but used to return a description of a prepared statement.
8. Can only be run using the Run SQL Statements (RUNSQLSTM) command.
9. Can only be used when running a REXX procedure.
There are two basic types of dynamic SQL statements: SELECT statements and
non-SELECT statements. Non-SELECT statements include such statements as
DELETE, INSERT, and UPDATE.
Client server applications that use interfaces such as ODBC typically use dynamic
SQL to access the database. For more information on developing client server
applications that use Client Access, see the Client Access for Windows 3.1 ODBC
User’s Guide.
Dynamic SQL statements are processed using the CCSID of the statement text.
This affects variant characters the most. For example, the not sign (¬) is located at
'BA'X in CCSID 500. This means that if the CCSID of your statement text is 500,
SQL expects the not sign (¬) to be located at 'BA'X.
If the statement text CCSID is 65535, SQL processes variant characters as if they
had a CCSID of 37. This means that SQL looks for the not sign (¬) at '5F'X.
The PREPARE statement prepares the non-SELECT statement (for example, the
DELETE statement) and gives it a name of your choosing. If DLYPRP (*YES) is
specified on the CRTSQLxxx command, the preparation is delayed until the first
time the statement is used in an EXECUTE or DESCRIBE statement, unless the
USING clause is specified on the PREPARE statement. In this instance, let us call it
S1. After the statement has been prepared, it can be run many times within the
same program, using different values for the parameter markers. The following
example is of a prepared statement being run multiple times:
DSTRING = 'DELETE FROM CORPDATA.EMPLOYEE WHERE EMPNO = ?';
EXEC SQL
  PREPARE S1 FROM :DSTRING;
DO WHILE (EMP ¬= 0);
  /* set the host variable EMP to the next employee number */
  EXEC SQL
    EXECUTE S1 USING :EMP;
END;
In routines similar to the example above, you must know the number of parameter
markers and their data types, because the host variables that provide the input data
are declared when the program is being written.
Note: All prepared statements that are associated with an application server are
destroyed whenever the connection to the application server ends.
Connections are ended by a CONNECT (Type 1) statement, a
DISCONNECT statement, or a RELEASE followed by a successful COMMIT.
You can use fixed-list dynamic SELECT statements with any SQL-supported
application program.
For example:
MOVE 'SELECT EMPNO, LASTNAME FROM CORPDATA.EMPLOYEE WHERE EMPNO>?'
TO DSTRING.
EXEC SQL
PREPARE S2 FROM :DSTRING END-EXEC.
EXEC SQL
DECLARE C2 CURSOR FOR S2 END-EXEC.
EXEC SQL
OPEN C2 USING :EMP END-EXEC.
PERFORM FETCH-ROW UNTIL SQLCODE NOT = 0.
EXEC SQL
CLOSE C2 END-EXEC.
STOP-RUN.
FETCH-ROW.
EXEC SQL
FETCH C2 INTO :EMP, :EMPNAME END-EXEC.
Note: Remember that because the SELECT statement, in this case, always returns
the same number and type of data items as previously run fixed-list SELECT
statements, you do not have to use the SQL descriptor area (SQLDA).
Varying-List Select-Statements
In dynamic SQL, varying-list SELECT statements are ones for which the number
and format of result columns to be returned are not predictable; that is, you do not
know how many variables you need, or what the data types are. Therefore, you
cannot define host variables in advance to accommodate the result columns
returned.
Note: In REXX, steps 5.b on page 203, 6 on page 203, and 7 on page 203 are not
applicable.
If your application accepts varying-list SELECT statements, your program has to:
1. Place the input SQL statement into a host variable.
2. Issue a PREPARE statement to validate the dynamic SQL statement and put it
into a form that can be run. If DLYPRP (*YES) is specified on the CRTSQLxxx
command, the preparation is delayed until the first time the statement is used
in an EXECUTE or DESCRIBE statement, unless the USING clause is
specified on the PREPARE statement.
3. Declare a cursor for the statement name.
4. Open the cursor (declared in step 3) that includes the name of the dynamic
SELECT statement.
5. Issue a DESCRIBE statement to request information from SQL about the type
and size of each column of the result table.
Notes:
a. You can also code the PREPARE statement with an INTO clause to
perform the functions of PREPARE and DESCRIBE with a single
statement.
The SQLDA is a collection of variables required for running the DESCRIBE and
DESCRIBE TABLE statements. It also can be used on the PREPARE, OPEN,
FETCH, CALL, and EXECUTE statements. An SQLDA is used with dynamic SQL. It
can be used in a DESCRIBE statement, changed with the addresses of host
variables, and then reused in a FETCH statement.
The meaning of the information in an SQLDA depends on its use. In PREPARE and
DESCRIBE, an SQLDA provides information to an application program about a
prepared statement. In DESCRIBE TABLE, the SQLDA provides information to an
application program about the columns in a table or view. In OPEN, EXECUTE,
CALL, and FETCH, an SQLDA provides information about host variables.
If your application lets you have several cursors open at the same time, you can
code several SQLDAs, one for each dynamic SELECT statement. For more
information on SQLDA and SQLCA, see the DB2 UDB for AS/400 SQL Reference
book.
SQLDAs can be used in C, COBOL, PL/I, REXX, and RPG. Because RPG for
AS/400 does not provide a way to set pointers, pointers must be set outside the
RPG for AS/400 program by a PL/I, C, COBOL, or ILE RPG for AS/400 program.
Since the area used must be declared by the PL/I, C, COBOL, or ILE RPG for
AS/400 program, that program must call the RPG for AS/400 program.
Note: The SQLDA in REXX is different. For more information, see Chapter 17.
Coding SQL Statements in REXX Applications.
When an SQLDA is used in OPEN, FETCH, CALL, and EXECUTE, each
occurrence of SQLVAR describes a host variable.
The variables of SQLDA are as follows (variable names are in lowercase for C):
SQLDAID
SQLDAID is used for storage dumps. Byte 7 of SQLDAID is used to
indicate whether extension SQLVARs are used for LOBs or UDTs. It is a
string of 8 characters that has the value 'SQLDA' after the SQLDA is
used in a PREPARE or DESCRIBE statement. It is not used for FETCH,
OPEN, CALL, or EXECUTE.
| Byte 7 can be used to determine whether more than one SQLVAR entry is
| needed for each column. This flag is set to a blank if there are no LOBs
| or distinct types.
SQLDAID is not applicable in REXX.
SQLDABC
SQLDABC indicates the length of the SQLDA. It is a 4-byte integer that has
the value SQLN*LENGTH(SQLVAR) + 16 after the SQLDA is used in a
PREPARE or DESCRIBE statement. SQLDABC must have a value equal to
or greater than SQLN*LENGTH(SQLVAR) + 16 prior to use by FETCH,
OPEN, CALL, or EXECUTE.
SQLDABC is not applicable in REXX.
SQLN SQLN is a 2-byte integer that specifies the total number of occurrences of
SQLVAR. It must be set prior to use by any SQL statement to a value
greater than or equal to 0.
SQLN is not applicable in REXX.
SQLD SQLD is a 2-byte integer that specifies the pertinent number of occurrences
of SQLVAR; that is, the number of host variables described by the SQLDA.
This field is set by SQL on a DESCRIBE or PREPARE statement. In other
statements, this field must be set prior to use to a value greater than or
equal to 0 and less than or equal to SQLN.
SQLVAR
The variables of SQLVAR are SQLTYPE, SQLLEN, SQLRES, SQLDATA,
SQLIND, and SQLNAME. These variables are set by SQL on a DESCRIBE
or PREPARE statement. In other statements, they must be set prior to use.
These variables are defined as follows:
SQLTYPE
SQLTYPE is a 2-byte integer that specifies the data type of the host
variable as shown in the table below. Odd values for SQLTYPE show that
the host variable has an associated indicator variable addressed by
SQLIND.
SQLLEN
SQLLEN is a 2-byte integer variable that specifies the length attributes of
the host variables shown in Figure 10-2.
SQLRES
SQLRES is a 12-byte reserved area for boundary alignment purposes. Note
that, in OS/400, pointers must be on a quad-word boundary.
SQLRES is not applicable in REXX.
SQLDATA
SQLDATA is a 16-byte pointer variable that specifies the address of the
host variables when the SQLDA is used on OPEN, FETCH, CALL, and
EXECUTE.
When the SQLDA is used on PREPARE and DESCRIBE, this area is
overlaid with the following information:
The CCSID of a character, date, time, timestamp, and graphic field is stored
in the third and fourth bytes of SQLDATA. For BIT data, the CCSID is
65535. In REXX, the CCSID is returned in the variable SQLCCSID.
SQLIND
SQLIND is a 16-byte pointer that specifies the address of a small integer
host variable that is used as an indication of null or not null when the
SQLDA is used on OPEN, FETCH, CALL, and EXECUTE. A negative value
indicates null and a non-negative indicates not null. This pointer is only
used if SQLTYPE contains an odd value.
When the SQLDA is used on PREPARE and DESCRIBE, this area is
reserved for future use.
SQLNAME
SQLNAME is a variable-length character variable with a maximum length of
30, which contains the name of selected column, label, or system column
name after a PREPARE or DESCRIBE. In OPEN, FETCH, EXECUTE, or
CALL, it can be used to pass the CCSID of character strings. CCSIDs can
be passed for character, graphic, date, time, and timestamp host variables.
The SQLNAME field in an SQLVAR array entry of an input SQLDA can be
set to specify the CCSID:
5. Binary numbers can be represented in the SQLDA as either lengths 2 or 4, or with the precision in byte 1 and the scale in byte 2.
If the first byte is greater than X’00’, it indicates precision and scale.
6. The DataLink datatype is only returned on DESCRIBE TABLE.
7. The len.sqllonglen field in the secondary SQLVAR contains the length attribute of the column.
The default for graphic host variables is the associated double-byte CCSID
for the job CCSID. If an associated double-byte CCSID does not exist,
65535 is used.
| SQLVAR2
| The Extended SQLVAR structure. Extended SQLVARs are only needed (for
| all columns of the result) if the result includes any LOB or distinct type
| columns. For distinct types, they contain the distinct type name. For LOBs,
| they contain the length attribute of the host variable and a pointer to the
| buffer that contains the actual length. If locators are used to represent
| LOBs, these entries are not necessary. The number of Extended SQLVAR
| occurrences needed depends on the statement that the SQLDA was
| provided for and the data types of the columns or parameters being
| described. Byte 7 of SQLDAID is always set to the number of sets of
| SQLVARs necessary.
| If SQLN is not set to a sufficient number of SQLVAR occurrences:
| v SQLD is set to the total number of SQLVAR occurrences needed for all
| sets.
| v A +237 warning is returned in the SQLCODE field of the SQLCA if at
| least enough were specified for the Base SQLVAR Entries. The Base
| SQLVAR entries are returned, but no Extended SQLVARs are returned.
| v A +239 warning is returned in the SQLCODE field of the SQLCA if not
| enough SQLVARs were specified even for the Base SQLVAR entries.
| No SQLVAR entries are returned.
| SQLLONGLEN
| SQLLONGLEN is part of the Extended SQLVAR. It is a 4-byte integer
| variable that specifies the length attributes of a LOB (BLOB, CLOB, or
| DBCLOB) host variable.
| SQLDATALEN
| SQLDATALEN is part of the Extended SQLVAR. It is a 16-byte pointer
| variable that specifies the address of the length of the host variable. It is
| used for LOB (BLOB, CLOB, and DBCLOB) host variables only. If this field
| is NULL, then the actual length is stored in the 4 bytes immediately before
| the start of the data, and SQLDATA points to the first byte of the field
| length. The actual length indicates the number of bytes for a BLOB or
| CLOB, and the number of characters for a DBCLOB.
| If this field is not NULL, it contains a pointer to a 4-byte long buffer that
| contains the actual length in bytes (even for DBCLOB) of the data in the
| buffer pointed to from the SQLDATA field in the matching base SQLVAR.
Note: The SELECT statement has no INTO clause. Dynamic SELECT statements
must not have an INTO clause, even if they return only one row.
When the statement is read, it is assigned to a host variable. The host variable (for
example, named DSTRING) is then processed, using the PREPARE statement, as
shown:
EXEC SQL
PREPARE S1 FROM :DSTRING;
Allocating Storage
You can allocate storage for the SQLDA. (Allocating storage is not necessary in
REXX.) The techniques for acquiring storage are language dependent. The SQLDA
must be allocated on a 16-byte boundary. The SQLDA consists of a fixed-length
header, 16 bytes long. The header is followed by a varying-length array section
(SQLVAR), each element of which is 80 bytes in length. The amount of storage you
need to allocate depends on how many elements you want to have in the SQLVAR
array. Each column you select must have a corresponding SQLVAR array element.
Therefore, the number of columns listed in your SELECT statement determines how
many SQLVAR array elements you should allocate. Because SELECT statements
are specified at run time, however, it is impossible to know how many columns will
be accessed. Consequently, you must estimate the number of columns. Suppose, in
this example, that no more than 20 columns are ever expected to be accessed by a
single SELECT statement. This means that the SQLVAR array should have a
dimension of 20 (for an SQLDA size 20 x 80, or 1600, plus 16 for a total of 1616
bytes), because each item in the select-list must have a corresponding entry in
SQLVAR.
Having allocated what you estimated to be enough space for your SQLDA,
set the SQLN field of the SQLDA to an initial value equal to the number
of SQLVAR array elements. In the following example, set SQLN to 20:
Allocate space for an SQLDA of 1616 bytes on a quadword boundary
SQLN = 20;
Note: In PL/I the ALLOCATE statement is the only way to ensure the allocation of
a quadword boundary.
When the DESCRIBE statement is run, SQL places values in the SQLDA that
provide information about the select-list. The following Figure 9 shows the contents
of the SQLDA after the DESCRIBE is run:
Figure 9 (RV3W188-0). SQLDA contents after the DESCRIBE: SQLVAR element 1
(80 bytes) holds SQLTYPE 453, SQLLEN 3, CCSID 37 in SQLDATA, and SQLNAME
with length 8 and value WORKDEPT; SQLVAR element 2 holds SQLTYPE 453,
SQLLEN 4, CCSID 37, and SQLNAME with length 7 and value PHONENO.
Your program might have to alter the SQLN value if the SQLDA is not large enough
to contain the described SQLVAR elements. For example, suppose the SELECT
statement contains 27 select-list expressions instead of the 20 or fewer
that you estimated. Because the SQLDA was allocated with an SQLVAR
dimension of only 20 elements, SQL cannot describe the select-list: it
contains more items than there are SQLVAR elements. SQL sets SQLD to the
actual number of columns specified by
the SELECT statement, and the remainder of the structure is ignored. Therefore,
after a DESCRIBE, you should compare the SQLN to the SQLD. If the value of
SQLD is greater than the value of SQLN, allocate a larger SQLDA based on the
value in SQLD, as follows:
EXEC SQL
  DESCRIBE S1 INTO :SQLDA;
IF SQLN < SQLD THEN
  DO;
    /* allocate a larger SQLDA using the value of SQLD,
       and set SQLN to the new number of SQLVAR elements */
    EXEC SQL
      DESCRIBE S1 INTO :SQLDA;
  END;
Your program must now analyze the elements of SQLVAR. Remember that each
element describes a single select-list expression. Consider again the SELECT
statement that is being processed:
SELECT WORKDEPT, PHONENO
FROM CORPDATA.EMPLOYEE
WHERE LASTNAME = 'PARKER'
The first item in the select-list is WORKDEPT. At the beginning of this section, we
identified that each SQLVAR element contains the fields SQLTYPE, SQLLEN,
SQLRES, SQLDATA, SQLIND, and SQLNAME. SQL returns, in the SQLTYPE field,
a code that describes the data type of the expressions and whether nulls are
applicable or not.
For example, SQL sets SQLTYPE to 453 in SQLVAR element 1 (see Figure 9 on
page 209). This specifies that WORKDEPT is a fixed-length character string
(CHAR) column and that nulls are permitted in the column.
SQL sets SQLLEN to the length of the column. Because the data type of
WORKDEPT is CHAR, SQL sets SQLLEN equal to the length of the character
string. For WORKDEPT, that length is 3. Therefore, when the SELECT statement is
later run, a storage area large enough to hold a CHAR(3) string is needed.
Because the data type of WORKDEPT is CHAR FOR SBCS DATA, the first 4 bytes
of SQLDATA were set to the CCSID of the character column (see Figure 9 on
page 209). The last field in an SQLVAR element is a varying-length character string
called SQLNAME. The first 2 bytes of SQLNAME contain the length of the
character data. The character data itself is usually the name of a column used in
the SELECT statement (WORKDEPT in the above example.) The exceptions to this
are select-list items that are unnamed, such as functions (for example,
SUM(SALARY)), expressions (for example, A+B−C), and constants. In these cases,
SQLNAME is an empty string. SQLNAME can also contain a label rather than a
name. One of the parameters associated with the PREPARE and DESCRIBE
statements is the USING clause. You can specify it this way:
EXEC SQL
DESCRIBE S1 INTO :SQLDA
USING LABELS;
If you specify NAMES (or omit the USING parameter entirely), only column names
are placed in the SQLNAME field. If you specify SYSTEM NAMES, only the system
column names are placed in the SQLNAME field. If you specify LABELS, only
labels associated with the columns listed in your SQL statement are entered here. If
you specify ANY, labels are placed in the SQLNAME field for those columns that
have labels; otherwise, the column names are entered. If you specify BOTH, names
and labels are both placed in the field with their corresponding lengths. If you
specify BOTH, however, you must remember to double the size of the SQLVAR
array because you are including twice the number of elements. If you specify ALL,
column names, labels, and system column names are placed in the field with their
corresponding lengths. If you specify ALL, remember to triple the size of the
SQLVAR array.
In the example, the second SQLVAR element contains the information for the
second column used in the select: PHONENO. The 453 code in SQLTYPE specifies
that PHONENO is a CHAR column. For a CHAR data type of length 4, SQL sets
SQLLEN to 4.
After analyzing the result of the DESCRIBE, you can allocate storage for variables
containing the result of the SELECT statement. For WORKDEPT, a character field
of length 3 must be allocated; for PHONENO, a character field of length 4 must be
allocated.
After the storage is allocated, you must set SQLDATA and SQLIND to point to the
appropriate areas. For each element of the SQLVAR array, SQLDATA points to the
place where the results are to be put. SQLIND points to the place where the null
indicator is to be put. The following figure shows what the structure looks like now:
Figure (RV3W189-0). The SQLDA after storage is allocated: in SQLVAR
element 1, SQLDATA holds the address of FLDA and SQLIND holds the address
of FLDAI; element 2 similarly addresses FLDB, the CHAR(4) area for
PHONENO.
As you can see, the only difference is that the name of the prepared SELECT
statement (S1) is used instead of the SELECT statement itself. The actual retrieval
of result rows is made as follows:
EXEC SQL
OPEN C1;
EXEC SQL
FETCH C1 USING DESCRIPTOR :SQLDA;
DO WHILE (SQLCODE = 0);
/*Display ... the results pointed to by SQLDATA*/
END;
/*Display ('END OF LIST')*/
EXEC SQL
CLOSE C1;
The cursor is opened, and the result table is evaluated. Notice that there are no
input host variables needed for the example SELECT statement. The SELECT
result rows are then returned using FETCH. On the FETCH statement, there is no
list of output host variables. Rather, the FETCH statement tells SQL to return results
into areas described by the descriptor called SQLDA. The same SQLDA that was
set up by DESCRIBE is now being used for the output of the SELECT statement. In
particular, the results are returned into the storage areas pointed to by the
SQLDATA and SQLIND fields of the SQLVAR elements. The following figure shows
what the structure looks like after the FETCH statement has been processed.
Figure (RV3W190-0). After the FETCH, the storage areas addressed by the
SQLDATA fields contain the fetched column values; FLDB, for example, now
contains 4502.
The meaning of the SMALLINT pointed to by SQLIND is the same as for any
other indicator variable.
Note: Unless HOLD is specified, dynamic cursors are closed during COMMIT or
ROLLBACK.
If you want to run the same SELECT statement several times, using different values
for LASTNAME, you can use an SQL statement such as PREPARE or EXECUTE
(as described in “Using the PREPARE and EXECUTE Statements” on page 200)
like this:
SELECT WORKDEPT, PHONENO FROM CORPDATA.EMPLOYEE WHERE LASTNAME = ?
When your parameters are not predictable, your application cannot know the
number or types of the parameters until run time. You can arrange to receive this
information at the time your application is run, and by using a USING
DESCRIPTOR on the OPEN statement, you can substitute the values contained in
specific host variables for the parameter markers included in the WHERE clause of
the SELECT statement.
To code such a program, you need to use the OPEN statement with the USING
DESCRIPTOR clause. This SQL statement is used to not only open a cursor, but to
replace each parameter marker with the value of the corresponding host variable.
The descriptor name that you specify with this statement must identify an SQLDA
that contains a valid description of those host variables. This SQLDA, unlike those
previously described, is not used to return information on data items that are part of
a SELECT list. That is, it is not used as output from a DESCRIBE statement, but as
input to the OPEN statement. It provides information on host variables that are used
to replace parameter markers in the WHERE clause of the SELECT statement. It
gets this information from the application, which must be designed to place
appropriate values into the necessary fields of the SQLDA. The SQLDA is then
ready to be used as a source of information for SQL in the process of replacing
parameter markers with host variable data.
When you use the SQLDA for input to the OPEN statement with the USING
DESCRIPTOR clause, not all of its fields have to be filled in. Specifically, SQLDAID,
SQLRES, and SQLNAME can be left blank (SQLNAME (SQLCCSID in REXX) can
be set if a specific CCSID is needed.) Therefore, when you use this method for
replacing parameter markers with host variable values, you need to determine:
v How many ? parameter markers are there?
v What are the data types and attributes of these parameters markers (SQLTYPE,
SQLLEN, and SQLNAME)?
v Do you want an indicator variable?
A host structure is a group of host variables used as the source or target for a set
of selected values (for example, the set of values for the columns of a row). A host
structure array is an array of host structures used in the multiple-row FETCH and
blocked INSERT statements.
Note: By using a host variable instead of a literal value in an SQL statement, you
give the application program the flexibility it needs to process different rows
in a table or view.
In this example, the host variable CBLEMPNO receives the value from EMPNO,
CBLNAME receives the value from LASTNAME, and CBLDEPT receives the
value from WORKDEPT.
3. As a value in a SELECT clause: When specifying a list of items in the
SELECT clause, you are not restricted to the column names of tables and
views. Your program can return a set of column values intermixed with host
variable values and literal constants. For example:
MOVE '000220' TO PERSON.
EXEC SQL
SELECT "A", LASTNAME, SALARY, :RAISE,
SALARY + :RAISE
INTO :PROCESS, :PERSON-NAME, :EMP-SAL,
:EMP-RAISE, :EMP-TTL
FROM CORPDATA.EMPLOYEE
WHERE EMPNO = :PERSON
END-EXEC.
For more information on these statements, see the DB2 UDB for AS/400 SQL
Reference book.
Assignment Rules
SQL column values are set to (or assigned to) host variables during the running of
FETCH and SELECT INTO statements. SQL column values are set from (or
assigned from) host variables during the running of INSERT, UPDATE, and CALL
statements. All assignment operations observe the following rules:
v Numbers and strings are not compatible:
Numbers cannot be assigned to string columns or string host variables.
Strings cannot be assigned to numeric columns or numeric host variables.
v All character and DBCS graphic strings are compatible with UCS-2 graphic
columns for assignment operations if conversion is supported between the
CCSIDs. All graphic strings are compatible if the CCSIDs are compatible.
All numeric values are compatible. Conversions are performed by SQL
whenever necessary. For the CALL statement, character and DBCS graphic
parameters are compatible with UCS-2 parameters if conversion is supported.
v A null value cannot be assigned to a host variable that does not have an
associated indicator variable.
8. A DBCS-open or DBCS-either column or variable is a variable that was declared in the host language by including the definition of
an externally described file. DBCS-open variables are also declared if the job CCSID indicates MIXED data, or the DECLARE
VARIABLE statement is used and a MIXED CCSID or the FOR MIXED DATA clause is specified. See DECLARE VARIABLE in the
DB2 UDB for AS/400 SQL Reference book.
Chapter 11. Common Concepts and Rules for Using SQL with Host Languages 217
v If the sub-type for the source or target is BIT, the value is assigned without
conversion.
v If the value is either null or an empty string, the value is assigned without
conversion.
v If conversion is not defined between specific CCSIDs, the value is not assigned
and an error message is issued.
v If conversion is defined and needed, the source value is converted to the CCSID
of the target before the assignment is performed.
For more information on CCSIDs, see the International Application Development
book.
Indicator Variables
An indicator variable is a halfword integer variable used to indicate whether its
associated host variable has been assigned a null value:
v If the value for the result column is null, SQL puts a -1 in the indicator variable.
v If you do not use an indicator variable and the result column is a null value, a
negative SQLCODE is returned.
v If the value for the result column causes a data mapping error, SQL sets
the indicator variable to −2.
You can also use an indicator variable to verify that a retrieved string value has not
been truncated. If truncation occurs, the indicator variable contains a positive
integer that specifies the original length of the string.
When the database manager returns a value from a result column, you can test the
indicator variable. If the value of the indicator variable is less than zero, you know
the value of the results column is null. When the database manager returns a null
value, the host variable will be set to the default value for the result column.
You specify an indicator variable (preceded by a colon) immediately after the host
variable or immediately after the keyword INDICATOR. For example:
EXEC SQL
SELECT COUNT(*), AVG(SALARY)
INTO :PLICNT, :PLISAL:INDNULL
FROM CORPDATA.EMPLOYEE
WHERE EDLEVEL < 18
END-EXEC.
You can then test INDNULL to see if it contains a negative value. If it does, you
know SQL returned a null value.
Always test for NULL in a column by using the IS NULL predicate. For example:
WHERE expression IS NULL
An EQUAL predicate never evaluates to true when one of its operands is a
null value, so such a comparison selects no rows.
In the above example, SQL selects the column values of the row into a host
structure. Therefore, you must use a corresponding structure for the indicator
variables to determine which (if any) selected column values are null.
For example, you can specify that a value be put in a column (using an INSERT or
UPDATE statement), but you may not be sure that the value was specified with the
input data. To provide the capability to set a column to a null value, you can write
the following statement:
When NEWPHONE contains a value other than null, set PHONEIND to zero by
preceding the statement with:
MOVE 0 TO PHONEIND.
Otherwise, to tell SQL that NEWPHONE contains a null value, set PHONEIND to a
negative value, as follows:
MOVE -1 TO PHONEIND.
Note: There are situations when a zero SQLCODE is returned to your program and
the result might not be satisfactory. For example, if a value was truncated as
a result of running your program, the SQLCODE returned to your program is
zero. However, one of the SQL warning flags (SQLWARN1) indicates
truncation. In this case, the SQLSTATE is not '00000'.
The main purpose for SQLSTATE is to provide common return codes for common
return conditions among the different IBM relational database systems. SQLSTATEs
are particularly useful when handling problems with distributed database operations.
For more information, see the DB2 UDB for AS/400 SQL Reference book.
For more information about the SQLCA, see Appendix B, “SQL Communication
Area” in the DB2 UDB for AS/400 SQL Reference book. For a listing of DB2 UDB
for AS/400 SQLCODEs and SQLSTATEs, see Appendix B.
The WHENEVER statement allows you to specify what you want to do whenever a
general condition is true. You can specify more than one WHENEVER statement for
the same condition. When you do this, the first WHENEVER statement applies to all
subsequent SQL statements in the source program until another WHENEVER
statement is specified.
For example, if you are retrieving rows using a cursor, you expect that SQL will
eventually be unable to find another row when the FETCH statement is issued. To
prepare for this situation, specify a WHENEVER NOT FOUND GO TO ... statement
to cause SQL to branch to a place in the program where you issue a CLOSE
statement in order to close the cursor properly.
Note: A WHENEVER statement affects all subsequent source SQL statements until
another WHENEVER is encountered.
In other words, all SQL statements coded between two WHENEVER statements (or
following the first, if there is only one) are governed by the first WHENEVER
statement, regardless of the path the program takes.
Because of this, the WHENEVER statement must precede the first SQL statement it
is to affect. If the WHENEVER follows the SQL statement, the branch is not taken
on the basis of the value of the SQLCODE and SQLSTATE set by that SQL
statement. However, if your program checks the SQLCODE or SQLSTATE directly,
the check must be done after the SQL statement is run.
The WHENEVER statement does not provide a CALL to a subroutine option. For
this reason, you might want to examine the SQLCODE or SQLSTATE value after
each SQL statement is run and call a subroutine, rather than use a WHENEVER
statement.
224 DB2 UDB for AS/400 SQL Programming V4R4
Chapter 12. Coding SQL Statements in C and C++ Applications
This chapter describes the unique application and coding requirements for
embedding SQL statements in a C or C++ program. C program refers to ILE C for
AS/400 programs. C++ program refers to ILE C++ programs or programs that are
created with the VisualAge C++ for AS/400 compiler. This chapter also defines the
requirements for host structures and host variables.
Or,
v An SQLCA (which contains an SQLCODE and SQLSTATE variable).
The SQLCODE and SQLSTATE values are set by the database manager after each
SQL statement is executed. An application can check the SQLCODE or SQLSTATE
value to determine whether the last SQL statement was successful.
You can code the SQLCA in a C or C++ program either directly or by using the SQL
INCLUDE statement. Using the SQL INCLUDE statement requests the inclusion of
a standard declaration:
EXEC SQL INCLUDE SQLCA ;
A standard declaration includes both a structure definition and a static data area
named 'sqlca'.
The SQLCODE, SQLSTATE, and SQLCA variables must appear before any
executable statements. The scope of the declaration must include the scope of all
SQL statements in the program.
The included C and C++ source statements for the SQLCA are:
#ifndef SQLCODE
struct sqlca {
unsigned char sqlcaid[8];
long sqlcabc;
long sqlcode;
short sqlerrml;
unsigned char sqlerrmc[70];
unsigned char sqlerrp[8];
long sqlerrd[6];
unsigned char sqlwarn[11];
unsigned char sqlstate[5];
};
#define SQLCODE sqlca.sqlcode
#define SQLWARN0 sqlca.sqlwarn[0]
#define SQLWARN1 sqlca.sqlwarn[1]
#define SQLWARN2 sqlca.sqlwarn[2]
#define SQLWARN3 sqlca.sqlwarn[3]
When a declare for SQLCODE is found in the program and the precompiler
provides the SQLCA, SQLCADE replaces SQLCODE. When a declare for
SQLSTATE is found in the program and the precompiler provides the SQLCA,
SQLSTOTE replaces SQLSTATE.
Note: Many SQL error messages contain message data that is of varying length.
The lengths of these data fields are embedded in the value of the SQLCA
sqlerrmc field. Because of these lengths, printing the value of sqlerrmc from
a C or C++ program might give unpredictable results.
Unlike the SQLCA, more than one SQLDA can be in the program, and an SQLDA
can have any valid name. You can code an SQLDA in a C or C++ program either
directly or by using the SQL INCLUDE statement. Using the SQL INCLUDE
statement requests the inclusion of a standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA;
A standard declaration includes only a structure definition with the name ’sqlda’.
C and C++ declarations that are included for the SQLDA are:
#ifndef SQLDASIZE
struct sqlda {
unsigned char sqldaid[8];
long sqldabc;
short sqln;
short sqld;
struct sqlvar {
short sqltype;
short sqllen;
unsigned char *sqldata;
short *sqlind;
struct sqlname {
short length;
unsigned char data[30];
} sqlname;
} sqlvar[1];
};
#endif
One benefit from using the INCLUDE SQLDA SQL statement is that you also get
the following macro definition:
#define SQLDASIZE(n) (sizeof(struct sqlda) + (n-1)* sizeof(struct sqlvar))
This macro makes it easy to allocate storage for an SQLDA with a specified number
of SQLVAR elements. In the following example, the SQLDASIZE macro is used to
allocate storage for an SQLDA with 20 SQLVAR elements.
#include <stdlib.h>
EXEC SQL INCLUDE SQLDA;
When you have declared an SQLDA as a pointer, you must reference it exactly as
declared when you use it in an SQL statement, just as you would for a host variable
that was declared as a pointer. To avoid compiler errors, the value that is assigned
to the sqldata field of the SQLDA must be cast to a pointer to unsigned
character. The type casting is only necessary for
the EXECUTE, OPEN, CALL, and FETCH statements where the application
program is passing the address of the host variables in the program. For example,
if you declared a pointer to an SQLDA called mydaptr, you would use it in a
PREPARE statement as:
EXEC SQL PREPARE mysname INTO :*mydaptr FROM :mysqlstring;
For more information on the SQLDA, see the DB2 UDB for AS/400 SQL Reference
book.
Comments
In addition to using SQL comments (--), you can include C comments (/*...*/) within
embedded SQL statements whenever a blank is allowed, except between the
keywords EXEC and SQL. Comments can span any number of lines. You cannot
nest comments. You can use single-line comments (comments that start with //) in
C++, but you cannot use them in C.
Constants containing DBCS data may be continued across multiple lines in two
ways:
v If the character at the right margin of the continued line is a shift-in and the
character at the left margin of the continuation line is a shift-out, then the shift
characters located at the left and right margin are removed.
This SQL statement has a valid graphic constant of
G’<AABBCCDDEEFFGGHHIIJJKK>’. The redundant shifts at the margin are
removed.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....*....8
EXEC SQL SELECT * FROM GRAPHTAB WHERE GRAPHCOL = G'<AABBCCDDEEFFGGHH>
<IIJJKK>';
v If you are not using the default margins of 1 and 80, it is possible to place the
shift characters outside of the margins. For this example, assume the margins
are 5 and 75. This SQL statement has a valid graphic constant of
G’<AABBCCDDEEFFGGHHIIJJKK>’.
*...(....1....+....2....+....3....+....4....+....5....+....6....+....7....)....8
EXEC SQL SELECT * FROM GRAPHTAB WHERE GRAPHCOL = G'<AABBCCDD>
<EEFFGGHHIIJJKK>';
Including Code
You can include SQL statements, C statements, or C++ statements by embedding the
following SQL statement in the source code:
EXEC SQL INCLUDE member-name;
You cannot use C and C++ #include statements to include SQL statements or
declarations of C or C++ host variables that are referred to in SQL statements.
Names
You can use any valid C or C++ variable name for a host variable, subject to
the following restrictions:
Do not use host variable names or external entry names that begin with 'SQL',
'RDI', or 'DSN' in any combination of uppercase or lowercase letters. These names
are reserved for the database manager. The length of host variable names is limited
to 64 characters.
Statement Labels
Executable SQL statements can be preceded with a label.
Preprocessor Sequence
You must run the SQL preprocessor before the C or C++ preprocessor. You cannot
use C or C++ preprocessor directives within SQL statements.
Trigraphs
Some characters from the C and C++ character set are not available on all
keyboards. You can enter these characters into a C or C++ source program by
using a sequence of three characters that is called a trigraph. The following trigraph
sequences are supported within host variable declarations:
v ??( left bracket
v ??) right bracket
v ??< left brace
v ??> right brace
v ??= pound
v ??/ backslash
In C, the C statements that are used to define the host variables should be
preceded by a BEGIN DECLARE SECTION statement and followed by an END
DECLARE SECTION statement. If a BEGIN DECLARE SECTION and END
DECLARE SECTION are specified, all host variable declarations used in SQL
statements must be between the BEGIN DECLARE SECTION and the END
DECLARE SECTION statements.
In C++, the C++ statements that are used to define the host variables must be
preceded by a BEGIN DECLARE SECTION statement and followed by an END
DECLARE SECTION statement. You cannot use any variable that is not between
the BEGIN DECLARE SECTION statement and the END DECLARE SECTION
statement as a host variable.
All host variables within an SQL statement must be preceded by a colon (:).
The names of host variables must be unique within the program, even if the host
variables are in different blocks or procedures.
An SQL statement that uses a host variable must be within the scope of the
statement in which the variable was declared.
The general form of a numeric host variable declaration is:

[auto | extern | static] [const | volatile]
    {float | double | decimal(precision[,scale]) | long | [signed] short [int]}
    variable-name [= expression];
Notes:
1. Precision and scale must be integer constants. Precision may be in the range
from 1 to 31. Scale may be in the range from 0 to the precision.
2. If using the decimal data type, the header file decimal.h must be included.
Single-Character Form

[auto | extern | static] [const | volatile] [signed | unsigned] char
    variable-name [= expression];
(A dimension of [ 1 ] is allowed on variable-name.)
NUL-Terminated Character Form

[auto | extern | static] [const | volatile] [signed | unsigned] char
    variable-name[length] [= expression];
Notes:
1. The length must be an integer constant that is greater than 1 and not greater
than 32741.
2. If the *CNULRQD option is specified on the CRTSQLCI, CRTSQLCPPI, or
CVTSQLCPP command, the input host variables must contain the
NUL-terminator. Output host variables are padded with blanks, and the last
character is the NUL-terminator. If the output host variable is too small to
contain both the data and the NUL-terminator, the following actions are taken:
v The data is truncated
v The last character is the NUL-terminator
v SQLWARN1 is set to ’W’
3. If the *NOCNULRQD option is specified on the CRTSQLCI, CRTSQLCPPI, or
CVTSQLCPP command, the input variables do not need to contain the
NUL-terminator.
The following applies to output host variables.
v If the host variable is large enough to contain the data and the
NUL-terminator, then the following actions are taken:
– The data is returned, but the data is not padded with blanks
– The NUL-terminator immediately follows the data
v If the host variable is large enough to contain the data but not the
NUL-terminator, then the following actions are taken:
– The data is returned
– A NUL-terminator is not returned
– SQLWARN1 is set to ’N’
v If the host variable is not large enough to contain the data, the following
actions are taken:
– The data is truncated
– A NUL-terminator is not returned
– SQLWARN1 is set to ’W’
VARCHAR Structured Form

[auto | extern | static] [const | volatile] [_Packed] struct [tag] {
    short var-1;
    char var-2[length];
} variable-name [= {expression, expression}];
Notes:
1. length must be an integer constant that is greater than 0 and not greater than
32740.
2. var-1 and var-2 must be simple variable references and cannot be used
individually as integer and character host variables.
3. The struct tag can be used to define other data areas, but these cannot be used
as host variables.
4. The VARCHAR structured form should be used for bit data that may contain the
NULL character. The VARCHAR structured form will not be ended using the
nul-terminator.
Example:
EXEC SQL BEGIN DECLARE SECTION;
struct VARCHAR {
short len;
char s[10];
} vstring;
Single-Graphic Form

[auto | extern | static] [const | volatile] wchar_t
    variable-name [= expression];

NUL-Terminated Graphic Form

[auto | extern | static] [const | volatile] wchar_t
    variable-name[length] [= expression];
Notes:
1. length must be an integer constant that is greater than 1 and not greater than
16371.
2. If the *CNULRQD option is specified on the CRTSQLCI, CRTSQLCPPI, or
CVTSQLCPP command, then input host variables must contain the graphic
NUL-terminator (/0/0). Output host variables are padded with DBCS blanks, and
the last character is the graphic NUL-terminator. If the output host variable is too
small to contain both the data and the NUL-terminator, the following actions are
taken:
v The data is truncated
v The last character is the graphic NUL-terminator
v SQLWARN1 is set to ’W’
VARGRAPHIC Structured Form

[auto | extern | static] [const | volatile] [_Packed] struct [tag] {
    short var-1;
    wchar_t var-2[length];
} variable-name [= {expression, expression}];
Notes:
1. length must be an integer constant that is greater than 0 and not greater than
16370.
2. var-1 and var-2 must be simple variable references and cannot be used as host
variables.
3. The struct tag can be used to define other data areas, but these cannot be used
as host variables.
Example:
EXEC SQL BEGIN DECLARE SECTION;
struct VARGRAPH {
short len;
wchar_t s[10];
} vstring;
The general form of a LOB host variable declaration is:

SQL TYPE IS {BLOB | CLOB | DBCLOB} (lob-length [K | M])
    variable-name [= {init-len, "init-data"}
                   | = SQL_BLOB_INIT("init-data")
                   | = SQL_CLOB_INIT("init-data")
                   | = SQL_DBCLOB_INIT("init-data")];
Notes:
1. The SQL TYPE IS clause is needed in order to distinguish the three LOB types
from each other so that type checking and function resolution can be carried
out for LOB-type host variables that are passed to functions.
2. For BLOB and CLOB, 1 <= lob-length <= 15,728,640.
3. For DBCLOB, 1 <= lob-length <= 7,864,320.
4. SQL TYPE IS, BLOB, CLOB, DBCLOB, K, and M can be in mixed case.
5. The maximum length allowed for the initialization string is 32,766 bytes.
6. The initialization length, init-len, must be a numeric constant (that is, it cannot
include K, M, or G).
7. A length for the LOB must be specified; that is, the following declaration is not
permitted:
SQL TYPE IS BLOB my_blob;
8. If the LOB is not initialized within the declaration, no initialization is done
within the precompiler-generated code.
9. The precompiler generates a structure tag that can be used to cast to the
host variable's type.
10. LOB host variables can be declared in host structures and host structure
arrays.
11. Pointers to LOB host variables can be declared, with the same rules and
restrictions as for pointers to other host variable types.
12. CCSID processing for LOB host variables is the same as the processing
for other character and graphic host variable types.
13. If a DBCLOB is initialized, it is the user's responsibility to prefix the string with
an 'L' (indicating a wide-character string).
BLOB Example
CLOB Example
DBCLOB Example

LOB Locators

The general form of a LOB locator declaration is:

SQL TYPE IS {BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR}
    variable-name [= init-value];
Notes:
1. SQL TYPE IS, BLOB_LOCATOR, CLOB_LOCATOR, and DBCLOB_LOCATOR can
be in mixed case.
2. init-value permits the initialization of pointer locator variables. Other types of
initialization have no meaning.
3. LOB locators can be declared in host structures and host structure arrays.
LOB File Reference Variables

The general form of a LOB file reference variable declaration is:

SQL TYPE IS {BLOB_FILE | CLOB_FILE | DBCLOB_FILE}
    variable-name [= init-value];
Notes:
1. SQL TYPE IS, BLOB_FILE, CLOB_FILE, and DBCLOB_FILE can be in mixed case.
2. LOB file reference variables can be declared as part of a host structure.
3. Pointers to LOB file reference variables can be declared, with the same rules
and restrictions as for pointers to other host variable types.
The precompiler generates declarations for the following file option constants:
v SQL_FILE_READ (2)
v SQL_FILE_CREATE (8)
v SQL_FILE_OVERWRITE (16)
v SQL_FILE_APPEND (32)
A host structure name can be a group name whose subordinate levels name
elementary C or C++ variables. For example:
struct {
struct {
char c1;
char c2;
} b_st;
} a_st;
In this example, b_st is the name of a host structure consisting of the elementary
items c1 and c2.
You can use the structure name as shorthand for the list of scalars it contains, but
only for a two-level structure. You can qualify a host variable with a structure name
(for example, structure.field). Host structures are limited to two levels; a structure
cannot contain an intermediate-level structure. In the example above, a_st cannot
be used as a host variable or referred to in an SQL statement. A host structure for
SQL data has two levels and can be thought of as a named set of host variables.
After the host structure is defined, you can refer to it in an SQL statement instead
of listing the several host variables (that is, the names of the host variables that
make up the host structure).
For example, you can retrieve all column values from selected rows of the table
CORPDATA.EMPLOYEE with:
struct { char empno[7];
struct { short int firstname_len;
char firstname_text[12];
} firstname;
char midinit;
struct { short int lastname_len;
char lastname_text[15];
} lastname;
char workdept[4];
} pemp1;
.....
strcpy(pemp1.empno, "000220");
.....
exec sql
select *
into :pemp1
from corpdata.employee
where empno=:pemp1.empno;
Notice that in the declaration of pemp1, two varying-length string elements are
included in the structure: firstname and lastname.
Host Structures

The general form of a host structure declaration is:

[auto | extern | static] [const | volatile] [_Packed] struct [tag] {
    float | double | decimal(precision[,scale]) | long | [signed] short [int] var-1;
    char [signed | unsigned] var-2[length];
    wchar_t var-5[length];
    varchar-structure;
    vargraphic-structure;
} variable-name [= expression];

(varchar-structure and vargraphic-structure are the VARCHAR and VARGRAPHIC
structured forms shown earlier.)
Host Structure Indicator Array

[auto | extern | static] [const | volatile] [signed] short [int]
    variable-name[dimension] [= expression];
For example:
struct {
_Packed struct{
char c1_var[20];
short c2_var;
} b_array[10];
} a_struct;
Host Structure Array

The general form of a host structure array declaration is:

[auto | extern | static] [const | volatile] _Packed struct [tag] {
    float | double | decimal(precision[,scale]) | long | [signed] short [int] var-1;
    char [signed | unsigned] var-2[length];
    wchar_t var-5[length];
    varchar-structure;
    vargraphic-structure;
} variable-name[dimension] [= expression];
Notes:
1. For details on declaring numeric, character, and graphic host variables, see the
notes under numeric-host variables, character-host, and graphic-host variables.
2. The struct tag can be used to define other data areas, but these cannot be used
as host variables.
The general form of a host structure array indicator declaration is:

[auto | extern | static] [const | volatile] [_Packed] struct [tag] {
    short var-1[dimension-1];
} variable-name[dimension-2] [= expression];
Notes:
1. The struct tag can be used to define other data areas, but they cannot be used
as host variables.
2. dimension-1 and dimension-2 must both be integer constants between 1 and
32767.
v When a host variable is referenced within an SQL statement, that host variable
must be referenced exactly as declared, with the exception of pointers to
NUL-terminated character arrays. For example, the following declaration requires
parentheses:
char (*mychara)[20]; /* ptr to char array of 20 bytes */
However, the parentheses are not allowed when the host variable is referenced
in an SQL statement, such as a SELECT:
EXEC SQL SELECT name INTO :*mychara FROM mytable;
v Only the asterisk can be used as an operator over a host variable name.
v The maximum length of a host variable name is affected by the number of
asterisks specified, as these asterisks are considered part of the name.
v Pointers to structures are not usable as host variables except for variable
character structures. Also, pointer fields in structures are not usable as host
variables.
v SQL requires that all specified storage for based host variables be allocated. If
the storage is not allocated, unpredictable results can occur.
Note: DATE, TIME, and TIMESTAMP columns generate character host variable
definitions. They are treated by SQL with the same comparison and
assignment rules as a DATE, TIME, or TIMESTAMP column. For example,
a date host variable can only be compared with a DATE column or a
character string that is a valid representation of a date.
Although zoned, binary (with nonzero scale), and, optionally, decimal fields
are mapped to character fields in ILE C for AS/400, SQL treats these
fields as numeric. By using the extended program model (EPM) routines, you
can manipulate these fields to convert zoned and packed decimal data. For
more information, see the ILE C for AS/400 Language Reference book.
You can use the following table to determine the C or C++ data type that is
equivalent to a given SQL data type.
Table 25. SQL Data Types Mapped to Typical C or C++ Declarations

SQL Data Type              C or C++ Data Type             Notes
SMALLINT                   short int
INTEGER                    long int
DECIMAL(p,s)               decimal(p,s)                   p is a positive integer from 1 to 31, and s is a
                                                          positive integer from 0 to 31.
NUMERIC(p,s) or nonzero    No exact equivalent            Use decimal(p,s).
scale binary
FLOAT (single precision)   float
FLOAT (double precision)   double
CHAR(1)                    single-character form
CHAR(n)                    No exact equivalent            If n>1, use NUL-terminated character form.
VARCHAR(n)                 NUL-terminated character form  If data can contain character NULs (\0), use
                                                          VARCHAR structured form. Allow at least n+1
                                                          to accommodate the NUL-terminator.
See DB2 UDB for AS/400 SQL Reference book for more information on the use of
indicator variables.
Indicator variables are declared in the same way as host variables. The
declarations of the two can be mixed in any way that seems appropriate to you.
Example:
A detailed sample COBOL program, showing how SQL statements can be used, is
provided in Appendix C. Sample Programs Using DB2 UDB for AS/400 Statements.
Or,
v An SQLCA (which contains an SQLCODE and SQLSTATE variable).
The SQLCODE and SQLSTATE values are set by the database manager after each
SQL statement is executed. An application can check the SQLCODE or SQLSTATE
value to determine whether the last SQL statement was successful.
The SQLCA can be coded in a COBOL program either directly or by using the SQL
INCLUDE statement. Using the SQL INCLUDE statement requests the inclusion of
a standard declaration:
EXEC SQL INCLUDE SQLCA END-EXEC.
The SQLCODE, SQLSTATE, and SQLCA variable declarations must appear in the
WORKING-STORAGE SECTION or LINKAGE SECTION of your program and can
be placed wherever a record description entry can be specified in those sections.
When you use the INCLUDE statement, the SQL COBOL precompiler includes
COBOL source statements for the SQLCA:
01 SQLCA.
05 SQLCAID PIC X(8).
05 SQLCABC PIC S9(9) BINARY.
05 SQLCODE PIC S9(9) BINARY.
05 SQLERRM.
49 SQLERRML PIC S9(4) BINARY.
49 SQLERRMC PIC X(70).
05 SQLERRP PIC X(8).
05 SQLERRD OCCURS 6 TIMES
PIC S9(9) BINARY.
05 SQLWARN.
10 SQLWARN0 PIC X.
10 SQLWARN1 PIC X.
10 SQLWARN2 PIC X.
10 SQLWARN3 PIC X.
10 SQLWARN4 PIC X.
10 SQLWARN5 PIC X.
10 SQLWARN6 PIC X.
10 SQLWARN7 PIC X.
For ILE COBOL for AS/400, the SQLCA is declared using the GLOBAL clause.
SQLCODE is replaced with SQLCADE when a declare for SQLCODE is found in
the program and the SQLCA is provided by the precompiler. SQLSTATE is replaced
with SQLSTOTE when a declare for SQLSTATE is found in the program and the
SQLCA is provided by the precompiler.
Unlike the SQLCA, there can be more than one SQLDA in a program. The SQLDA
can have any valid name. An SQLDA can be coded in a COBOL program directly or
added with the INCLUDE statement. Using the SQL INCLUDE statement requests
the inclusion of a standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA END-EXEC.
01 SQLDA.
05 SQLDAID PIC X(8).
05 SQLDABC PIC S9(9) BINARY.
05 SQLN PIC S9(4) BINARY.
05 SQLD PIC S9(4) BINARY.
05 SQLVAR OCCURS 0 TO 409 TIMES DEPENDING ON SQLD.
10 SQLTYPE PIC S9(4) BINARY.
10 SQLLEN PIC S9(4) BINARY.
10 FILLER REDEFINES SQLLEN.
15 SQLPRECISION PIC X.
15 SQLSCALE PIC X.
10 SQLRES PIC X(12).
10 SQLDATA POINTER.
10 SQLIND POINTER.
10 SQLNAME.
49 SQLNAMEL PIC S9(4) BINARY.
49 SQLNAMEC PIC X(30).
For more information, refer to the DB2 UDB for AS/400 SQL Reference book.
Each SQL statement in a COBOL program must begin with EXEC SQL and end
with END-EXEC. If the SQL statement appears between two COBOL statements,
the period is optional and might not be appropriate. The EXEC SQL keywords must
appear all on one line, but the remainder of the statement can appear on the next
and subsequent lines.
Example:
Comments
In addition to SQL comments (--), you can include COBOL comment lines (* or / in
column 7) within embedded SQL statements except between the keywords EXEC
and SQL. COBOL debugging lines (D in column 7) are treated as comment lines by
the precompiler.
Constants containing DBCS data can be continued across multiple lines by placing
the shift-in character in column 72 of the continued line and the shift-out after the
first string delimiter of the continuation line.
Including Code
SQL statements or COBOL host variable declaration statements can be included by
embedding the following SQL statement at the point in the source code where the
statements are to be embedded:
EXEC SQL INCLUDE member-name END-EXEC.
Margins
Code SQL statements in columns 12 through 72. If EXEC SQL starts before the
specified margin (that is, before column 12), the SQL precompiler will not recognize
the statement.
Sequence Numbers
The source statements generated by the SQL precompiler are generated with the
same sequence number as the SQL statement.
Names
Any valid COBOL variable name can be used for a host variable and is subject to
the following restrictions:
Do not use host variable names or external entry names that begin with 'SQL',
'RDI', or 'DSN'. These names are reserved for the database manager.
Statement Labels
Executable SQL statements in the PROCEDURE DIVISION can be preceded by a
paragraph name.
WHENEVER Statement
The target for the GOTO clause in an SQL WHENEVER statement must be a
section name or unqualified paragraph name in the PROCEDURE DIVISION.
The COBOL statements that are used to define the host variables should be
preceded by a BEGIN DECLARE SECTION statement and followed by an END
DECLARE SECTION statement. If a BEGIN DECLARE SECTION and END
DECLARE SECTION are specified, all host variable declarations used in SQL
statements must be between the BEGIN DECLARE SECTION and the END
DECLARE SECTION statements.
All host variables within an SQL statement must be preceded by a colon (:).
Because COBOL host variable names can contain the dash (hyphen) character, a
minus sign used in an SQL expression must be preceded and followed by blanks.
The general form of a binary host variable declaration is:

{01 | 77 | level-1} variable-name {PICTURE | PIC} [IS] picture-string
    [USAGE [IS]] {BINARY | COMPUTATIONAL-4 | COMP-4}
    [VALUE [IS] numeric-constant].
Notes:
1. BINARY, COMPUTATIONAL-4, and COMP-4 are equivalent. A portable
application should code BINARY, because COMPUTATIONAL-4 and COMP-4
are IBM extensions that are not supported in ISO/ANSI COBOL. The
picture-string associated with these types must have the form S9(i)V9(d) (or
S9...9V9...9, with i and d instances of 9). i + d must be less than or equal to 9.
2. level-1 indicates a COBOL level between 2 and 48.
The following figure shows the syntax for valid decimal host variable declarations.
{01 | 77 | level-1} variable-name {PICTURE | PIC} [IS] picture-string
    [USAGE [IS]] {PACKED-DECIMAL | COMPUTATIONAL-3 | COMP-3 | COMPUTATIONAL | COMP}
    [VALUE [IS] numeric-constant].
Notes:
1. PACKED-DECIMAL, COMPUTATIONAL-3, and COMP-3 are equivalent. A
portable application should code PACKED-DECIMAL, because
COMPUTATIONAL-3 and COMP-3 are IBM extensions that are not supported in
ISO/ANSI COBOL.
The following figure shows the syntax for valid numeric host variable declarations.
Numeric

{01 | 77 | level-1} variable-name {PICTURE | PIC} [IS] picture-string
    [USAGE [IS] display-clause]
    [VALUE [IS] numeric-constant].

display clause:
    DISPLAY [SIGN [IS] LEADING SEPARATE [CHARACTER]]
Notes:
1. The picture-string associated with SIGN LEADING SEPARATE and DISPLAY
must have the form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9). i + d
must be less than or equal to 18.
2. level-1 indicates a COBOL level between 2 and 48.
{01 | 77 | level-1} variable-name [USAGE [IS]]
    {COMPUTATIONAL-1 | COMP-1 | COMPUTATIONAL-2 | COMP-2}
    [VALUE [IS] numeric-constant].
Notes:
1. COMPUTATIONAL-1 and COMP-1 are equivalent. COMPUTATIONAL-2 and
COMP-2 are equivalent.
2. level-1 indicates a COBOL level between 2 and 48.
{01 | 77 | level-1} variable-name {PICTURE | PIC} [IS] picture-string
    [USAGE [IS] DISPLAY]
    [VALUE [IS] string-constant].
Notes:
1. The picture string associated with these forms must be X(m) (or XXX...X, with m
instance of X) with 1 ≤ m ≤ 32 766.
2. level-1 indicates a COBOL level between 2 and 48.
01 variable-name.
    49 var-1 {PICTURE | PIC} [IS] picture-string-1
        [USAGE [IS]] {BINARY | COMPUTATIONAL-4 | COMP-4}
        [VALUE [IS] numeric-constant].
    49 var-2 {PICTURE | PIC} [IS] picture-string-2
        [USAGE [IS] DISPLAY]
        [VALUE [IS] string-constant].
Notes:
1. The picture-string-1 associated with these forms must be S9(m) or S9...9 with m
instances of 9. m must be from 1 to 4.
Note that the database manager will use the full size of the S9(m) variable even
though COBOL on the AS/400 only recognizes values up to the specified
precision. This can cause data truncation errors when COBOL statements are
being run and may effectively limit the maximum length of variable-length
character strings to the specified precision.
2. The picture-string-2 associated with these forms must be either X(m), or XX...X,
with m instances of X, and with 1 ≤ m ≤ 32 740.
3. var-1 and var-2 cannot be used as host variables.
4. level-1 indicates a COBOL level between 2 and 48.
{01 | 77 | level-1} variable-name {PICTURE | PIC} [IS] picture-string
    [USAGE [IS] DISPLAY-1]
    [VALUE [IS] string-constant].
Notes:
1. The picture string associated with these forms must be G(m) (or GGG...G, with
m instance of G) or N(m) (or NNN...N, with m instance of N) with 1 ≤ m ≤ 16
383.
2. level-1 indicates a COBOL level between 2 and 48.
01 variable-name.
    49 var-1 {PICTURE | PIC} [IS] picture-string-1
        [USAGE [IS]] {BINARY | COMPUTATIONAL-4 | COMP-4}
        [VALUE [IS] numeric-constant].
    49 var-2 {PICTURE | PIC} [IS] picture-string-2
        [USAGE [IS] DISPLAY-1]
        [VALUE [IS] string-constant].
Notes:
1. The picture-string-1 associated with these forms must be S9(m) or S9...9 with m
instances of 9. m must be from 1 to 4.
The general form of a COBOL LOB host variable declaration is:

01 variable-name [USAGE [IS]] SQL TYPE IS {BLOB | CLOB | DBCLOB} (lob-length [K | M]).

Notes:
1. For BLOB and CLOB, 1 <= lob-length <= 15,728,640.
2. For DBCLOB, 1 <= lob-length <= 7,864,320.
3. SQL TYPE IS, BLOB, CLOB, and DBCLOB can be in mixed case.
4. LOB host variables can be declared in host structures.
BLOB Example
CLOB Example

LOB Locators
LOB locators are only supported in ILE COBOL for AS/400.

The general form of a LOB locator declaration is:

01 variable-name [USAGE [IS]] SQL TYPE IS {BLOB-LOCATOR | CLOB-LOCATOR | DBCLOB-LOCATOR}.

Notes:
1. SQL TYPE IS, BLOB-LOCATOR, CLOB-LOCATOR, and DBCLOB-LOCATOR can be
in mixed case.
2. LOB locators cannot be initialized in the SQL TYPE IS statement.
3. LOB locators can be declared as a part of a host structure.
| BLOB Example
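A minimal sketch of such a declaration (the variable name is an illustrative assumption):

```cobol
01 MY-BLOB-LOCATOR USAGE IS SQL TYPE IS BLOB-LOCATOR.
```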
LOB File Reference Variables
01 variable-name [USAGE [IS]] SQL TYPE IS {BLOB-FILE | CLOB-FILE | DBCLOB-FILE}.

Notes:
1. SQL TYPE IS, BLOB-FILE, CLOB-FILE, and DBCLOB-FILE can be in mixed case.
2. LOB file reference variables can be declared as part of a host structure.
BLOB Example
The pre-compiler will generate declarations for the following file option constants:
v SQL-FILE-READ (2)
v SQL-FILE-CREATE (8)
v SQL-FILE-OVERWRITE (16)
v SQL-FILE-APPEND (32)
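A minimal sketch of such a declaration (the variable name is an illustrative assumption):

```cobol
01 MY-BLOB-FILE USAGE IS SQL TYPE IS BLOB-FILE.
```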
A host structure name can be a group name whose subordinate levels name basic
data items. For example:
01 A.
   02 B.
      03 C1 PICTURE ...
      03 C2 PICTURE ...
In this example, B is the name of a host structure consisting of the basic items C1
and C2.
When writing an SQL statement using a qualified host variable name (for example,
to identify a field within a structure), use the name of the structure followed by a
period and the name of the field (that is, PL/I style). For example, specify B.C1
rather than C1 OF B or C1 IN B. However, PL/I style applies only to qualified
names within SQL statements; you cannot use this technique for writing qualified
names in COBOL statements.
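For instance, using the structure from the preceding example, a statement referring to C1 could be written as follows (the table name MYLIB.MYTABLE and column COL1 are illustrative assumptions):

```cobol
EXEC SQL
    SELECT COL1 INTO :B.C1
    FROM MYLIB.MYTABLE
END-EXEC.
```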
A host structure is considered complete if any of the following items are found:
v A COBOL item that must begin in area A
v Any SQL statement (except SQL INCLUDE)
After the host structure is defined, you can refer to it in an SQL statement instead of
listing the several host variables (that is, the names of the data items that comprise
the host structure).
For example, you can retrieve all column values from selected rows of the table
CORPDATA.EMPLOYEE with:
01 PEMPL.
10 EMPNO PIC X(6).
10 FIRSTNME.
49 FIRSTNME-LEN PIC S9(4) USAGE BINARY.
49 FIRSTNME-TEXT PIC X(12).
10 MIDINIT PIC X(1).
10 LASTNAME.
49 LASTNAME-LEN PIC S9(4) USAGE BINARY.
49 LASTNAME-TEXT PIC X(15).
10 WORKDEPT PIC X(3).
...
MOVE "000220" TO EMPNO.
...
EXEC SQL
SELECT *
  INTO :PEMPL
  FROM CORPDATA.EMPLOYEE
  WHERE EMPNO = :EMPNO
END-EXEC.
Notice that in the declaration of PEMPL, two varying-length string elements are
included in the structure: FIRSTNME and LASTNAME.
Host Structure
The following figure shows the syntax for the valid host structure.
level-1 variable-name .
  level-2 var-1 { floating-point | usage-clause | display-clause | varchar-string
                | vargraphic-string | datetime } .

varchar-string:
  picture-string [USAGE [IS]] DISPLAY [VALUE [IS] constant]
vargraphic-string:
  picture-string [USAGE [IS]] DISPLAY-1 [VALUE [IS] constant]
datetime:
Notes:
1. level-1 indicates a COBOL level between 1 and 47.
2. level-2 indicates a COBOL level between 2 and 48 where level-2 > level-1.
3. Graphic host variables and floating-point host variables are only supported for
ILE COBOL for AS/400.
4. For details on declaring numeric, character, and graphic host variables, see the
notes under numeric-host variables, character-host variables, and graphic-host
variables.
5. format-options indicates valid datetime options that are supported by the
COBOL compiler. See the ILE COBOL for AS/400 Reference, SC09-2539-01
book for details.
[VALUE [IS] constant] .
Notes:
1. Dimension must be an integer between 1 and 32767.
2. level-1 must be an integer between 2 and 48.
3. BINARY, COMPUTATIONAL-4, and COMP-4 are equivalent. A portable
application should code BINARY, because COMPUTATIONAL-4 and COMP-4
are IBM extensions that are not supported in ISO/ANSI COBOL. The
picture-string associated with these types must have the form S9(i) (or S9...9,
with i instances of 9). i must be less than or equal to 4.
floating-point:
usage-clause:
display-clause:
varchar-string:
  picture-string [USAGE [IS]] DISPLAY [VALUE [IS] constant]
vargraphic-string:
  picture-string [USAGE [IS]] DISPLAY-1 [VALUE [IS] constant]
datetime:
Notes:
1. level-1 indicates a COBOL level between 2 and 47.
2. level-2 indicates a COBOL level between 3 and 48 where level-2 > level-1.
3. Graphic host variables and floating-point host variables are only supported for
ILE COBOL for AS/400.
4. For details on declaring numeric, character, and graphic host variables, see the
notes under numeric-host variables, character-host variables, and graphic-host
variables.
5. Dimension must be an integer constant between 1 and 32767.
6. format-options indicates valid datetime options that are supported by the
COBOL compiler. See the ILE COBOL for AS/400 Reference, SC09-2539-01
book for details.
[USAGE [IS]] { BINARY | COMPUTATIONAL-4 | COMP-4 } [VALUE [IS] constant] .
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. level-2 indicates a COBOL level between 3 and 48 where level-2 > level-1.
3. Dimension must be an integer constant between 1 and 32767.
4. BINARY, COMPUTATIONAL-4, and COMP-4 are equivalent. A portable
application should code BINARY, because COMPUTATIONAL-4 and COMP-4
are IBM extensions that are not supported in ISO/ANSI COBOL. The
picture-string associated with these types must have the form S9(i) (or S9...9,
with i instances of 9). i must be less than or equal to 4.
Note: You cannot retrieve host variables from file definitions that have field names
which are COBOL reserved words. You must place the COPY DDx-format
statement within a COBOL host structure.
If the file contains fields that are generated as FILLER, the structure cannot be
used as a host structure array.
For device files, if INDARA was not specified and the file contains indicators, the
declaration cannot be used as a host structure array. The indicator area is included
in the generated structure and causes the storage for records to not be contiguous.
For example, the following shows how to use COPY DDS to generate a host
structure array and fetch 10 rows into the host structure array:
01 DEPT.
04 DEPT-ARRAY OCCURS 10 TIMES.
COPY DDS-ALL-FORMATS OF DEPARTMENT.
:
Note: DATE, TIME, and TIMESTAMP columns will generate character host variable
definitions that are treated by SQL with the same comparison and
assignment rules as the DATE, TIME, or TIMESTAMP column. For example,
a date host variable can only be compared against a DATE column or a
character string which is a valid representation of a date.
Unpredictable results may occur when a structure contains levels defined below a
FILLER item.
The COBOL declarations for SMALLINT and INTEGER data types are expressed
as a number of decimal digits. The database manager uses the full size of the
integers and can place larger values in the host variable than would be allowed in
the specified number of digits in the COBOL declaration. However, this can cause
data truncation or size errors when COBOL statements are being run. Ensure that
the size of numbers in your application is within the declared number of digits.
See DB2 UDB for AS/400 SQL Reference for more information on the use of
indicator variables.
Indicator variables are declared in the same way as host variables, and the
declarations of the two can be mixed in any way that seems appropriate to the
programmer.
Example:
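A minimal sketch of an indicator variable declared and used with a host variable (the variable names are illustrative assumptions; PHONENO is a column of the sample CORPDATA.EMPLOYEE table):

```cobol
77 PHONE-HV  PIC X(12).
77 PHONE-IND PIC S9(4) BINARY.
...
EXEC SQL
    SELECT PHONENO INTO :PHONE-HV :PHONE-IND
    FROM CORPDATA.EMPLOYEE
    WHERE EMPNO = '000220'
END-EXEC.
```

If PHONE-IND is negative after the FETCH or SELECT INTO, the retrieved value was null.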
A detailed sample PL/I program, showing how SQL statements can be used, is
provided in Appendix C. Sample Programs Using DB2 UDB for AS/400 Statements.
Or,
v An SQLCA (which contains an SQLCODE and SQLSTATE variable).
The SQLCODE and SQLSTATE values are set by the database manager after each
SQL statement is executed. An application can check the SQLCODE or SQLSTATE
value to determine whether the last SQL statement was successful.
The SQLCA can be coded in a PL/I program either directly or by using the SQL
INCLUDE statement. Using the SQL INCLUDE statement requests the inclusion of
a standard SQLCA declaration:
EXEC SQL INCLUDE SQLCA ;
The scope of the SQLCODE, SQLSTATE, and SQLCA variables must include the
scope of all SQL statements in the program.
For more information on SQLDA, see the DB2 UDB for AS/400 SQL Reference
book.
Each SQL statement in a PL/I program must begin with EXEC SQL and end with a
semicolon (;). The key words EXEC SQL must appear all on one line, but the
remainder of the statement can appear on the next and subsequent lines.
Example
An UPDATE statement coded in a PL/I program might be coded as follows:
EXEC SQL UPDATE DEPARTMENT
SET MGRNO = :MGR_NUM
WHERE DEPTNO = :INT_DEPT ;
Comments
In addition to SQL comments (--), you can include PL/I comments (/*...*/) in
embedded SQL statements wherever a blank is allowed, except between the
keywords EXEC and SQL.
Constants containing DBCS data can be continued across multiple lines by placing
the shift-in and shift-out characters outside of the margins. This example assumes
margins of 2 and 72. This SQL statement has a valid graphic constant of
G’<AABBCCDDEEFFGGHHIIJJKK>’.
*(..+....1....+....2....+....3....+....4....+....5....+....6....+....7.)..
EXEC SQL SELECT * FROM GRAPHTAB WHERE GRAPHCOL = G'<AABBCCDD>
<EEFFGGHHIIJJKK>';
Including Code
SQL statements or PL/I host variable declaration statements can be included by
placing the following SQL statement at the point in the source code where the
statements are to be embedded:
EXEC SQL INCLUDE member-name ;
Margins
Code SQL statements within the margins specified by the MARGINS parameter on
the CRTSQLPLI command. If EXEC SQL does not start within the specified
margins, the SQL precompiler will not recognize the SQL statement. For more
information about the CRTSQLPLI command, see Appendix D. DB2 UDB for
AS/400 CL Command Descriptions.
Do not use host variable names or external entry names that begin with 'SQL',
'RDI', or 'DSN'. These names are reserved for the database manager.
Statement Labels
All executable SQL statements, like PL/I statements, can have a label prefix.
WHENEVER Statement
The target for the GOTO clause in an SQL WHENEVER statement must be a label
in the PL/I source code and must be within the scope of any SQL statements
affected by the WHENEVER statement.
The PL/I statements that are used to define the host variables should be preceded
by a BEGIN DECLARE SECTION statement and followed by an END DECLARE
SECTION statement. If a BEGIN DECLARE SECTION and END DECLARE
SECTION are specified, all host variable declarations used in SQL statements must
be between the BEGIN DECLARE SECTION and the END DECLARE SECTION
statements.
All host variables within an SQL statement must be preceded by a colon (:).
The names of host variables must be unique within the program, even if the host
variables are in different blocks or procedures.
An SQL statement that uses a host variable must be within the scope of the
statement in which the variable was declared.
Only the names and data attributes of the variables are used by the precompilers;
the alignment, scope, and storage attributes are ignored. Even though alignment,
scope, and storage are ignored, there are some restrictions on their use that, if
ignored, may result in problems when compiling PL/I source code that is created by
the precompiler. These restrictions are:
v A declaration with the EXTERNAL scope attribute and the STATIC storage
attribute must also have the INITIAL storage attribute.
v If the BASED storage attribute is coded, it must be followed by a PL/I
element-locator-expression.
Numeric
DECLARE (or DCL) variable-name | ( variable-name , ... )
  { BINARY (BIN) FIXED [( precision )]
  | BINARY (BIN) FLOAT [( precision )]
  | DECIMAL (DEC) FIXED [( precision [, scale ] )]
  | DECIMAL (DEC) FLOAT [( precision )]
  | PICTURE picture-string }
  [ Alignment and/or Scope and/or Storage ] ;
Alignment and/or Scope and/or Storage
Notes:
1. (BINARY, BIN, DECIMAL, or DEC) and (FIXED or FLOAT) and (precision, scale)
can be specified in any order.
2. A picture-string in the form ’9...9V9...R’ indicates a numeric host variable. The R
is required. The optional V indicates the implied decimal point.
3. A picture-string in the form ’S9...9V9...9’ indicates a sign leading separate host
variable. The S is required. The optional V indicates the implied decimal point.
Character-Host Variables
The following figure shows the syntax for valid scalar character-host variables.
DECLARE (or DCL) variable-name | ( variable-name , ... )
  CHARACTER (CHAR) [( length )] [ VARYING (VAR) ]
  [ Alignment and/or Scope and/or Storage ] ;
Notes:
1. Length must be an integer constant not greater than 32766 if VARYING or VAR
is not specified.
2. If VARYING or VAR is specified, length must be a constant no greater than
32740.
LOB
DECLARE (or DCL) variable-name | ( variable-name , ... )
  SQL TYPE IS { BLOB | CLOB } ( lob-length [ K ] )
  [ Alignment and/or Scope and/or Storage ] ;

Notes:
1. The SQL TYPE IS clause is needed in order to distinguish the three LOB-types
from each other so that type-checking and function resolution can be carried out
for LOB-type host variables that are passed to functions.
2. For BLOB and CLOB, 1 <= lob-length <= 32,766.
3. SQL TYPE IS, BLOB, and CLOB can be in mixed case.
4. LOB host variables can be declared in host structures.
CLOB Example:
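A minimal sketch of such a declaration (the name and length are illustrative assumptions):

```pli
DCL MY_CLOB SQL TYPE IS CLOB(10K);
```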
LOB Locators
The following figure shows the syntax for valid LOB locators.

LOB locator
DECLARE (or DCL) variable-name | ( variable-name , ... )
  SQL TYPE IS { BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR }
  [ Alignment and/or Scope and/or Storage ] ;

Notes:
1. SQL TYPE IS, BLOB_LOCATOR, CLOB_LOCATOR, and DBCLOB_LOCATOR can
be in mixed case.
2. LOB locators can be declared as part of a host structure.
CLOB Example:
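A minimal sketch of such a declaration (the variable name is an illustrative assumption):

```pli
DCL MY_CLOB_LOC SQL TYPE IS CLOB_LOCATOR;
```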
LOB File Reference Variables
DECLARE (or DCL) variable-name | ( variable-name , ... )
  SQL TYPE IS { BLOB_FILE | CLOB_FILE | DBCLOB_FILE }
  [ Alignment and/or Scope and/or Storage ] ;

Notes:
1. SQL TYPE IS, BLOB_FILE, CLOB_FILE, and DBCLOB_FILE can be in mixed
case.
2. LOB file reference variables can be declared as part of a host structure.
CLOB Example:
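A minimal sketch of such a declaration (the variable name is an illustrative assumption):

```pli
DCL MY_CLOB_FILE SQL TYPE IS CLOB_FILE;
```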
The pre-compiler will generate declarations for the following file option constants:
v SQL_FILE_READ (2)
v SQL_FILE_CREATE (8)
v SQL_FILE_OVERWRITE (16)
v SQL_FILE_APPEND (32)
Using Host Structures
In PL/I programs, you can define a host structure, which is a named set of
elementary PL/I variables. A host structure name can be a group name whose
subordinate levels name elementary PL/I variables. For example:
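A sketch of such a structure (the names mirror the COBOL example earlier in this book; the lengths are illustrative assumptions):

```pli
DCL 1 A,
      2 B,
        3 C1 CHAR(6),
        3 C2 CHAR(20);
```

Here B is a host structure consisting of the elementary items C1 and C2.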
You can use the structure name as shorthand notation for a list of scalars. You can
qualify a host variable with a structure name (for example, STRUCTURE.FIELD).
Host structures are limited to two levels, and a structure cannot contain an
intermediate-level structure. In the previous example, A cannot be used as a host
variable or referred to in an SQL statement, but B, the first-level structure, can be
referred to in an SQL statement. A host structure for SQL data is two levels deep
and can be thought of as a named set of host variables. After the host structure is
defined, you can refer to it in an SQL statement instead of listing the several host
variables (that is, the names of the host variables that make up the host structure).
For example, you can retrieve all column values from selected rows of the table
CORPDATA.EMPLOYEE with:
DCL 1 PEMPL,
5 EMPNO CHAR(6),
5 FIRSTNME CHAR(12) VAR,
5 MIDINIT CHAR(1),
5 LASTNAME CHAR(15) VAR,
5 WORKDEPT CHAR(3);
...
EMPID = '000220';
...
EXEC SQL
SELECT *
INTO :PEMPL
FROM CORPDATA.EMPLOYEE
WHERE EMPNO = :EMPID;
Host Structures
The following figure shows the syntax for valid host structure declarations.
DECLARE (or DCL) 1 variable-name [ Scope and/or storage ] ,
  level-1 variable-name ,
  level-2 var-1 | ( var-2 , ... ) data-type ;

data-types:
  BINARY (BIN) FIXED [( precision )] |
  BINARY (BIN) FLOAT [( precision )] [UNALIGNED] |
  DECIMAL (DEC) FIXED [( precision [, scale ] )] |
  DECIMAL (DEC) FLOAT [( precision )] [UNALIGNED] |
  PICTURE picture-string |
  CHARACTER (CHAR) [( length )] [ VARYING (VAR) ] [ALIGNED]
Notes:
1. Level-1 indicates that there is an intermediate level structure.
2. Level-1 must be an integer constant between 1 and 254.
3. Level-2 must be an integer constant between 2 and 255.
4. For details on declaring numeric and character host variables, see the notes
under numeric-host variables and character-host variables.
( variable-name ( dimension ) , ... )
  BINARY (BIN) FIXED [( precision )]
  [ Alignment and/or scope and/or storage ] ;
DCL 1 DEPT(10),
5 DEPTPNO CHAR(3),
5 DEPTNAME CHAR(29) VAR,
5 MGRNO CHAR(6),
5 ADMRDEPT CHAR (3);
DCL 1 IND_ARRAY(10),
5 INDS(4) FIXED BIN(15);
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT *
FROM CORPDATA.DEPARTMENT;
EXEC SQL
FETCH C1 FOR 10 ROWS INTO :DEPT :IND_ARRAY;
Notes:
1. Level-1 indicates that there is an intermediate level structure.
2. Level-1 must be an integer constant between 1 and 254.
3. Level-2 must be an integer constant between 2 and 255.
4. For details on declaring numeric and character host variables, see the notes
under numeric-host variables and character-host variables.
5. Dimension must be an integer constant between 1 and 32767.
Notes:
1. Level-1 indicates that there is an intermediate level structure.
2. Level-1 must be an integer constant between 1 and 254.
3. Level-2 must be an integer constant between 2 and 255.
4. Dimension-1 and dimension-2 must be integer constants between 1 and 32767.
The structure is ended normally by the last data element of the record or key
structure. However, if in the %INCLUDE directive the COMMA element is specified,
then the structure is not ended. For more information about the %INCLUDE
directive, see the PL/I Reference Summary book, SX09-1290, and the PL/I User’s
Guide and Reference book, SC09-1825.
For device files, if INDARA was not specified and the file contains indicators, the
declaration cannot be used as a host structure array. The indicator area is included
in the generated structure and causes the storage to not be contiguous.
DCL 1 DEPT_REC(10),
%INCLUDE DEPARTMENT(DEPARTMENT,RECORD);
:
Note: DATE, TIME, and TIMESTAMP columns will generate host variable
definitions that are treated by SQL with the same comparison and
assignment rules as a DATE, TIME, and TIMESTAMP column. For example,
a date host variable can only be compared with a DATE column or a
character string that is a valid representation of a date.
Although decimal and zoned fields with precision greater than 15 and binary with
nonzero scale fields are mapped to character field variables in PL/I, SQL considers
these fields to be numeric.
See DB2 UDB for AS/400 SQL Reference book for more information about using
indicator variables.
Indicator variables are declared in the same way as host variables and the
declarations of the two can be mixed in any way that seems appropriate to the
programmer.
Example:
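A minimal sketch of an indicator variable declared and used with a host variable (the variable names are illustrative assumptions; PHONENO is a column of the sample CORPDATA.EMPLOYEE table):

```pli
DCL PHONE_HV CHAR(12);
DCL PHONE_IND FIXED BINARY(15);

EXEC SQL SELECT PHONENO
         INTO :PHONE_HV :PHONE_IND
         FROM CORPDATA.EMPLOYEE
         WHERE EMPNO = '000220';
```

If PHONE_IND is negative after the statement runs, the retrieved value was null.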
This chapter describes the unique application and coding requirements for
embedding SQL statements in a RPG for AS/400 program. Requirements for host
variables are defined.
A detailed sample RPG for AS/400 program, showing how SQL statements can be
used, is provided in Appendix C. Sample Programs Using DB2 UDB for AS/400
Statements.
Note: Variable names in RPG for AS/400 are limited to 6 characters. The standard
SQLCA names have been changed to a length of 6. RPG for AS/400 does
not have a way of defining arrays in a data structure without also defining
them in the extension specification. SQLERR is defined as character with
SQLER1 through 6 used as the names of the elements.
Unlike the SQLCA, there can be more than one SQLDA in a program and an
SQLDA can have any valid name.
Because the SQLDA uses pointer variables which are not supported by RPG for
AS/400, an INCLUDE SQLDA statement cannot be specified in an RPG for AS/400
program. An SQLDA must be set up by a C, COBOL, PL/I, or ILE RPG program
and passed to the RPG program in order to use it.
The keywords EXEC SQL indicate the beginning of an SQL statement. EXEC SQL
must occupy positions 8 through 16 of the source statement, preceded by a / in
position 7. The SQL statement may start in position 17 and continue through
position 74.
The keyword END-EXEC ends the SQL statement. END-EXEC must occupy
positions 8 through 16 of the source statement, preceded by a slash (/) in position
7. Positions 17 through 74 must be blank.
Example
An UPDATE statement coded in an RPG for AS/400 program might be coded as
follows:
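A sketch of such a statement, mirroring the PL/I UPDATE example earlier in this book (the host variable names are illustrative assumptions):

```rpg
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
C/EXEC SQL UPDATE DEPARTMENT
C+         SET MGRNO = :MGRNUM
C+         WHERE DEPTNO = :INTDEP
C/END-EXEC
```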
Comments
In addition to SQL comments (--), RPG for AS/400 comments can be included
within SQL statements wherever a blank is allowed, except between the keywords
EXEC and SQL. To embed an RPG for AS/400 comment within the SQL statement,
place an asterisk (*) in position 7.
Constants containing DBCS data can be continued across multiple lines by placing
the shift-in character in position 75 of the continued line and placing the shift-out
character in position 8 of the continuation line. This SQL statement has a valid
graphic constant of G’<AABBCCDDEEFFGGHHIIJJKK>’.
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
C/EXEC SQL SELECT * FROM GRAPHTAB WHERE GRAPHCOL = G'<AABB>
C+<CCDDEEFFGGHHIIJJKK>'
C/END-EXEC
Including Code
SQL statements and RPG for AS/400 calculation specifications can be included by
embedding the SQL statement:
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
C/EXEC SQL INCLUDE member-name
C/END-EXEC
The /COPY statement can be used to include SQL statements or RPG for AS/400
specifications.
Sequence Numbers
The sequence numbers of the source statements generated by the SQL
precompiler are based on the *NOSEQSRC/*SEQSRC keywords of the OPTION
parameter on the CRTSQLRPG command. When *NOSEQSRC is specified, the
sequence number from the input source member is used. For *SEQSRC, the
sequence numbers start at 000001 and are incremented by 1.
Names
Any valid RPG variable name can be used for a host variable and is subject to the
following restrictions:
Do not use host variable names or external entry names that begin with 'SQ', 'SQL',
'RDI', or 'DSN'. These names are reserved for the database manager.
Chapter 15. Coding SQL Statements in RPG for AS/400 Applications 299
Statement Labels
A TAG statement can precede any SQL statement. Code the TAG statement on the
line preceding EXEC SQL.
WHENEVER Statement
The target for the GOTO clause must be the label of the TAG statement. The scope
rules for the GOTO/TAG must be observed.
SQL embedded in RPG for AS/400 does not use the SQL BEGIN DECLARE
SECTION and END DECLARE SECTION statements to identify host variables. Do
not put these statements in the source program.
All host variables within an SQL statement must be preceded by a colon (:).
All variables defined in RPG for AS/400 can be used in SQL statements, except for
the following:
Indicator field names (*INxx)
Tables
UDATE
UDAY
UMONTH
UYEAR
Look-ahead fields
Named constants
Fields used as host variables are passed to SQL, using the CALL/PARM functions
of RPG for AS/400. If a field cannot be used in the result field of the PARM, it
cannot be used as a host variable.
When subfields are not present for the data structure, the data structure name is a
host variable of character type. This allows character variables longer than 256,
because data structures can be up to 9999 bytes long.
In the next example, PEMPL is the name of the host structure consisting of the
subfields EMPNO, FIRSTN, MIDINT, LASTNAME, and DEPTNO. The referral to
PEMPL uses the subfields. For example, the first column of EMPLOYEE is placed
in EMPNO, the second column is placed in FIRSTN, and so on.
*...1....+....2....+....3....+....4....+....5....+....6....+....7. ..*
IPEMPL DS
I 01 06 EMPNO
I 07 18 FIRSTN
I 19 19 MIDINT
I 20 34 LASTNA
I 35 37 DEPTNO
...
C MOVE '000220' EMPNO
...
C/EXEC SQL
C+ SELECT * INTO :PEMPL
C+ FROM CORPDATA.EMPLOYEE
C+ WHERE EMPNO = :EMPNO
C/END-EXEC
When writing an SQL statement, referrals to subfields can be qualified. Use the
name of the data structure, followed by a period and the name of the subfield. For
example, PEMPL.MIDINT is the same as specifying only MIDINT.
The following example uses a host structure array called DEPT and a multiple-row
FETCH statement to retrieve 10 rows from the DEPARTMENT table.
*...1....+....2....+....3....+....4....+....5....+....6....+....7...*
E INDS 4 4 0
IDEPT DS 10
I 01 03 DEPTNO
I 04 32 DEPTNM
I 33 38 MGRNO
I 39 41 ADMRD
IINDARR DS 10
I B 1 80INDS
...
C/EXEC SQL
C+ DECLARE C1 CURSOR FOR
C+ SELECT *
C+ FROM CORPDATA.DEPARTMENT
C/END-EXEC
C/EXEC SQL
C+ OPEN C1
C/END-EXEC
C/EXEC SQL
C+ FETCH C1 FOR 10 ROWS INTO :DEPT:INDARR
C/END-EXEC
Note: Code an F-spec for a file in your RPG program only if you use RPG for
AS/400 statements to do I/O operations to the file. If you use only SQL
statements to do I/O operations to the file, you can include the external
definition by using an external data structure.
In the following example, the sample table is specified as an external data structure.
The SQL precompiler retrieves the field (column) definitions as subfields of the data
structure. Subfield names can be used as host variable names, and the data
structure name TDEPT can be used as a host structure name. The field names
must be changed because they are greater than six characters.
*...1....+....2....+....3....+....4....+....5....+....6....+....7....*
ITDEPT E DSDEPARTMENT
I DEPTNAME DEPTN
I ADMRDEPT ADMRD
Note: DATE, TIME, and TIMESTAMP columns will generate host variable
definitions which are treated by SQL with the same comparison and
assignment rules as a DATE, TIME, and TIMESTAMP column. For example,
a date host variable can only be compared against a DATE column or a
character string which is a valid representation of a date.
In the following example, the DEPARTMENT table is included in the RPG for
AS/400 program and is used to declare a host structure array. A multiple-row
FETCH statement is then used to retrieve 10 rows into the host structure array.
*...1....+....2....+....3....+....4....+....5....+....6....*
ITDEPT E DSDEPARTMENT 10
I DEPTNAME DEPTN
I ADMRDEPT ADMRD
...
C/EXEC SQL
C+ DECLARE C1 CURSOR FOR
C+ SELECT *
C+ FROM CORPDATA.DEPARTMENT
C/END-EXEC
...
C/EXEC SQL
C+ FETCH C1 FOR 10 ROWS INTO :TDEPT
C/END-EXEC
Table 30. RPG for AS/400 Declarations Mapped to Typical SQL Data Types (continued)

RPG for AS/400 Data Type / Col 43 / Col 52 / Other RPG for AS/400 Coding /
SQLTYPE of Host Variable / SQLLEN of Host Variable / SQL Data Type

Data structure subfield / B / 0 / Length = 4 / 496 / 4 / INTEGER
Data structure subfield / B / 1-4 / Length = 2 / 500 / 2 / DECIMAL(4,s) where
s = column 52
Data structure subfield / B / 1-9 / Length = 4 / 496 / 4 / DECIMAL(9,s) where
s = column 52
Data structure subfield / P / 0 to 9 / Length = n where n is 1 to 16 / 484 /
p in byte 1, s in byte 2 / DECIMAL(p,s) where p = n*2-1 and s = column 52
Input field / P / 0 to 9 / Length = n where n is 1 to 16 / 484 /
p in byte 1, s in byte 2 / DECIMAL(p,s) where p = n*2-1 and s = column 52
Input field / blank / 0 to 9 / Length = n where n is 1 to 30 / 484 /
p in byte 1, s in byte 2 / DECIMAL(p,s) where p = n and s = column 52
Input field / B / 0 to 4 if n = 2; 0 to 9 if n = 4 / Length = 2 or 4 / 484 /
p in byte 1, s in byte 2 / DECIMAL(p,s) where p = 4 if n = 2 or 9 if n = 4,
and s = column 52
Calculation result field / n/a / 0 to 9 / Length = n where n is 1 to 30 / 484 /
p in byte 1, s in byte 2 / DECIMAL(p,s) where p = n and s = column 52
Data structure subfield / blank / 0 to 9 / Length = n where n is 1 to 30 / 488 /
p in byte 1, s in byte 2 / NUMERIC(p,s) where p = n and s = column 52
The following table can be used to determine the RPG for AS/400 data type that is
equivalent to a given SQL data type.
Table 31. SQL Data Types Mapped to Typical RPG for AS/400 Declarations
SQL Data Type RPG for AS/400 Data Type Notes
SMALLINT Subfield of a data structure. B in position 43,
length must be 2 and 0 in position 52 of the
subfield specification.
INTEGER Subfield of a data structure. B in position 43,
length must be 4 and 0 in position 52 of the
subfield specification.
Notes on RPG for AS/400 Variable Declaration and Usage
Assignment rules
RPG for AS/400 associates precision and scale with all numeric types. RPG for
AS/400 defines numeric operations, assuming the data is in packed format. This
means that operations involving binary variables include an implicit conversion to
packed format before the operation is performed (and back to binary, if necessary).
Data is aligned to the implied decimal point when SQL operations are performed.
See the DB2 UDB for AS/400 SQL Reference for more information on the use of
indicator variables.
Indicator variables are declared in the same way as host variables and the
declarations of the two can be mixed in any way that seems appropriate to the
programmer.
Example
Given the statement:
*...1....+....2....+....3....+....4....+....5....+....6....+....7...*
C/EXEC SQL FETCH CLS_CURSOR INTO :CLSCD,
C+ :DAY :DAYIND,
C+ :BGN :BGNIND,
C+ :END :ENDIND
C/END-EXEC
For more information on the structure parameter passing technique, see “Improving
Performance by Using Structure Parameter Passing Techniques” on page 472.
If an RPG for AS/400 program containing SQL is called from another program which
also contains SQL, the RPG for AS/400 program should not set the Last Record
(LR) indicator on. Setting the LR indicator on causes the static storage to be
re-initialized the next time the RPG for AS/400 program is run. Re-initializing the
static storage causes the internal SQLDAs to be rebuilt, thus causing a
performance degradation.
An RPG for AS/400 program containing SQL statements that is called by a program
that also contains SQL statements, should be ended one of two ways:
v By the RETRN statement
v By setting the RT indicator on.
This allows the internal SQLDAs to be used again and reduces the total run time.
Chapter 16. Coding SQL Statements in ILE RPG for AS/400
Applications
This chapter describes the unique application and coding requirements for
embedding SQL statements in an ILE RPG for AS/400 program. The coding
requirements for host variables are defined.
Note: Variable names in RPG for AS/400 are limited to 6 characters. The standard
SQLCA names were changed to a length of 6 for RPG for AS/400. To
maintain compatibility with RPG for AS/400 programs which are converted to
ILE RPG for AS/400, the names for the SQLCA will remain as used with
RPG for AS/400. The SQLCA defined for the ILE RPG for AS/400 has added
the field SQLERRD which is defined as an array of six integers. SQLERRD
is defined to overlay the SQLERR definition.
Unlike the SQLCA, there can be more than one SQLDA in a program and an
SQLDA can have any valid name.
The user is responsible for the definition of SQL_NUM. SQL_NUM must be defined
as a numeric constant with the dimension required for SQL_VAR.
To set the field descriptions of the SQLDA the program sets up the field description
in the subfields of SQLVAR and then does a MOVEA of SQLVAR to SQL_VAR,n
where n is the number of the field in the SQLDA. This is repeated until all the field
descriptions are set.
When the SQLDA field descriptions are to be referenced the user does a MOVEA of
SQL_VAR,n to SQLVAR where n is the number of the field description to be
processed.
The keywords EXEC SQL indicate the beginning of an SQL statement. EXEC SQL
must occupy positions 8 through 16 of the source statement, preceded by a / in
position 7. The SQL statement may start in position 17 and continue through
position 80.
The keyword END-EXEC ends the SQL statement. END-EXEC must occupy
positions 8 through 16 of the source statement, preceded by a slash (/) in position
7. Positions 17 through 80 must be blank.
Example
An UPDATE statement coded in an ILE RPG for AS/400 program might be coded
as follows:
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8.
C/EXEC SQL UPDATE DEPARTMENT
C+ SET MANAGER = :MGRNUM
C+ WHERE DEPTNO = :INTDEP
C/END-EXEC
Comments
In addition to SQL comments (--), ILE RPG for AS/400 comments can be included
within SQL statements wherever SQL allows a blank character. To embed an ILE
RPG for AS/400 comment within the SQL statement, place an asterisk (*) in position
7.
Constants containing DBCS data can be continued across multiple lines by placing
the shift-in character in position 81 of the continued line and placing the shift-out
character in position 8 of the continuation line.
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8.
C/EXEC SQL SELECT * FROM GRAPHTAB WHERE GRAPHCOL = G'<AABBCCDDEE>
C+<FFGGHHIIJJKK>'
C/END-EXEC
Including Code
SQL statements and RPG calculation specifications can be included by using the
SQL statement:
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
C/EXEC SQL INCLUDE member-name
C/END-EXEC
The RPG /COPY statement can be used to include SQL statements or RPG
specifications.
Sequence Numbers
The sequence numbers of the source statements generated by the SQL
precompiler are based on the *NOSEQSRC/*SEQSRC keywords of the OPTION
parameter on the CRTSQLRPGI command. When *NOSEQSRC is specified, the
sequence number from the input source member is used. For *SEQSRC, the
sequence numbers start at 000001 and are incremented by 1.
Names
Any valid ILE RPG for AS/400 variable name can be used for a host variable,
subject to the following restrictions:
v Do not use host variable names or external entry names that begin with the
characters 'SQ', 'SQL', 'RDI', or 'DSN'. These names are reserved for the
database manager.
v The length of host variable names is limited to 64 characters.
Statement Labels
A TAG statement can precede any SQL statement. Code the TAG statement on the
line preceding EXEC SQL.
WHENEVER Statement
The target for the GOTO clause must be the label of the TAG statement. The scope
rules for the GOTO/TAG must be observed.
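A sketch of a WHENEVER statement whose GOTO target is a TAG label (the label name SQLERR is hypothetical):
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
C/EXEC SQL
C+ WHENEVER SQLERROR GO TO SQLERR
C/END-EXEC
 ...
C     SQLERR        TAG
C* error handling logic goes here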
All host variables within an SQL statement must be preceded by a colon (:).
The names of host variables must be unique within the program, even if the host
variables are in different procedures.
An SQL statement that uses a host variable must be within the scope of the
statement in which the variable was declared.
All variables defined in ILE RPG for AS/400 can be used in SQL statements, except
for the following:
Pointer
Tables
UDATE
UDAY
UMONTH
UYEAR
Look-ahead fields
Named constants
Multiple dimension arrays
Definitions requiring the resolution of *SIZE or *ELEM
Definitions requiring the resolution of constants unless the constant is used in
OCCURS or DIM.
Fields used as host variables are passed to SQL, using the CALL/PARM functions
of ILE RPG for AS/400. If a field cannot be used in the result field of the PARM, it
cannot be used as a host variable.
Date and time host variables are always assigned to corresponding date and time
subfields in the structures generated by the SQL precompiler. The generated date
and time subfields are declared using the format and separator specified by the
DATFMT, DATSEP, TIMFMT, and TIMSEP parameters on the CRTSQLRPGI
command. Conversion from the user declared host variable format to the
precompile specified format occurs on assignment to and from the SQL generated
structure. If the DATFMT parameter value is a system format (*MDY, *YMD, *DMY,
or *JUL), then all input and output host variables must contain date values within
the range 1940-2039. If any date value is outside of this range, then the DATFMT
on the precompile must be specified as one of the IBM SQL formats of *ISO, *USA,
*EUR, or *JIS.
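For example, a precompile that must handle dates outside the 1940-2039 range might specify an IBM SQL format on the command (a sketch; the object and source file names are hypothetical):
CRTSQLRPGI OBJ(MYLIB/MYPGM) SRCFILE(MYLIB/QRPGLESRC) SRCMBR(MYPGM)
           DATFMT(*ISO) TIMFMT(*ISO)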
Using Host Structures
The ILE RPG for AS/400 data structure name can be used as a host structure
name if subfields exist in the data structure. The use of the data structure name in
an SQL statement implies the list of subfield names making up the data structure.
When subfields are not present for the data structure, the data structure name
is a host variable of character type. This allows character variables larger than 256
bytes. While this support does not provide additional function, because a field can
be defined with a maximum length of 32766, it is required for compatibility with
RPG for AS/400 programs.
In the following example, BIGCHR is an ILE RPG for AS/400 data structure without
subfields. SQL treats any referrals to BIGCHR as a character string with a length of
642.
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
DBIGCHR DS 642
In the next example, PEMPL is the name of the host structure consisting of the
subfields EMPNO, FIRSTN, MIDINT, LASTNAME, and DEPTNO. The referral to
PEMPL uses the subfields. For example, the first column of
CORPDATA.EMPLOYEE is placed in EMPNO, the second column is placed in
FIRSTN, and so on.
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
DPEMPL DS
D EMPNO 01 06A
D FIRSTN 07 18A
D MIDINT 19 19A
D LASTNA 20 34A
D DEPTNO 35 37A
...
C MOVE '000220' EMPNO
...
C/EXEC SQL
C+ SELECT * INTO :PEMPL
C+ FROM CORPDATA.EMPLOYEE
C+ WHERE EMPNO = :EMPNO
C/END-EXEC
When writing an SQL statement, referrals to subfields can be qualified. Use the
name of the data structure, followed by a period and the name of the subfield. For
example, PEMPL.MIDINT is the same as specifying only MIDINT.
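For example, the SELECT statement shown earlier might qualify a host variable in the WHERE clause (a sketch using the PEMPL structure already declared):
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
C/EXEC SQL
C+ SELECT * INTO :PEMPL
C+ FROM CORPDATA.EMPLOYEE
C+ WHERE EMPNO = :PEMPL.EMPNO
C/END-EXEC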
For all statements, other than the blocked FETCH and blocked INSERT, if an
occurrence data structure is used, the current occurrence is used. For the blocked
FETCH and blocked INSERT, the occurrence is set to 1.
The following example uses a host structure array called DEPARTMENT and a
blocked FETCH statement to retrieve 10 rows from the DEPARTMENT table.
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
DDEPARTMENT DS OCCURS(10)
D DEPTNO 01 03A
D DEPTNM 04 32A
D MGRNO 33 38A
D ADMRD 39 41A
DIND_ARRAY DS OCCURS(10)
D INDS 4B 0 DIM(4)
...
C/EXEC SQL
C+ DECLARE C1 CURSOR FOR
C+ SELECT *
C+ FROM CORPDATA.DEPARTMENT
C/END-EXEC
...
C/EXEC SQL
C+ FETCH C1 FOR 10 ROWS
C+ INTO :DEPARTMENT:IND_ARRAY
C/END-EXEC
| The pre-compiler will generate declarations for the following file option constants:
| v SQFRD (2)
| v SQFCRT (8)
| v SQFOVR (16)
| v SQFAPP (32)
|
Using External File Descriptions
The SQL precompiler processes the ILE RPG for AS/400 source in much the same
manner as the ILE RPG for AS/400 compiler. This means that the precompiler
processes the /COPY statement for definitions of host variables. Field definitions for
externally described files are obtained and renamed, if different names are
specified. The external definition form of the data structure can be used to obtain a
copy of the column names to be used as host variables.
Note: Code an F-spec for a file in your ILE RPG for AS/400 program only if you
use ILE RPG for AS/400 statements to do I/O operations to the file. If you
use only SQL statements to do I/O operations to the file, you can include the
external definition of the file (table) by using an external data structure.
In the following example, the sample table is specified as an external data structure.
The SQL precompiler retrieves the field (column) definitions as subfields of the data
structure. Subfield names can be used as host variable names, and the data
structure name TDEPT can be used as a host structure name. The example shows
that the field names can be renamed if required by the program.
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
DTDEPT E DS EXTNAME(DEPARTMENT)
D DEPTN E EXTFLD(DEPTNAME)
D ADMRD E EXTFLD(ADMRDEPT)
If the GRAPHIC or VARGRAPHIC column has a UCS-2 CCSID, the generated host
variable will have the UCS-2 CCSID assigned to it.
If OPTION(*NOCVTDT) is specified and the date and time format and separator of
date and time field definitions within the file are not the same as the DATFMT,
DATSEP, TIMFMT, and TIMSEP parameters on the CRTSQLRPGI command, then
the host structure array is not usable.
In the following example, the DEPARTMENT table is included in the ILE RPG for
AS/400 program and used to declare a host structure array. A blocked FETCH
statement is then used to retrieve 10 rows into the host structure array.
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
DDEPARTMENT E DS OCCURS(10)
...
C/EXEC SQL
C+ DECLARE C1 CURSOR FOR
C+ SELECT *
C+ FROM CORPDATA.DEPARTMENT
C/END-EXEC
...
C/EXEC SQL
C+ FETCH C1 FOR 10 ROWS
C+ INTO :DEPARTMENT
C/END-EXEC
Table 32. ILE RPG for AS/400 Declarations Mapped to Typical SQL Data Types (continued)
RPG Data Type    D spec   D spec     Other RPG             SQLTYPE of     SQLLEN of      SQL Data Type
                 Pos 40   Pos 41,42  Coding                Host Variable  Host Variable
Input field      n/a      n/a        Length = n where      392            n              TIMESTAMP
(pos 36 = Z)                         n is 26 (pos 37-46)
Notes:
1. In the first column the term "definition specification" includes data structure
subfields unless explicitly stated otherwise.
2. In definition specifications the length of binary fields (B in pos 40) is determined
by the following:
v FROM (pos 26-32) is not blank, then length = TO-FROM+1.
v FROM (pos 26-32) is blank, then length = 2 if pos 33-39 < 5, or length = 4 if
pos 33-39 > 4.
3. SQL will create the date/time subfield using the DATE/TIME format specified on
the CRTSQLRPGI command. The conversion to the host variable DATE/TIME
format will occur when the mapping is done between the host variables and the
SQL generated subfields.
The following table can be used to determine the RPG data type that is equivalent
to a given SQL data type.
Table 33. SQL Data Types Mapped to Typical RPG Declarations
SQL Data Type   RPG Data Type                             Notes
SMALLINT        Definition specification. I in position
                40, length must be 5 and 0 in position
                42.
                OR
TIMESTAMP       A character field.                        Length must be at least 19; to include
                OR                                        microseconds, length must be at least
                Definition specification with a Z in      26. If length is less than 26, truncation
                position 40.                              occurs on the microsecond part.
                OR
An indicator array can be defined by declaring the variable with an element length
of 4,0 and specifying the DIM keyword on the definition specification.
On retrieval, an indicator variable is used to show if its associated host variable has
been assigned a null value. On assignment to a column, a negative indicator
variable is used to indicate that a null value should be assigned.
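On assignment, a sketch of setting a column to null by way of a negative indicator variable (the table and column names are hypothetical):
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
C                   eval      DAYIND = -1
C/EXEC SQL
C+ UPDATE CL_SCHED
C+ SET DAY = :DAY :DAYIND
C+ WHERE CLASS_CODE = :CLSCD
C/END-EXEC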
See the DB2 UDB for AS/400 SQL Reference book for more information on the use
of indicator variables.
Indicator variables are declared in the same way as host variables and the
declarations of the two can be mixed in any way that seems appropriate to the
programmer.
Example
Given the statement:
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
C/EXEC SQL FETCH CLS_CURSOR INTO :CLSCD,
C+ :DAY :DAYIND,
C+ :BGN :BGNIND,
C+ :END :ENDIND
C/END-EXEC
DIND_ARRAY DS OCCURS(10)
D INDS 4B 0 DIM(4)
...
C* setup number of sqlda entries and length of the sqlda
C eval sqld = 4
C eval sqln = 4
C eval sqldabc = 336
C*
C* setup the first entry in the sqlda
C*
C eval sqltype = 453
C eval sqllen = 3
C eval sql_var(1) = sqlvar
C*
C* setup the second entry in the sqlda
C*
C eval sqltype = 453
C eval sqllen = 29
C eval sql_var(2) = sqlvar
...
C*
C* setup the fourth entry in the sqlda
C*
C eval sqltype = 453
C eval sqllen = 3
C eval sql_var(4) = sqlvar
...
C/EXEC SQL
C+ DECLARE C1 CURSOR FOR
C+ SELECT *
C+ FROM CORPDATA.DEPARTMENT
C/END-EXEC
...
C/EXEC SQL
C+ FETCH C1 FOR 10 ROWS
C+ USING DESCRIPTOR :SQLDA
C+ INTO :DEPARTMENT:IND_ARRAY
C/END-EXEC
Chapter 17. Coding SQL Statements in REXX Applications
REXX procedures do not have to be preprocessed. At runtime, the REXX
interpreter passes statements that it does not understand to the current active
command environment for processing. The command environment can be changed
to *EXECSQL to send all unknown statements to the database manager in two
ways:
1. CMDENV parameter on the STRREXPRC CL command
2. address positional parameter on the ADDRESS REXX command
The SQL/REXX interface uses the SQLCA in a manner consistent with the typical
SQL usage. However, the SQL/REXX interface maintains the fields of the SQLCA in
separate variables rather than in a contiguous data area. The variables that the
SQL/REXX interface maintains for the SQLCA are defined as follows:
SQLCODE The primary SQL return code.
SQLERRMC Error and warning message tokens.
SQLERRP Product code and, if there is an error, the name of
the module that returned the error.
SQLERRD.n Six variables (n is a number between 1 and 6)
containing diagnostic information.
SQLWARN.n Eleven variables (n is a number between 0 and 10)
containing warning flags.
SQLSTATE The alternate SQL return code.
Unlike the SQLCA, more than one SQLDA can be in a procedure, and an SQLDA
can have any valid name. Each SQLDA consists of a set of REXX variables with a
common stem, where the name of the stem is the descriptor-name from the
associated SQL statement.
The SQL/REXX interface uses the SQLDA in a manner consistent with the typical
SQL usage. However, the SQL/REXX interface maintains the fields of the SQLDA in
separate variables rather than in a contiguous data area. See the DB2 UDB for
AS/400 SQL Reference book for more information on the SQLDA.
Each SQL statement in a REXX procedure must begin with EXECSQL (in any
combination of uppercase and lowercase letters), followed by either:
v The SQL statement enclosed in single or double quotes, or
v A REXX variable containing the statement. Note that a colon must not precede a
REXX variable when it contains an SQL statement.
For example:
EXECSQL "COMMIT"
is equivalent to:
rexxvar = "COMMIT"
EXECSQL rexxvar
The command follows normal REXX rules. For example, it can optionally be
followed by a semicolon (;) to allow a single line to contain more than one REXX
statement. REXX also permits command names to be included within single quotes,
for example:
'EXECSQL COMMIT'
The following SQL statements are not supported by the SQL/REXX interface:
Comments
Neither SQL comments (--) nor REXX comments are allowed in strings representing
SQL statements.
Including Code
Unlike the other host languages, support is not provided for including externally
defined statements.
Names
Any valid REXX name not ending in a period (.) can be used for a host variable.
The name must be 64 characters or less.
Variable names should not begin with the characters 'SQL', 'RDI', 'DSN', 'RXSQL',
or 'QRW'.
Nulls
Although the term null is used in both REXX and SQL, the term has different
meanings in the two languages. REXX has a null string (a string of length zero) and
a null clause (a clause consisting only of blanks and comments). The SQL null
value is a special value that is distinct from all non-null values and denotes the
absence of a (non-null) value.
Statement Labels
REXX command statements can be labeled as usual.
This can be used to detect errors and warnings issued by either the database
manager or by the SQL/REXX interface.
v The SIGNAL ON ERROR and SIGNAL ON FAILURE facilities can be used to
detect errors (negative RC values), but not warnings.
These rules define either numeric, character, or graphic values. A numeric value
can be used as input to a numeric column of any type. A character value can be
used as input to a character column of any type, or to a date, time, or timestamp
column. A graphic value can be used as input to a graphic column of any type.
Table 34. Determining Data Types of Host Variables in REXX
Host Variable Contents                        Assumed Data Type    SQL Type Code   SQL Type Description
Undefined variable (a variable for which      (none)               None            Data that is not valid
a value has not been assigned)                                                     was detected.
A string with leading and trailing            Varying-length       448/449         VARCHAR(n)
apostrophes (') or quotation marks ("),       character string
which has length n after removing the
two delimiters
13. The byte immediately following the leading apostrophe is a X'0E' shift-out, and the byte immediately
preceding the trailing apostrophe is a X'0F' shift-in.
For example, the assignment:
stringvar = 100
causes REXX to set the variable stringvar to the string of characters 100 (without
the apostrophes). This is evaluated by the SQL/REXX interface as the number 100,
and it is passed to SQL as such.
The assignment:
stringvar = "'100'"
causes REXX to set the variable stringvar to the string of characters '100' (with the
apostrophes). This is evaluated by the SQL/REXX interface as the string 100, and it
is passed to SQL as such.
Unlike other languages, a valid value must be specified in the host variable even if
its associated indicator variable contains a negative value.
For more information on indicator variables see the DB2 UDB for AS/400 SQL
Reference book.
14. SQL statements in a REXX procedure are not precompiled and compiled.
To get complete diagnostic information when you precompile, specify either of the
following:
v OPTION(*SOURCE *XREF) for CRTSQLxxx (where xxx=CBL, PLI, or RPG)
v OPTION(*XREF) OUTPUT(*PRINT) for CRTSQLxxx (where xxx=CI, CPPI, CBLI,
or RPGI) or for CVTSQLCPP
The SQL precompiler assumes that the host language statements are syntactically
correct. If the host language statements are not syntactically correct, the
precompiler may not correctly identify SQL statements and host variable
declarations. There are limits on the forms of source statements that can be passed
through the precompiler. Literals and comments that are not accepted by the
application language compiler can interfere with the precompiler source scanning
process and cause errors.
You can use the SQL INCLUDE statement to get secondary input from the file that
is specified by the INCFILE parameter of the CRTSQLxxx 15 and CVTSQLCPP
commands. The SQL INCLUDE statement causes input to be read from the specified
member until it reaches the end of the member. The included member may not
contain other precompiler INCLUDE statements, but can contain both application
program and SQL statements.
Another preprocessor may process source statements before the SQL precompiler.
However, any preprocessor run before the SQL precompile must be able to pass
through SQL statements.
If mixed DBCS constants are specified in the application program source, the
source file must be a mixed CCSID.
You can specify many of the precompiler options in the input source member by
using the SQL SET OPTION statement. See the DB2 UDB for AS/400 SQL
Reference book for the SET OPTION syntax.
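For example, in an ILE RPG source member a SET OPTION statement might fix the commitment control and cursor-close options at precompile time (a sketch; the option values shown are only examples):
*...1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
C/EXEC SQL
C+ SET OPTION COMMIT = *CHG, CLOSQLCSR = *ENDMOD
C/END-EXEC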
15. The xxx in this command refers to the host language indicators: CBL for the COBOL for AS/400 language, CBLI for the ILE
COBOL for AS/400 language, PLI for the AS/400 PL/I language, CI for the ILE C for AS/400 language, RPG for the RPG for
AS/400 language, RPGI for the ILE RPG for AS/400 language, CPPI for the ILE C++/400 language.
The SQL precompiler will process SQL statements using the source CCSID. This
affects variant characters the most. For example, the not sign (¬) is located at 'BA'X
in CCSID 500. Prior to Version 2 Release 1.1, SQL looked for the not sign (¬) in the
location '5F'X in CCSID 37. This means that if the CCSID of your source file is 500,
SQL expects the not sign (¬) to be located at 'BA'X.
If the source file CCSID is 65535, SQL processes variant characters as if they had
a CCSID of 37. This means that SQL looks for the not sign (¬) at '5F'X.
Listing
The output listing is sent to the printer file that is specified by the PRTFILE
parameter of the CRTSQLxxx or CVTSQLCPP command. The following items are
written to the printer file:
v Precompiler options
Options specified in the CRTSQLxxx or CVTSQLCPP command.
v Precompiler source
This output supplies precompiler source statements with the record numbers that
are assigned by the precompiler, if the listing option is in effect.
v Precompiler cross-reference
If *XREF was specified in the OPTION parameter, this output supplies a
cross-reference listing. The listing shows the precompiler record numbers of SQL
statements that contain the referred to host names and column names.
v Precompiler diagnostics
This output supplies diagnostic messages, showing the precompiler record
numbers of statements in error.
The output to the printer file will use a CCSID value of 65535. The data will not
be converted when it is written to the printer file.
Chapter 18. Preparing and Running a Program with SQL Statements 335
The SQL precompiler uses the CRTSRCPF command to create the output source
file. If the defaults for this command have changed, then the results may be
unpredictable. If the source file is created by the user, not the SQL precompiler, the
file’s attributes may be different as well. It is recommended that the user allow SQL
to create the output source file. Once it has been created by SQL, it can be reused
on later precompiles.
5769ST1 V4R4M0 990521 Create SQL COBOL Program CBLTEST1 04/01/98 11:14:21 Page 1
Source type...............COBOL
Program name..............CORPDATA/CBLTEST1
Source file...............CORPDATA/SRC
Member....................CBLTEST1
To source file............QTEMP/QSQLTEMP
1
Options...................*SRC *XREF *SQL
Target release............V4R4M0
INCLUDE file..............*LIBL/*SRCFILE
Commit....................*CHG
Allow copy of data........*YES
Close SQL cursor..........*ENDPGM
Allow blocking............*READ
Delay PREPARE.............*NO
Generation level..........10
Printer file..............*LIBL/QSYSPRT
Date format...............*JOB
Date separator............*JOB
Time format...............*HMS
Time separator ...........*JOB
Replace...................*YES
Relational database.......*LOCAL
User .....................*CURRENT
RDB connect method........*DUW
Default Collection........*NONE
Package name..............*PGMLIB/*PGM
Dynamic User Profile......*USER
User Profile..............*NAMING
Sort Sequence.............*JOB
Language ID...............*JOB
IBM SQL flagging..........*NOFLAG
ANS flagging..............*NONE
Text......................*SRCMBRTXT
Source file CCSID.........65535
Job CCSID.................65535
2
Source member changed on 04/01/98 10:16:44
1
A list of the options you specified when the SQL precompiler was called.
2
The date the source member was last changed.
1
Record number assigned by the precompiler when it reads the source record. Record numbers are
used to identify the source record in error messages and SQL run-time processing.
2
Sequence number taken from the source record. The sequence number is the number seen when
you use the source entry utility (SEU) to edit the source member.
3
Date when the source record was last changed. If Last is blank, it indicates that the record has not
been changed since it was created.
5769ST1 V4R4M0 990521 Create SQL COBOL Program CBLTEST1 04/01/98 11:14:21 Page 3
Record *...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 SEQNBR Last change
64 *************************************************************** 6400
65 * Fetch all result rows * 6500
66 *************************************************************** 6600
67 PERFORM A010-FETCH-PROCEDURE THROUGH A010-FETCH-EXIT 6700
68 UNTIL SQLCODE IS = 100. 6800
69 *************************************************************** 6900
70 * Close cursor * 7000
71 *************************************************************** 7100
72 EXEC SQL 7200
73 CLOSE CURS 7300
74 END-EXEC. 7400
75 CLOSE OUTFILE. 7500
76 STOP RUN. 7600
77 *************************************************************** 7700
78 * Fetch a row and move the information to the output record. * 7800
79 *************************************************************** 7900
80 A010-FETCH-PROCEDURE. 8000
81 MOVE SPACES TO REC-1. 8100
82 EXEC SQL 8200
83 FETCH CURS INTO :AVG-RECORD 8300
84 END-EXEC. 8400
85 IF SQLCODE IS = 0 8500
86 MOVE WORKDEPT TO DEPT-NO 8600
87 MOVE AVG-SALARY TO AVERAGE-SALARY 8700
88 MOVE AVG-EDUC TO AVERAGE-EDUCATION-LEVEL 8800
89 WRITE REC-1 AFTER ADVANCING 1 LINE. 8900
90 A010-FETCH-EXIT. 9000
91 EXIT. 9100
92 *************************************************************** 9200
93 * An SQL error occurred. Move the error number to the error * 9300
94 * record and stop running. * 9400
95 *************************************************************** 9500
96 B000-SQL-ERROR. 9600
97 MOVE SPACES TO ERROR-RECORD. 9700
98 MOVE SQLCODE TO ERROR-CODE. 9800
99 MOVE "AN SQL ERROR HAS OCCURRED" TO ERROR-MESSAGE. 9900
100 WRITE ERROR-RECORD AFTER ADVANCING 1 LINE. 10000
101 CLOSE OUTFILE. 10100
102 STOP RUN. 10200
* * * * * E N D O F S O U R C E * * * * *
1
Data names are the symbolic names used in source statements.
2
The define column specifies the line number at which the name is defined. The line number is
generated by the SQL precompiler. **** means that the object was not defined or the precompiler
did not recognize the declarations.
3
The reference column contains two types of information:
v What the symbolic name is defined as 4
v The line numbers where the symbolic name occurs 5
If the symbolic name refers to a valid host variable, the data-type 6 or
data-structure 7 is also noted.
5769ST1 V4R4M0 990521 Create SQL COBOL Program CBLTEST1 04/01/98 11:14:21 Page 5
CROSS REFERENCE
SEX 55 CHARACTER(1) COLUMN IN CORPDATA.EMPLOYEE
WORKDEPT 33 CHARACTER(3) IN AVG-RECORD
WORKDEPT **** COLUMN
54 56
WORKDEPT 55 CHARACTER(3) COLUMN IN CORPDATA.EMPLOYEE
No errors found in source
102 Source records processed
* * * * * E N D O F L I S T I N G * * * * *
Non-ILE Precompiler Commands
DB2 UDB Query Manager and SQL Development Kit includes non-ILE precompiler
commands for the following host languages: CRTSQLCBL (for COBOL for AS/400),
CRTSQLPLI (for AS/400 PL/I), and CRTSQLRPG (for RPG III, which is part of RPG
for AS/400). Some options only apply to certain languages. For example, the
options *APOST and *QUOTE are unique to COBOL. They are not included in the
commands for the other languages. Refer to “Appendix D. DB2 UDB for AS/400 CL
Command Descriptions” on page 645 for more information.
You can interrupt the call to the host language compiler by specifying *NOGEN on
the OPTION parameter of the precompiler command. *NOGEN specifies that the
host language compiler will not be called. Using the object name in the CRTSQLxxx
command as the member name, the precompiler creates the source member in the
output source file (specified by the TOSRCFILE parameter on the CRTSQLxxx
command). You can now explicitly call the host language compilers, specify the
source member in the output source file, and change the defaults. If the precompile
and compile were done as separate steps, the CRTSQLPKG command can be
used to create the SQL package for a distributed program.
v For COBOL, the *QUOTE or *APOST is passed on the CRTBNDCBL or the
CRTCBLMOD commands.
v For RPG and COBOL, the SRTSEQ and LANGID parameters from the
CRTSQLxxx command are specified on the CRTxxxMOD and CRTBNDxxx
commands.
| v For COBOL, CVTOPT(*VARCHAR *DATETIME *PICGGRAPHIC *FLOAT *DATE
| *TIME *TIMESTAMP) is always specified on the CRTCBLMOD and CRTBNDCBL
| commands.
v For RPG, if OPTION(*CVTDT) is specified, then CVTOPT(*DATETIME) is
specified on the CRTRPGMOD and CRTBNDRPG commands.
You can interrupt the call to the host language compiler by specifying *NOGEN on
the OPTION parameter of the precompiler command. *NOGEN specifies that the
host language compiler is not called. Using the specified program name in the
CRTSQLxxx command as the member name, the precompiler creates the source
member in the output source file (TOSRCFILE parameter). You can now explicitly
call the host language compilers, specify the source member in the output source
file, and change the defaults. If the precompile and compile were done as separate
steps, the CRTSQLPKG command can be used to create the SQL package for a
distributed program.
If the program or service program is created later, the USRPRF parameter may not
be set correctly on the CRTBNDxxx, Create Program (CRTPGM), or Create Service
Program (CRTSRVPGM) command. The SQL program runs predictably only after
the USRPRF parameter is corrected. If system naming is used, then the USRPRF
parameter must be set to *USER. If SQL naming is used, then the USRPRF
parameter must be set to *OWNER.
This command copies myapp.sqx (your source) to the AS/400 into the
qsys.lib/mylib.lib/myfile.file/myapp.mbr directory. This is the same as the AS/400
file system MYLIB/MYFILE (MYAPP) member.
Alternately, you can leave the source on the AS/400 and run the compiler
against it there.
6. Run the C++ compiler and create the final module or program. If the output
source member is still on the AS/400:
iccas /c x:\qsys.lib\mylib.lib\mytosrcfile.file\myapp.mbr
Note that the program must be created on the AS/400 where the precompile
was run since there is some additional SQL information that was created by the
precompiler that is needed for the final executable object.
When the SQL precompiler does not recognize host variables, try compiling the
source. The compiler will not recognize the EXEC SQL statements; ignore these
errors. Verify that the compiler interprets the host variable declaration as defined by
the SQL precompiler for that language.
During a COBOL Compile
If EXEC SQL starts before column 12, the SQL precompiler will not recognize the
statement as an SQL statement. Consequently, it will be passed as is to the
compiler.
For more information, see the specific programming examples in Chapter 12.
Coding SQL Statements in C and C++ Applications, through Chapter 17. Coding
SQL Statements in REXX Applications.
Binding an Application
Before you can run your application program, a relationship between the program
and any specified tables and views must be established. This process is called
binding. The result of binding is an access plan.
The access plan is a control structure that describes the actions necessary to
satisfy each SQL request. An access plan contains information about the program
and about the data the program intends to use.
For a nondistributed SQL program, the access plan is stored in the program. For a
distributed SQL program (where the RDB parameter was specified on the
CRTSQLxxx or CVTSQLCPP commands), the access plan is stored in the SQL
package at the specified relational database.
SQL automatically attempts to bind and create access plans when the program
object is created. For non-ILE compiles, this occurs as the result of a successful
CRTxxxPGM. For ILE compiles, this occurs as the result of a successful
CRTBNDxxx, CRTPGM, or CRTSRVPGM command. If DB2 UDB for AS/400
detects at run time that an access plan is not valid (for example, the referenced
tables are in a different library) or detects that changes have occurred to the
database that may improve performance (for example, the addition of indexes), a
new access plan is automatically created. Binding does three things:
1. It revalidates the SQL statements using the description in the database.
During the bind process, the SQL statements are checked for valid table, view,
and column names. If a specified table or view does not exist at the time of the
precompile or compile, the validation is done at run time. If the table or view
does not exist at run time, a negative SQLCODE is returned.
2. It selects the indexes needed to access the data your program wants to
process. When building an access plan, the system considers table sizes and
other factors. It considers all indexes available to access the data and decides
which ones (if any) to use when selecting a path to the data.
3. It attempts to build access plans. If all the SQL statements are valid, the bind
process then builds and stores access plans in the program.
If the characteristics of a table or view your program accesses have changed, the
access plan may no longer be valid. When you attempt to run a program that
contains an access plan that is not valid, the system automatically attempts to
rebuild the access plan. If the access plan cannot be rebuilt, a negative SQLCODE
is returned.
Program References
All collections, tables, views, SQL packages, and indexes referenced in SQL
statements in an SQL program are placed in the object information repository (OIR)
of the library when the program is created.
The Print SQL Information (PRTSQLINF) command can also be used to determine
some of the options that were specified on the SQL precompile.
Chapter 18. Preparing and Running a Program with SQL Statements 345
Running a Program with Embedded SQL
Running a host language program with embedded SQL statements, after the
precompile and compile have been successfully done, is the same as running any
host program. Type:
CALL pgm-name
on the system command line. For more information on running programs, see the
CL Programming book.
Note: After installing a new release, users may encounter message CPF2218 in
QHST when using any Structured Query Language (SQL) program if the user does
not have *CHANGE authority to the program. Once a user with *CHANGE
authority calls the program, the access plan is updated and the message is no
longer issued.
Override Considerations
You can use overrides (specified by the OVRDBF command) to direct a reference
to a different table or view or to change certain operational characteristics of the
program or SQL Package. The following parameters are processed if an override is
specified:
TOFILE
MBR
SEQONLY
INHWRT
WAITRCD
All other override parameters are ignored. Overrides of statements in SQL
packages are accomplished by doing both of the following:
1. Specifying the OVRSCOPE(*JOB) parameter on the OVRDBF command
2. Sending the command to the application server by using the Submit Remote
Command (SBMRMTCMD) command
To override tables and views that are created with long names, you can create an
override using the system name that is associated with the table or view. When the
long name is specified in an SQL statement, the override is found using the
corresponding system name.
An alias is actually created as a DDM file. You can create an override that refers to
an alias name (DDM file). In this case, an SQL statement that refers to the file that
has the override actually uses the file to which the alias refers.
For more information on overrides, see the DB2 UDB for AS/400 Database
Programming book, and the Data Management book.
Chapter 19. Using Interactive SQL
This chapter describes how to use interactive SQL to run SQL statements and use
the prompt function. Overview information and tips on using interactive SQL are
provided. If you want to learn how to use SQL, you should see Chapter 2. Getting
Started with SQL. Special considerations for using interactive SQL with a remote
connection are covered in “Accessing Remote Databases with Interactive SQL” on
page 359.
You can see help on a message by positioning the cursor on the message and
pressing F1=Help.
The Enter SQL Statements display appears. This is the main interactive SQL
display. From this display, you can enter SQL statements and use:
v F4=prompt
v F13=Session services
v F16=Select collections
v F17=Select tables
v F18=Select columns
Enter SQL Statements

Bottom
F3=Exit F4=Prompt F6=Insert line F9=Retrieve F10=Copy line
F12=Cancel F13=Services F24=More keys
Note: If you are using the system naming convention, the names in parentheses
appear instead of the names shown above.
Interactive SQL supplies a unique session-ID consisting of your user ID and the
current work station ID. This session-ID concept allows multiple users with the
same user ID to use interactive SQL from more than one work station at the same
time. Also, more than one interactive SQL session can be run from the same work
station at the same time from the same user ID.
If an SQL session exists and is being re-entered, any parameters specified on the
STRSQL command are ignored. The parameters from the existing SQL session are
used.
In the statement entry function, you type or prompt for the entire SQL statement
and then submit it for processing by pressing the Enter key.
Typing Statements
The statement you type on the command line can be one or more lines long. You
cannot type comments for the SQL statement in interactive SQL. When the
statement has been processed, the statement and the resulting message are
moved upward on the display. You can then enter another statement.
If a statement is recognized by SQL but contains a syntax error, the statement and
the resulting text message (syntax error) are moved upward on the display. In the
input area, a copy of the statement is shown with the cursor positioned at the
syntax error. You can place the cursor on the message and press F1=Help for more
information about the error.
You can page through previous statements, commands, and messages. Press
F9=Retrieve with your cursor on a previous statement to place a copy of that
statement in the input area. If you need more room to type an SQL statement, page
down on the display.
Prompting
The prompt function helps you supply the necessary information for the syntax of
the statement you want to use. The prompt function can be used in any of the three
statement processing modes: *RUN, *VLD, and *SYN.
v Press F4=Prompt before typing anything on the Enter SQL Statements display.
You are shown a list of statements. The list of statements varies and depends on
the current interactive SQL statement processing mode. For syntax check mode
with a language other than *NONE, the list includes all SQL statements. For run
and validate modes, only statements that can be run in interactive SQL are
shown. You can select the number of the statement you want to use. The system
prompts you for the statement you selected.
If you press F4=Prompt without typing anything, the following display appears:
Select SQL Statement
1. ALTER TABLE
2. CALL
3. COMMENT ON
4. COMMIT
5. CONNECT
6. CREATE ALIAS
7. CREATE COLLECTION
8. CREATE INDEX
9. CREATE PROCEDURE
10. CREATE TABLE
11. CREATE VIEW
12. DELETE
13. DISCONNECT
14. DROP ALIAS
More...
Selection
__
F3=Exit F12=Cancel
If you press F21=Display Statement on a prompt display, the prompter displays the
formatted SQL statement as it was filled in to that point.
When Enter is pressed within prompting, the statement that was built through the
prompt screens is inserted into the session. If the statement processing mode is
*RUN, the statement is run. The prompter remains in control if an error is
encountered.
Subqueries
Subqueries can be selected on any display that has a WHERE or HAVING clause.
To see the subquery display, press F9=Specify subquery when the cursor is on a
WHERE or HAVING input line. A display appears that allows you to type in
subselect information. If the cursor is within the parentheses of the subquery when
F9 is pressed, the subquery information is filled in on the next display. If the cursor
is outside the parentheses of the subquery, the next display is blank. For more
information on subqueries, see “Using Subqueries” on page 84.
To enter a column name longer than 18 characters, press F20=Display entire name.
A window with enough space for a 30-character name is displayed.
The editing keys, F6=Insert line, F10=Copy line, and F14=Delete line, can be used
to add and delete entries in the column definition list.
As an example, suppose a WHERE condition containing a graphic string were
entered across two lines. Shift characters appear at the beginning and end of the
string sections on each of the two lines.
When Enter is pressed, the character string is put together, removing the extra shift
characters. The statement would look like this on the Enter SQL Statements
display:
SELECT * FROM TABLE1 WHERE COL1 = '<AABBCCDDEEFFGGHHIIJJKKLLMMNNOOPPQQRRSS>'
On a list, you can select one or more items, numerically specifying the order in
which you want them to appear in the statement. When the list function is exited,
the selections you made are inserted at the position of the cursor on the display
you came from.
Always select the list you are primarily interested in. For example, if you want a list
of columns, but you believe that the columns you want are in a table not currently
selected, press F18=Select columns. Then, from the column list, press F17 to
change the table. If the table list were selected first, the table name would be
inserted into your statement. You would not have a choice for selecting columns.
You can request a list at any time while typing an SQL statement on the Enter SQL
Statements display. The selections you make from the lists are inserted on the
Enter SQL Statements display. They are inserted where the cursor is located in the
numeric order that you specified on the list display. Although the selected list
information is added for you, you must type the keywords for the statement.
The list function tries to provide qualifications that are necessary for the selected
columns, tables, and SQL packages. However, sometimes the list function cannot
determine the intent of the SQL statement. You need to review the SQL statement
and verify that the selected columns, tables, and SQL packages are properly
qualified.
Note: The lists shown in this example do not exist on your AS/400 system. They
are used as an example only.
4. Press F17=Select tables to obtain a list of tables, because you want the table
name to follow FROM.
Instead of a list of tables appearing as you expected, a list of collections
appears (the Select and Sequence Collections display). You have just entered
the SQL session and have not selected a collection to work with.
5. Type a 1 in the Seq column next to YOURCOLL2 collection.
Select and Sequence Collections
6. Press Enter.
The Select and Sequence Tables display appears, showing the tables existing
in the YOURCOLL2 collection.
7. Type a 1 in the Seq column next to PEOPLE table.
Select and Sequence Tables
8. Press Enter.
Once you have used the list function, the values you selected remain in effect until
you change them or until you change the list of collections on the Change Session
Attributes display.
Option 2 (Print current session) accesses the Change Printer display, which lets you
print the current session immediately and then continue working. You are prompted
for printer information. All the SQL statements you entered and all the messages
displayed are printed just as they appear on the Enter SQL Statements display.
Option 3 (Remove all entries from current session) lets you remove all the SQL
statements and messages from the Enter SQL Statements display and the session
history. You are prompted to ensure that you really want to delete the information.
Option 4 (Save session in source file) accesses the Change Source File display,
which lets you save the session in a source file. You are prompted for the source
file name. This function lets you embed the source file into a host language
program by using the source entry utility (SEU).
For example, you saved a session on workstation 1 and saved another session on
workstation 2 and you are currently working at workstation 1. Interactive SQL will
first attempt to resume the session saved for workstation 1. If that session is
currently in use, interactive SQL will then attempt to resume the session that was
saved for workstation 2. If that session is also in use, then the system will create a
second session for workstation 1.
However, suppose you are working at workstation 3 and want to use the interactive
SQL session associated with workstation 2. You may first need to delete the
session from workstation 1 by using option 2 (Exit without saving session) on the
Exit Interactive SQL display.
Exit Interactive SQL display.
If you choose to delete the old session and continue with the new session, the
parameters you specified when you entered STRSQL are used. If you choose to
recover the old session, the parameters you specified when you entered STRSQL
are ignored, and the parameters from the old session are used.
When you are connecting to a non-DB2 UDB for AS/400 application server, some
session attributes are changed to attributes that are supported by that application
server. The following table shows the attributes that change.
Table 35. Values Table

Session Attribute     Original Value                  New Value
Date Format           *YMD                            *ISO
                      *DMY                            *EUR
                      *MDY                            *USA
                      *JUL                            *USA
Time Format           *HMS with a : separator         *JIS
                      *HMS with any other separator   *EUR
Commitment Control    *CHG, *CS, *NONE, *ALL          Repeatable Read
Naming Convention     *SYS                            *SQL
Allow Copy Data       *NO, *YES                       *OPTIMIZE
Data Refresh          *ALWAYS                         *FORWARD
Decimal Point         *SYSVAL                         *PERIOD
Sort Sequence         Any value other than *HEX       *HEX
After the connection is completed, a message is sent stating that the session
attributes have been changed. The changed session attributes can be displayed by
using the session services display. While interactive SQL is running, no other
connection can be established for the default activation group.
Lists of collections and tables are available when you are connected to the local
relational database. Lists of columns are available only when you are connected to
a relational database manager that supports the DESCRIBE TABLE statement.
When you exit interactive SQL with connections that have pending changes or
connections that use protected conversations, the connections remain. If you do not
perform additional work over the connections, the connections are ended during the
next COMMIT or ROLLBACK operation. You can also end the connections by doing
a RELEASE ALL and a COMMIT before exiting interactive SQL.
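The RELEASE ALL and COMMIT sequence described above can be entered as two
statements from the Enter SQL Statements display:

```sql
RELEASE ALL;   -- mark all connections for release
COMMIT;        -- end the released connections at the commit point
```

After the COMMIT completes, no connections with pending changes remain when
interactive SQL is exited.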
Using interactive SQL for remote access to non-DB2 UDB for AS/400 application
servers can require some setup. For more information, see the Distributed
Database Programming book.
The SQL Statement Processor
The SQL statement processor, the Run SQL Statements (RUNSQLSTM) command,
allows SQL statements to be executed from a source member. The statements in
the source member can be run repeatedly, or changed, without compiling the
source. This makes the setup of a database environment easier. The statements
that can be used with the SQL statement processor are:
v ALTER TABLE
v CALL
v COMMENT ON
v COMMIT
v CREATE ALIAS
v CREATE COLLECTION
v CREATE DISTINCT TYPE
v CREATE FUNCTION
v CREATE INDEX
v CREATE PROCEDURE
v CREATE SCHEMA
v CREATE TABLE
v CREATE VIEW
v DELETE
v DROP
v GRANT (Package Privileges)
v GRANT (Procedure Privileges)
v GRANT (Table Privileges)
v INSERT
v LABEL ON
v LOCK TABLE
v RENAME
v REVOKE (Package Privileges)
v REVOKE (Procedure Privileges)
v REVOKE (Table Privileges)
v ROLLBACK
v SET PATH
v SET TRANSACTION
v UPDATE
In the source member, statements end with a semicolon and do not begin with
EXEC SQL. If the record length of the source member is longer than 80, only the
first 80 characters will be read. Comments in the source member can be either line
comments or block comments. Line comments begin with a double hyphen (--) and
end at the end of the line. Block comments start with /* and can continue across
many lines until the next */ is reached. Block comments can be nested. Only SQL
statements and comments are allowed in the source file. The output listing and the
resulting messages for the SQL statements are sent to a print file. The default print
file is QSYSPRT.
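As a sketch of these rules, a source member for the SQL statement processor
might look like the following. The PAYROLL collection and EMPLOYEE table are
hypothetical names used only for illustration, and system (*SYS) naming is
assumed:

```sql
-- Line comment: each statement ends with a semicolon
-- and is not preceded by EXEC SQL.
CREATE COLLECTION PAYROLL;

/* Block comment: create a table in the
   new collection. */
CREATE TABLE PAYROLL/EMPLOYEE
  (EMPNO    CHAR(6)     NOT NULL,
   LASTNAME VARCHAR(20) NOT NULL,
   SALARY   DECIMAL(9,2));

INSERT INTO PAYROLL/EMPLOYEE
  VALUES('000010', 'HAAS', 52750.00);
```

Running RUNSQLSTM against this member executes the three statements in order
and writes the listing and messages to the print file.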
The SET TRANSACTION statement can be used within the source member to
override the level of commitment control specified on the RUNSQLSTM command.
Note: The job must be at a unit of work boundary to use the SQL statement
processor with commitment control.
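For example, if the RUNSQLSTM command were run with COMMIT(*NONE), a SET
TRANSACTION statement at the start of the source member could raise the level
for the statements that follow. This is a sketch only; the table name is
hypothetical:

```sql
-- Override the commitment control level specified on the
-- RUNSQLSTM command for the statements that follow.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

UPDATE PAYROLL/EMPLOYEE
  SET SALARY = SALARY * 1.03;
COMMIT;
```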
The second section of the CREATE SCHEMA statement can contain from zero to
any number of the following statements:
v COMMENT ON
v CREATE DISTINCT TYPE
v CREATE TABLE
These statements follow directly after the first section of the statement. The
statements and sections are not separated by semicolons. If other SQL statements
follow this schema definition, the last statement in the schema must be ended by a
semicolon.
All objects created or referenced in the second part of the schema statement must
be in the collection that was created for the schema. All unqualified references are
implicitly qualified by the collection that was created. All qualified references must
be qualified by the created collection.
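A minimal sketch of such a two-section CREATE SCHEMA statement follows;
DEPTSCHEMA and PROJECT are hypothetical names. Note that no semicolons
separate the sections, and the unqualified table name is implicitly qualified by
the created collection:

```sql
CREATE SCHEMA DEPTSCHEMA

  CREATE TABLE PROJECT            -- implicitly qualified by DEPTSCHEMA
    (PROJNO   CHAR(6) NOT NULL,
     PROJNAME VARCHAR(24))

  COMMENT ON TABLE DEPTSCHEMA/PROJECT IS
    'Projects for the department';
```

The single semicolon at the end closes the entire schema statement.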
5769ST1 V4R4M0 990521 Run SQL Statements SCHEMA 04/01/98 15:35:18 Page 1
Source file...............CORPDATA/SRC
Member....................SCHEMA
Commit....................*NONE
Naming....................*SYS
Generation level..........10
Date format...............*JOB
Date separator............*JOB
Time format...............*HMS
Time separator ...........*JOB
Default Collection........*NONE
IBM SQL flagging..........*NOFLAG
ANS flagging..............*NONE
Decimal point.............*JOB
Sort Sequence.............*JOB
Language ID...............*JOB
Printer file..............*LIBL/QSYSPRT
Source file CCSID.........65535
Job CCSID.................0
Statement processing......*RUN
Allow copy of data........*OPTIMIZE
Allow blocking............*READ
Source member changed on 04/01/98 11:54:10
5769ST1 V4R4M0 990521 Run SQL Statements SCHEMA 04/01/98 15:35:18 Page 3
Record *...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 SEQNBR Last change
MSG ID SEV RECORD TEXT
SQL7953 0 1 Position 1 Drop of DEPT in QSYS complete.
SQL7953 0 3 Position 3 Drop of MANAGER in QSYS complete.
SQL7952 0 5 Position 3 Collection DEPT created.
SQL7950 0 6 Position 8 Table EMP created in collection DEPT.
SQL7954 0 8 Position 8 Index EMPIND created on table EMP in DEPT.
SQL7966 0 10 Position 8 GRANT of authority to EMP in DEPT completed.
SQL7956 0 10 Position 40 1 rows inserted in EMP in DEPT.
SQL7952 0 13 Position 28 Collection
MANAGER created.
SQL7950 0 19 Position 9 Table EMP_SALARY created in collection
MANAGER.
SQL7951 0 21 Position 9 View LEVEL created in collection MANAGER.
SQL7954 0 23 Position 9 Index SALARYIND created on table EMP_SALARY
in MANAGER.
SQL7966 0 25 Position 9 GRANT of authority to LEVEL in MANAGER
completed.
SQL7966 0 25 Position 37 GRANT of authority to EMP_SALARY in MANAGER
completed.
Message Summary
Total Info Warning Error Severe Terminal
13 13 0 0 0 0
00 level severity errors found in source
* * * * * E N D O F L I S T I N G * * * * *
Security
All objects on the AS/400 system, including SQL objects, are managed by the
system security function. Authority to SQL objects can be granted through either
the SQL GRANT and REVOKE statements or the CL commands Edit Object Authority
(EDTOBJAUT), Grant Object Authority (GRTOBJAUT), and Revoke Object Authority
(RVKOBJAUT). For more information on system security and the use of the
GRTOBJAUT and RVKOBJAUT commands, see the Security - Reference book.
The SQL GRANT and REVOKE statements operate on SQL packages, SQL
procedures, tables, views, and the individual columns of tables and views.
Furthermore, SQL GRANT and REVOKE statements only grant private and public
authorities. In some cases, it is necessary to use EDTOBJAUT, GRTOBJAUT, and
RVKOBJAUT to authorize users to other objects, such as commands and programs.
For more information on the GRANT and REVOKE statements, see the DB2 UDB
for AS/400 SQL Reference book.
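As a sketch of table- and column-level authorities, using hypothetical names
(table DEPT/EMP, column PHONENO, user profile JONES):

```sql
-- Grant read access to the whole table, but update
-- access to a single column only.
GRANT SELECT, UPDATE (PHONENO) ON DEPT/EMP TO JONES;

-- Later, take back the column-level update authority.
REVOKE UPDATE (PHONENO) ON DEPT/EMP FROM JONES;
```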
The authority checked for SQL statements depends on whether the statement is
static, dynamic, or being run interactively.
For interactive SQL statements, authority is checked against the authority of the
person processing the statement. Adopted authority is not used for interactive SQL
statements.
Authorization ID
The authorization ID identifies a unique user and is a user profile object on the
AS/400 system. Authorization IDs can be created using the system Create User
Profile (CRTUSRPRF) command.
Views
A view can prevent unauthorized users from having access to sensitive data. The
application program can access the data it needs in a table, without having access
to sensitive or restricted data in the table. A view can restrict access to particular
columns by not specifying those columns in the SELECT list (for example,
employee salaries). A view can also restrict access to particular rows in a table by
specifying a WHERE clause (for example, allowing access only to the rows
associated with a particular department number).
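Both restrictions can be combined in one view. In the following sketch (names
are hypothetical), the SALARY column is omitted from the SELECT list and the
WHERE clause limits the rows to one department:

```sql
CREATE VIEW DEPT/EMP_D11 AS
  SELECT EMPNO, LASTNAME, WORKDEPT
    FROM DEPT/EMP
    WHERE WORKDEPT = 'D11';

-- Users are then authorized to the view, not the table.
GRANT SELECT ON DEPT/EMP_D11 TO PUBLIC;
```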
Auditing
DB2 UDB for AS/400 is designed to comply with the U.S. government C2 security
level. A key feature of that level is the ability to audit actions on the system. DB2
UDB for AS/400 uses the audit facilities managed by the system security function.
Auditing can be performed at the object, user, or system level. The system
value QAUDCTL controls whether auditing is performed at the object or user level.
The Change User Audit (CHGUSRAUD) command and Change Object Audit
(CHGOBJAUD) command specify which users and objects are audited. The system
value QAUDLVL controls what types of actions are audited (for example,
authorization failures, creates, deletes, grants, and revokes). For more
information on auditing, see the Security - Reference book.
DB2 UDB for AS/400 can also audit row changes by using the DB2 UDB for AS/400
journal support.
In some cases, entries in the auditing journal will not be in the same order as they
occurred. For example, a job that is running under commitment control deletes a
table, creates a new table with the same name as the one that was deleted, then
does a commit. This will be recorded in the auditing journal as a create followed by
a delete. This is because objects that are created are journaled immediately. An
object that is deleted under commitment control is hidden and not actually deleted
until a commit is done. Once the commit is done, the action is journaled.
Data Integrity
Data integrity protects data from being destroyed or changed by unauthorized
persons, system operation or hardware failures (such as physical damage to a
disk), programming errors, interruptions before a job is completed (such as a power
failure), or interference from running applications at the same time (such as
serialization problems). Data integrity is ensured by the functions described in
the sections that follow. The DB2 UDB for AS/400 Database Programming book and
the Backup and Recovery book contain more information about each of these
functions.
Concurrency
Concurrency is the ability for multiple users to access and change data in the
same table or view at the same time without risk of losing data integrity. This ability
is automatically supplied by the DB2 UDB for AS/400 database manager. Locks are
implicitly acquired on tables and rows to protect concurrent users from changing the
same data at precisely the same time.
Typically, DB2 UDB for AS/400 will acquire locks on rows to ensure integrity.
However, some situations require DB2 UDB for AS/400 to acquire a more exclusive
table level lock instead of row locks. For more information see “Commitment
Control” on page 369.
For example, an update (exclusive) lock on a row currently held by one cursor can
be acquired by another cursor in the same program (or in a DELETE or UPDATE
statement not associated with the cursor). This will prevent a positioned UPDATE or
positioned DELETE statement that references the first cursor until another FETCH
is performed. A read (shared no-update) lock on a row currently held by one cursor
will not prevent another cursor in the same program (or DELETE or UPDATE
statement) from acquiring a lock on the same row.
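The cursor behavior described above can be illustrated with a sketch; the table
and column names are hypothetical, and the DECLARE/UPDATE pair would appear in
an embedded SQL program:

```sql
DECLARE C1 CURSOR FOR
  SELECT EMPNO, SALARY
    FROM DEPT/EMP
    FOR UPDATE OF SALARY;

-- After a FETCH positions C1 on a row, that row's update
-- lock is held. A positioned UPDATE through C1 changes
-- the current row:
UPDATE DEPT/EMP
  SET SALARY = SALARY * 1.05
  WHERE CURRENT OF C1;
```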
Default and user-specifiable lock-wait time-out values are supported. DB2 UDB for
AS/400 creates tables, views, and indexes with the default record wait time (60
seconds) and the default file wait time (*IMMED). This lock wait time is used for
DML statements. You can change these values by using the CL commands Change
Physical File (CHGPF), Change Logical File (CHGLF), and Override Database File
(OVRDBF).
The lock wait time used for all DDL statements and the LOCK TABLE statement, is
the job default wait time (DFTWAIT). You can change this value by using the CL
commands Change Job (CHGJOB) or Change Class (CHGCLS).
In the event that a large record wait time is specified, deadlock detection is
provided. For example, assume one job has an exclusive lock on row 1 and another
job has an exclusive lock on row 2. If the first job attempts to lock row 2, it will wait
because the second job is holding the lock. If the second job then attempts to lock
row 1, DB2 UDB for AS/400 will detect that the two jobs are in a deadlock and an
error will be returned to the second job.
In order to improve performance, DB2 UDB for AS/400 will frequently leave the
open data path (ODP) open (for more information see “Chapter 25. Additional SQL
performance considerations” on page 459). This performance feature also leaves a
lock on tables referenced by the ODP, but does not leave any locks on rows. A lock
left on a table may prevent another job from performing an operation on that table.
In most cases, however, DB2 UDB for AS/400 will detect that other jobs are holding
locks and events will be signalled to those jobs. The event causes DB2 UDB for
AS/400 to close any ODPs (and release the table locks) that are associated with
that table and are currently only open for performance reasons. Note that the lock
wait time out must be large enough for the events to be signalled and the other jobs
to close the ODPs or an error will be returned.
Unless the LOCK TABLE statement is used to acquire table locks, or either
COMMIT(*ALL) or COMMIT(*RR) is used, data which has been read by one job
can be immediately changed by another job. Usually, the data is read at the time
the SQL statement is executed, and therefore it is very current (for example,
during FETCH). In the following cases, however, data is read prior to the execution
of the SQL statement, and therefore the data may not be current (for example,
during OPEN):
v ALWCPYDTA(*OPTIMIZE) was specified and the optimizer determined that
making a copy of the data would perform better than not making a copy.
v Some queries require the database manager to create a temporary result table.
The data in the temporary result table will not reflect changes made after the
cursor was opened. A temporary result table is required when:
– The total length in bytes of storage for the columns specified in an ORDER
BY clause exceeds 2000 bytes.
– ORDER BY and GROUP BY clauses specify different columns or columns in
a different order.
– UNION or DISTINCT clauses are specified.
– ORDER BY or GROUP BY clauses specify columns which are not all from the
same table.
– Joining a logical file defined by the JOINDFT data definition specifications
(DDS) keyword with another file.
– Joining or specifying GROUP BY on a logical file which is based on multiple
database file members.
– The query contains a join in which at least one of the files is a view which
contains a GROUP BY clause.
– The query contains a GROUP BY clause which references a view that
contains a GROUP BY clause.
v A basic subquery is evaluated when the query is opened.
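For instance, a query with UNION falls into the list above, so its result is
materialized in a temporary result table when the cursor is opened. The table
names in this sketch are hypothetical:

```sql
-- Rows changed by other jobs after the OPEN are not reflected
-- in the cursor's result, because UNION forces a temporary
-- result table.
DECLARE C2 CURSOR FOR
  SELECT EMPNO FROM DEPT/EMP
  UNION
  SELECT EMPNO FROM MANAGER/EMP_SALARY;
```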
Journaling
The DB2 UDB for AS/400 journal support supplies an audit trail and forward and
backward recovery. Forward recovery can be used to take an older version of a
table and apply the changes logged on the journal to the table. Backward recovery
can be used to remove changes logged on the journal from the table.
The journal created in the SQL collection is normally the journal used for logging all
changes to SQL tables. You can, however, use the system journal functions to
journal SQL tables to a different journal. This may be necessary if a table in one
collection is a parent to a table in another collection. This is because DB2 UDB for
AS/400 requires that the parent and dependent file in a referential constraint be
journaled to the same journal when updates or deletes are performed to the parent
table.
A user can stop journaling on any table using the journal functions, but doing so
prevents an application from running under commitment control. If journaling is
stopped on a parent table of a referential constraint with a delete rule of NO
ACTION, CASCADE, SET NULL, or SET DEFAULT, all update and delete
operations will be prevented. Otherwise, an application is still able to function if you
have specified COMMIT(*NONE); however, this does not provide the same level of
integrity that journaling and commitment control provide.
Commitment Control
The DB2 UDB for AS/400 commitment control support provides a means to process
a group of database changes (updates, inserts, DDL operations, or deletes) as a
single unit of work (transaction). A commit operation guarantees that the group of
operations is completed. A rollback operation guarantees that the group of
operations is backed out. A commit operation can be issued through several
different interfaces. For example,
v An SQL COMMIT statement
v A CL COMMIT command
v A language commit statement (such as an RPG COMMIT statement)
A rollback operation can be issued through several different interfaces. For
example,
v An SQL ROLLBACK statement
v A CL ROLLBACK command
v A language rollback statement (such as an RPG ROLBK statement)
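The unit-of-work behavior can be sketched with the SQL interface; the table name
is hypothetical:

```sql
-- Two inserts form one unit of work.
INSERT INTO DEPT/EMP (EMPNO, LASTNAME) VALUES('000100', 'SMITH');
INSERT INTO DEPT/EMP (EMPNO, LASTNAME) VALUES('000110', 'JONES');
COMMIT;     -- both rows are made permanent together

DELETE FROM DEPT/EMP WHERE EMPNO = '000110';
ROLLBACK;   -- the delete is backed out
```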
The only SQL statements that cannot be committed or rolled back are:
If commitment control was not already started when either an SQL statement is
executed with an isolation level other than COMMIT(*NONE) or a RELEASE
statement is executed, then DB2 UDB for AS/400 sets up the commitment control
environment by implicitly calling the CL command Start Commitment Control
(STRCMTCTL). DB2 UDB for AS/400 specifies NFYOBJ(*NONE) and
CMTSCOPE(*ACTGRP) parameters along with LCKLVL on the STRCMTCTL
command. The LCKLVL specified is the lock level on the COMMIT parameter on the
CRTSQLxxx, STRSQL, or RUNSQLSTM commands. In REXX, the LCKLVL
specified is the lock level on the SET OPTION statement (see note 16). You may use the
STRCMTCTL command to specify a different CMTSCOPE, NFYOBJ, or LCKLVL. If
you specify CMTSCOPE(*JOB) to start the job level commitment definition, DB2
UDB for AS/400 uses the job level commitment definition for programs in that
activation group.
Note: When using commitment control, the tables referred to in the application
program by Data Manipulation Language statements must be journaled.
For cursors that use column functions, GROUP BY, or HAVING, and are running
under commitment control, a ROLLBACK HOLD has no effect on the cursor’s
position. In addition, the following occurs under commitment control:
v If COMMIT(*CHG) and either ALWBLK(*NO) or ALWBLK(*READ) is specified for
one of these cursors, a message (CPI430B) is sent that says COMMIT(*CHG)
requested but not allowed.
v If COMMIT(*ALL), COMMIT(*RR), or COMMIT(*CS) with the KEEP LOCKS
clause is specified for one of the cursors, DB2 UDB for AS/400 will lock all
referenced tables in shared mode (*SHRNUP). The lock prevents concurrent
application processes from executing any but read-only operations on the named
table. A message (either SQL7902 or CPI430A) is sent that says COMMIT(*ALL),
COMMIT(*RR), or COMMIT(*CS) with the KEEP LOCKS clause is specified for
one of the cursors requested but not allowed. Message SQL0595 may also be
sent.
If COMMIT(*RR) is requested, the tables will be locked until the query is closed. If
the cursor is read only, the table will be locked (*SHRNUP). If the cursor is in
16. Note that the LCKLVL specified is only the default lock level. After commitment control is started, the SET TRANSACTION SQL
statement and the lock level specified on the COMMIT parameter on the CRTSQLxxx, STRSQL, or RUNSQLSTM commands will
override the default lock level.
If an isolation level other than COMMIT(*NONE) was specified and the application
issues a ROLLBACK or the activation group ends normally (and the commitment
definition is not *JOB), all updates, inserts, deletes, and DDL operations made
within the unit of work are backed out. If the application issues a COMMIT or the
activation group ends normally, all updates, inserts, deletes, and DDL operations
made within the unit of work are committed.
DB2 UDB for AS/400 uses locks on rows to keep other jobs from accessing
changed data before a unit of work completes. If COMMIT(*ALL) is specified, read
locks on rows fetched are also used to prevent other jobs from changing data that
was read before a unit of work completes. This will not prevent other jobs from
reading the unchanged records. This ensures that, if the same unit of work rereads
a record, it gets the same result. Read locks do not prevent other jobs from fetching
the same rows.
Commitment control will allow up to 512 files for each journal to be open under
commitment control or closed with pending changes in a unit of work.
COMMIT HOLD and ROLLBACK HOLD allow you to keep the cursor open and
start another unit of work without issuing an OPEN again. The HOLD value is not
available when you are connected to a remote database that is not on an AS/400
system. However, the WITH HOLD option on DECLARE CURSOR may be used to
keep the cursor open after a COMMIT. This type of cursor is supported when you
are connected to a remote database that is not on an AS/400 system. Such a
cursor is closed on a rollback.
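For example, a cursor declared with the WITH HOLD clause stays open across a
COMMIT. The collection, table, and column names below are illustrative only:

DECLARE C1 CURSOR WITH HOLD FOR
    SELECT EMPNO, LASTNAME
      FROM CORPDATA/EMPLOYEE
      WHERE WORKDEPT = 'D11'

After a COMMIT, C1 remains open and can be fetched from in the next unit of
work; a ROLLBACK still closes it.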
Table 36. Record Lock Duration
SQL Statement     COMMIT Parameter    Duration of Record Locks             Lock Type
                  (See note 6)
SELECT INTO       *NONE               No locks
SET variable      *CHG                No locks
VALUES INTO       *CS (See note 8)    Row locked when read and released    READ
                  *ALL (See note 2)   From read until ROLLBACK or COMMIT   READ
FETCH (read-only  *NONE               No locks
cursor)           *CHG                No locks
                  *CS (See note 8)    From read until the next FETCH       READ
                  *ALL (See note 2)   From read until ROLLBACK or COMMIT   READ
Atomic Operations
When running under COMMIT(*CHG), COMMIT(*CS), or COMMIT(*ALL), all
operations are guaranteed to be atomic. That is, they will complete or they will
appear not to have started. This is true regardless of when or how the function was
ended or interrupted (such as power failure, abnormal job end, or job cancel).
The following data definition statements are not atomic because they involve more
than one DB2 UDB for AS/400 database operation:
CREATE ALIAS
CREATE COLLECTION
CREATE DISTINCT TYPE
CREATE FUNCTION
CREATE PROCEDURE
CREATE TABLE
CREATE VIEW
CREATE INDEX
CREATE SCHEMA
DROP ALIAS
DROP COLLECTION
DROP DISTINCT TYPE
DROP FUNCTION
DROP PROCEDURE
DROP SCHEMA
RENAME (See note 1)
Notes:
1. RENAME is atomic only if the name or the system name is changed. When
both are changed, the RENAME is not atomic.
For example, a CREATE TABLE can be interrupted after the DB2 UDB for AS/400
physical file has been created, but before the member has been added. Therefore,
in the case of create statements, if an operation ends abnormally, you may have to
drop the object and then create it again. In the case of a DROP COLLECTION
statement, you may have to drop the collection again or use the CL command
Delete Library (DLTLIB) to remove the remaining parts of the collection.
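For example, if a DROP COLLECTION statement is interrupted, the remaining
parts of the collection could be removed with the Delete Library CL command. The
collection name here is illustrative:

DLTLIB LIB(MYCOLL)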
DB2 UDB for AS/400 will enforce the validity of the constraint during any DML (data
manipulation language) statement. Certain operations (such as restore of the
dependent table), however, cause the validity of the constraint to be unknown. In
this case, DML statements may be prevented until DB2 UDB for AS/400 has verified
the validity of the constraint.
v Unique constraints are implemented with indexes. If an index that implements a
unique constraint is invalid, the Edit Rebuild of Access Paths (EDTRBDAP)
command can be used to display any indexes that currently require rebuild.
v If DB2 UDB for AS/400 does not currently know whether a referential constraint
or check constraint is valid, the constraint is considered to be in a check pending
state. The Edit Check Pending Constraints (EDTCPCST) command can be used
to display any constraints that are currently in check pending.
For more information on constraints, see the DB2 UDB for AS/400 Database
Programming book.
Save/Restore
The AS/400 save/restore functions are used to save tables, views, indexes,
journals, journal receivers, SQL packages, SQL procedures, user-defined functions,
user-defined types, and collections on disk (save file) or to some external media
(tape or diskette). The saved versions can be restored onto any AS/400 system at
some later time. The save/restore function allows an entire collection, selected
objects, or only objects changed since a given date and time to be saved. All
information needed to restore an object to its previous state is saved. This function
can be used to recover from damage to individual tables by restoring the data with
a previous version of the table or the entire collection.
When a program that was created for an SQL procedure or a service program that
was created for an SQL function or a sourced function is restored, it is automatically
added to the SYSROUTINES and SYSPARMS catalogs, as long as the procedure
or function does not already exist with the same signature. SQL programs created
in QSYS will not be created as SQL procedures when restored. Additionally,
external programs or service programs that were referenced on a CREATE
PROCEDURE or CREATE FUNCTION statement may contain the information
required to register the routine in SYSROUTINES. If the information exists and the
signature is unique, the functions or procedures will also be added to
SYSROUTINES and SYSPARMS when restored.
When an *SQLUDT object is restored for a user-defined type, the user-defined type
is automatically added to the SYSTYPES catalog. The appropriate functions needed
to cast between the user-defined type and the source type are also created, as long
as the type and functions do not already exist.
Either a distributed SQL program or its associated SQL package can be saved and
restored to any number of AS/400 systems. This allows any number of copies of the
SQL programs on different systems to access the same SQL package on the same
application server.
Damage Tolerance
The AS/400 system provides several mechanisms to reduce or eliminate damage
caused by disk errors. For example, mirroring, checksums, and RAID disks can all
reduce the possibility of disk problems. The DB2 UDB for AS/400 functions also
have a certain amount of tolerance to damage caused by disk errors or system
errors.
A DROP operation always succeeds, regardless of the damage. This ensures that
should damage occur, at least the table, view, SQL package, or index can be
deleted and restored or created again.
In the event that a disk error has damaged a small portion of the rows in a table,
the DB2 UDB for AS/400 database manager allows you to read rows still
accessible.
Index Recovery
DB2 UDB for AS/400 supplies several functions to deal with index recovery.
v System managed index protection
The EDTRCYAP CL command allows a user to instruct DB2 UDB for AS/400 to
guarantee that in the event of a system or power failure, the amount of time
required to recover all indexes on the system is kept below a specified time. The
system automatically journals enough information in a system journal to limit the
recovery time to the specified amount.
v Journaling of indexes
DB2 UDB for AS/400 supplies an index journaling function that makes it
unnecessary to rebuild an entire index due to a power or system failure. If the
index is journaled, the system database support automatically makes sure the
index is in synchronization with the data in the tables without having to rebuild it
from scratch. SQL indexes are not journaled automatically. You can, however,
use the CL command Start Journal Access Path (STRJRNAP) to journal any
index created by DB2 UDB for AS/400.
v Index rebuild
All indexes on the system have a maintenance option that specifies when an
index is maintained. SQL indexes are created with an attribute of *IMMED
maintenance.
In the event of a power failure or abnormal system failure, if indexes were not
protected by one of the previously described techniques, those indexes in the
process of change may need to be rebuilt by the database manager to make
sure they agree with the actual data. All indexes on the system have a recovery
option that specifies when an index should be rebuilt if necessary. All SQL
indexes with an attribute of UNIQUE are created with a recovery attribute of *IPL
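As a sketch of the journaling option described above, journaling could be started
for an SQL index with the Start Journal Access Path CL command. The library,
index, and journal names are illustrative:

STRJRNAP FILE(MYLIB/MYINDEX) JRN(MYLIB/MYJRN)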
Catalog Integrity
Catalogs contain information about tables, views, SQL packages, indexes,
procedures, and parameters in a collection. The database manager ensures that
the information in the catalog is accurate at all times. This is accomplished by
preventing end users from explicitly changing any information in the catalog and by
implicitly maintaining the information in the catalog when changes occur to the
tables, views, SQL packages, indexes, procedures, and parameters described in the
catalog.
The integrity of the catalog is maintained whether objects in the collection are
changed by SQL statements, OS/400 CL commands, System/38 Environment CL
commands, System/36 Environment functions, or any other product or utility on an
AS/400 system. For example, deleting a table can be done by running an SQL
DROP statement, issuing an OS/400 DLTF CL command, issuing a System/38
DLTF CL command or entering option 4 on a WRKF or WRKOBJ display.
Regardless of the interface used to delete the table, the database manager will
remove the description of the table from the catalog at the time the delete is
performed. The following is a list of functions and the associated effect on the
catalog:
Table 37. Effect of Various Functions on Catalogs
Function                          Effect on the Catalog
Add constraint to table           Information added to catalog
Remove constraint from table      Related information removed from catalog
Create object into collection     Information added to catalog
Delete object from collection     Related information removed from catalog
Restore object into collection    Information added to catalog
Change object long comment        Comment updated in catalog
Change object label (text)        Label updated in catalog
Change object owner               Owner updated in catalog
Move object from a collection     Related information removed from catalog
Move object into collection       Information added to catalog
Rename object                     Name of object updated in catalog
To test the program thoroughly, test as many of the paths through the program as
possible. For example:
v Use input data that forces the program to run each of its branches.
v Check the results. For example, if the program updates a row, select the row to
see if it was updated correctly.
v Be sure to test the program error routines. Again, use input data that forces the
program to encounter as many of the anticipated error conditions as possible.
v Test the editing and validation routines your program uses. Give the program as
many different combinations of input data as possible to verify that it correctly
edits or validates that data.
Authorization
Before you can create a table, you must be authorized to create tables and to use
the collection in which the table is to reside. In addition, you must have authority to
create and run the programs you want to test.
If you intend to use existing tables and views (either directly or as the basis for a
view), you must be authorized to access those tables and views.
If you want to create a view, you must be authorized to create views and must have
authorization to each table and view on which the view is based. For more
information on specific authorities required for any specific SQL statement, see the
DB2 UDB for AS/400 SQL Reference book.
Debugging your program with SQL statements is much the same as debugging your
program without SQL statements. However, when SQL statements are run in a job
in debug mode, the database manager puts messages in the job log describing how
each SQL statement ran. The message indicates the SQLCODE for the
SQL statement. If the statement ran successfully, the SQLCODE value is zero, and
a completion message is issued. A negative SQLCODE results in a diagnostic
message. A positive SQLCODE results in an informational message.
The message is either a 4-digit code prefixed by SQL or a 5-digit code prefixed by
SQ. For example, an SQLCODE of −204 results in a message of SQL0204, and an
SQLCODE of 30000 results in a message of SQ30000.
SQL always puts messages in the job log for negative SQLCODEs and for positive
codes other than +100, whether or not the job is in debug mode.
These messages provide feedback on how a query was run and, in some cases,
indicate the improvements that can be made to help the query run faster.
The messages contain message help that provides information about the cause for
the message, object name references, and possible user responses.
The time at which a message is sent does not necessarily indicate when the
associated function was performed. Some messages are sent all at once at the
start of a query run.
The causes and user responses for the following messages are paraphrased. The
actual message help is more complete and should be used when trying to
determine the meaning and responses for each message.
This message indicates that a temporary access path was created to process the
query. The new access path is created by reading all of the records in the specified
file.
The time required to create an access path on each run of a query can be
significant. Consider creating a logical file (CRTLF) or an SQL index (CREATE
INDEX SQL statement):
v Over the file named in the message help.
v With key fields named in the message help.
v With the ascending or descending sequencing specified in the message help.
Consider creating the logical file with select or omit criteria that either match or
partially match the query’s predicates involving constants. The database manager
will consider using select or omit logical files even though they are not explicitly
specified on the query.
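As a sketch of the suggestion above, an SQL index with the key fields and
sequencing taken from the message help might be created as follows. All names
here are illustrative:

CREATE INDEX MYLIB/MYINDEX
    ON MYLIB/MYFILE (ORDERDATE DESC, CUSTNO ASC)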
For certain queries, the optimizer may decide to create an access path even when
an existing one can be used. This might occur when a query has an ordering field
as a key field for an access path, and the only record selection specified uses a
different field. If the record selection returns roughly 20% or more of the records,
the optimizer may create a new access path to get faster performance when
accessing the data. The new access path minimizes the amount of data that needs
to be read.
This message indicates that a temporary access path was created from the access
path of a keyed file.
Generally, this action should not take a significant amount of time or resource
because only a subset of the data in the file needs to be read. Sometimes even
faster performance can be achieved by creating a logical file or SQL index that
satisfies the access path requirement stated in the message help.
This message can be sent for a variety of reasons. The specific reason is provided
in the message help.
Most of the time, this message is sent when the queried file environment has
changed, making the current access plan obsolete. An example of the file
environment changing is when an access path required by the query no longer
exists on the system.
An access plan contains the instructions for how a query is to be run and lists the
access paths for running the query. If a needed access path is no longer available,
the query is again optimized, and a new access plan is created, replacing the old
one.
The process of again optimizing the query and building a new access plan at
runtime is a function of DB2 UDB for AS/400. It allows a query to be run as
efficiently as possible, using the most current state of the database without user
intervention.
The infrequent appearance of this message is not a cause for action. For example,
this message will be sent when an SQL package is run for the first time after a
restore, or any time the optimizer detects that a change has occurred (such as the
creation of a new index) that warrants an implicit rebuild. However, excessive
rebuilds should be avoided because extra query processing will occur. Excessive
rebuilds may indicate a possible application design problem or inefficient database
management practices.
If the specified file selects few rows, usually fewer than 1000, the row selection
part of the query’s implementation should not take a significant amount of
resource and time. However, if the query is taking more time and resources than
can be allowed, consider changing the query so that a temporary file is not
required.
One way to do this is by breaking the query into multiple steps. Consider using an
INSERT statement with a subselect to select only the records that are required into
a physical file, and then use that file’s records for the rest of the query.
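Under the assumption of illustrative file and column names, the two-step approach
might look like this:

INSERT INTO MYLIB/TEMPFILE
    SELECT ORDERNO, CUSTNO, ORDERDATE
      FROM MYLIB/ORDERS
      WHERE ORDERDATE >= '1999-01-01'

The rest of the query then runs against MYLIB/TEMPFILE, which contains only the
selected records.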
A temporary result file was created to contain the intermediate results of the query.
The message help contains the reason why a temporary result file is required.
In some cases, creating a temporary result file provides the fastest way to run a
query. Other queries that have many records to be copied into the temporary result
file can take a significant amount of time. However, if the query is taking more time
and resources than can be allowed, consider changing the query so that a
temporary result file is not required.
This message provides the join position of the specified file when an access path is
used to access the file’s data. Join position pertains to the order in which the files
are joined.
The order in which files are joined can significantly influence the efficiency of a
query. The system processes the join of two files with different numbers of selected
records more efficiently when the file with the smaller number of selected records is
joined to the file with the larger number of selected records. For example, if two
files are being joined, the file with the fewest selected records should be in join
position 1 and the file with the larger number of selected records should be in join
position 2.
If the GROUP BY or ORDER BY clause is specified where all the columns in the
clause are referenced from one of the files in the query, that file becomes the first
file in the final join order. If the referenced file is a large file, the query may be slow.
To improve performance, consider one of the following:
v Add an additional column from a different file to the clause. A temporary result
table is used to allow the system to order the files in the most efficient join order.
v Specify the ALWCPYDTA(*OPTIMIZE) parameter on the precompile or run
command. The system orders the files in the most efficient join order.
If the query uses the JOIN clause or refers to a join logical file within the file
specifications, the order in which the files are specified will help determine the join
order.
This message provides the name of the first or primary file of the join when arrival
sequence is used to select records from the file.
See the previous message, CPI4326, for information on join position and join
performance tips.
This message names an existing access path that was used by the query.
The reason the access path was used is given in the message help.
No access path was used to access the data in the specified file. The records were
scanned sequentially in arrival sequence.
The use of an access path may improve the performance of the query if record
selection is specified.
If an access path does not exist, you may want to create one whose key field
matches one of the fields in the record selection. You should only create an access
path if the record selection (WHERE clause) selects 20% or fewer records in the
file.
To force the use of an existing access path, change the ORDER BY clause of the
query to specify the first key field of the access path.
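For example, if an existing access path has EMPNO as its first key field (the
names here are illustrative), its use could be encouraged by ordering on that field:

SELECT EMPNO, LASTNAME
    FROM CORPDATA/EMPLOYEE
    WHERE WORKDEPT = 'D11'
    ORDER BY EMPNO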
The optimizer stops considering access paths when the time spent optimizing the
query exceeds an internal value that is associated with the estimated time to run
the query and the number of records in the queried files. Generally, the more
records in the files, the greater the number of access paths that will be considered.
When the estimated time to run the query is exceeded, the optimizer uses the
current best method for running the query. Either an access path has been found to
get the best performance, or an access path will have to be created, if necessary.
Exceeding the estimated time to run the query could mean that the optimizer did
not consider the best access path to run the query.
The message help contains a list of access paths that were considered before the
optimizer exceeded the estimated time.
To ensure that an access path is considered for optimization, specify the logical file
associated with the access path as the file to be queried. The optimizer will
consider the access path of the file specified on the query or SQL statement first.
Remember that SQL indexes cannot be queried.
You may want to delete any access paths that are no longer needed.
Two or more SQL subselects were combined by the query optimizer and processed
as a join query. Generally, this method of processing performs well.
The optimizer considered all access paths built over the specified file. Since the
optimizer examined all access paths for the file, it determined the current best
access to the file.
The message help contains a list of the access paths. With each access path a
reason code is added. The reason code explains why the access path was not
used.
This message indicates that the query optimizer was unable to consider using an
index to resolve one or more of the selection specifications of the query. If an index
was available that could otherwise have limited the processing of the query to just
a few rows, the performance of this query will be affected.
CPI4338 — &1 Access path(s) used for bitmap processing of file &2.
The optimizer chooses to use one or more access paths, in conjunction with the
query selection (WHERE clause), to build a bitmap. This resulting bitmap indicates
which records will actually be selected.
Conceptually, the bitmap contains one bit per record in the underlying table.
Corresponding bits for selected records are set to ’1’. All other bits are set to ’0’.
When bitmap processing is used with arrival sequence, either message CPI4327 or
CPI4329 will precede this message. In this case, the bitmap will help to selectively
map only those records from the table that the query selected.
An open data path (ODP) definition is an internal object that is created when a
cursor is opened or when other SQL statements are run. It provides a direct link to
the data so that I/O operations can occur. ODPs are used on OPEN, INSERT,
UPDATE, DELETE, and SELECT INTO statements to perform their respective
operations on the data.
Even though SQL cursors are closed and SQL statements have already been run,
the database manager in many cases will save the associated ODPs of the SQL
operations to reuse them the next time the statement is run. So an SQL CLOSE
statement may close the SQL cursor but leave the ODP available to be used again
the next time the cursor is opened. This can significantly reduce the processing and
response time in running SQL statements.
The ability to reuse ODPs when SQL statements are run repeatedly is an important
consideration in achieving faster performance.
This message is sent when the job’s call stack no longer contains a program that
has run an SQL statement.
Except for ODPs associated with *ENDJOB or *ENDACTGRP cursors, all ODPs are
deleted when all the SQL programs on the call stack complete and the SQL
environment is exited.
This completion process includes closing of cursors, the deletion of ODPs, the
removal of prepared statements, and the release of locks.
Putting an SQL statement that can be run in the first program of an application
keeps the SQL environment active for the duration of that application. This allows
ODPs in other SQL programs to be reused when the programs are repeatedly
called. CLOSQLCSR(*ENDJOB) or CLOSQLCSR(*ENDACTGRP) can also be
specified.
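For example, one of the CRTSQLxxx commands could be used to compile a
program so that its ODPs are kept until the activation group ends. The object name
below is illustrative:

CRTSQLRPGI OBJ(MYLIB/MYPGM) CLOSQLCSR(*ENDACTGRP)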
This message indicates that the last time the statement was run or when a CLOSE
statement was run for this cursor, the ODP was not deleted. It will now be used
again. This should be an indication of very efficient use of resources by eliminating
unnecessary OPEN and CLOSE operations.
No ODP was found that could be used again. The first time that the statement is
run or the cursor is opened for a process, an ODP will always have to be created.
However, if this message appears on every run of the statement or open of the
cursor, the tips recommended in “Improving Performance by Retaining Cursor
Positions for Non-ILE Program Calls” on page 467 should be applied to this
application.
For a program that is run only once per job, this message could be normal.
However, if this message appears on every run of the statement or open of the
cursor, then the tips recommended in “Improving Performance by Retaining Cursor
Positions for Non-ILE Program Calls” on page 467 should be applied to this
application.
If the statement is rerun or the cursor is opened again, the ODP should be available
again for use.
The DB2 UDB for AS/400 precompilers allow the creation of the program objects
even when required tables are missing. In this case the binding of the access plan
is done when the program is first run. This message indicates that an access plan
was created and successfully stored in the program object.
SQL will request multiple records from the database manager when running this
statement instead of requesting one record at a time.
The database manager rebuilt the access plan for this statement, but the program
could not be updated with the new access plan. Another job is currently running the
program that has a shared lock on the access plan of the program.
The program cannot be updated with the new access plan until the job can obtain
an exclusive lock on the access plan of the program. The exclusive lock cannot be
obtained until the shared lock is released.
The statement will still run and the new access plan will be used; however, the
access plan will continue to be rebuilt when the statement is run until the program
is updated.
A reusable ODP exists for this statement, but either the job’s library list or override
specifications have changed the query.
When mapping data to host variables, data conversions were required. When these
statements are run in the future, they will be slower than if no data conversions
were required. The statement ran successfully, but performance could be improved
by eliminating the data conversions. For example, mapping a character string of
one length to a host variable character string of a different length causes this
message, as does mapping a numeric value to a host variable of a different type
(decimal to integer, for example). To prevent most conversions, use host variables
that are identical in type and length to the columns being fetched.
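For instance, for a column defined as CHAR(20) and a column defined as
INTEGER, matching ILE C host variables might be declared as follows. The table
design and variable names are hypothetical:

EXEC SQL BEGIN DECLARE SECTION;
char lastname[21];   /* CHAR(20) column: 20 characters plus a null terminator */
long empcount;       /* INTEGER column: 4-byte binary                         */
EXEC SQL END DECLARE SECTION;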
The attributes of the INSERT or UPDATE values are different than the attributes of
the columns receiving the values. Since the values must be converted, they cannot
be directly moved into the columns. Performance could be improved if the attributes
of the INSERT or UPDATE values matched the attributes of the columns receiving
the values.
The ability of the governor to predict and stop queries before they are started is
important because:
v Operating a long-running query and abnormally ending the query before
obtaining any results wastes system resources.
v Some operations within a query cannot be interrupted by the End Request
(ENDRQS) CL command. The creation of a temporary keyed access path or a
query using a column function without a GROUP BY clause are two examples of
these types of queries. It is important not to start these operations if they will take
longer than the user wants to wait.
The governor in DB2 UDB for AS/400 is based on the estimated runtime for a
query. If the query’s estimated runtime exceeds the user-defined time limit, the
initiation of the query can be stopped.
The time limit is user-defined and can be specified in one of three ways:
v Using the Query Time Limit (QRYTIMLMT) parameter on the Change Query
Attributes (CHGQRYA) CL command.
v Setting the QQRYTIMLMT system value and allowing each job to use the value
*SYSVAL on the CHGQRYA CL command.
v Setting the Query Time Limit option in the “Query Options File QAQQINI” on
page 543.
The governor works in conjunction with the query optimizer. When a user requests
DB2 UDB for AS/400 to run a query, the following occurs:
1. The query access plan is evaluated by the optimizer.
As part of the evaluation, the optimizer predicts or estimates the runtime for the
query. This helps determine the best way to access and retrieve the data for the
query.
2. The estimated runtime is compared against the user-defined query time limit
currently in effect for the job or user session.
3. If the predicted runtime for the query is less than or equal to the query time
limit, the query governor lets the query run without interruption and no message
is sent to the user.
4. If the query time limit is exceeded, inquiry message CPA4259 is sent to the
user. The message states that the estimated query processing time of XX
seconds exceeds the time limit of YY seconds.
Note: A default reply can be established for this message so that the user does
not have the option to reply to the message, and the query request is
always ended.
5. If a default message reply is not used, the user chooses to do one of the
following:
Cancelling a Query
When a query is expected to run longer than the set time limit, the governor issues
inquiry message CPA4259. The user enters a C to cancel the query or an I to
ignore the time limit and let the query run to completion. If the user enters C,
escape message CPF427F is issued to the SQL runtime code. SQL returns
SQLCODE -666.
You can also set the time limit for a job other than the current job. You do this by
using the JOB parameter on the CHGQRYA command to specify either a query
options file library to search (QRYOPTLIB) or a specific QRYTIMLMT for that job.
After the source job runs the CHGQRYA command, the effect of the governor on
the target job is not dependent upon the source job. The query time limit remains in
effect for the duration of the job or user session, or until the time limit is changed by
a CHGQRYA command. Under program control, a user could be given different
query time limits depending on the application function being performed, the time of
day, or the amount of system resources available. This provides a significant
amount of flexibility when trying to balance system resources with temporary query
requirements.
The following example adds a reply list element that causes a default reply of C,
canceling any requests for jobs whose job name is ’QPADEV0011’.
ADDRPYLE SEQNBR(57) MSGID(CPA4259) CMPDTA(QPADEV0011 27) RPY(C)
Additionally, if the query is canceled, the query optimizer evaluates the access plan
and sends the optimizer tuning messages to the joblog. This occurs even if the job
is not in debug mode. The user or a programmer can then review the optimizer
tuning messages in the joblog to see if additional tuning is needed to obtain optimal
query performance. Minimal system resources are used because the actual query
of the data is never actually done. If the files to be queried contain a large number
of records, this represents a significant savings in system resources.
Be careful when you use this technique for performance testing, because all query
requests will be stopped before they are run. This is especially important for a
query that cannot be implemented in a single query step. For these types of
queries, separate multiple query requests are issued, and then their results are
accumulated before returning the final results. Stopping the query in one of these
intermediate steps gives you only the performance information that relates to that
intermediate step, not to the entire query.
Chapter 23. Using the DB2 UDB for AS/400 Predictive Query Governor 393
Examples
To set the query time limit for the current job or user session using query options
file QAQQINI, set the QRYOPTLIB parameter on the CHGQRYA command to a
user library where a QAQQINI file exists with the QUERY_TIME_LIMIT parameter
set to a valid query time limit. For more information on setting the query options
file, see “Query Options File QAQQINI” on page 543.
To set the query time limit to 45 seconds, you would use the following CHGQRYA
command:
CHGQRYA JOB(*) QRYTIMLMT(45)
This sets the query time limit at 45 seconds. If the user runs a query with an
estimated runtime equal to or less than 45 seconds, the query runs without
interruption. The time limit remains in effect for the duration of the job or user
session, or until the time limit is changed by the CHGQRYA command.
Assume that the query optimizer estimated the runtime for a query as 135 seconds.
A message would be sent to the user stating that the estimated runtime of 135
seconds exceeds the query time limit of 45 seconds.
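The governor's decision amounts to a comparison between the optimizer's estimate and the configured limit. The following Python sketch illustrates that decision; the function name and message wording are illustrative only, not the actual system message text:

```python
def governor_check(estimated_runtime, time_limit):
    """Return None if the query may run, or an inquiry-style message
    when the estimate exceeds the query time limit (in seconds)."""
    if time_limit is None or estimated_runtime <= time_limit:
        return None  # estimate within the limit: run without interruption
    return ("Estimated runtime of %d seconds exceeds the "
            "query time limit of %d seconds" % (estimated_runtime, time_limit))
```

With a limit of 45 seconds, an estimate of 135 seconds produces the message, while an estimate of 45 seconds or less does not.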
| To set or change the query time limit for a job other than your current job, the
| CHGQRYA command is run using the JOB parameter. To set the query time limit to
| 45 seconds for job 123456/USERNAME/JOBNAME you would use the following
| CHGQRYA command:
| CHGQRYA JOB(123456/USERNAME/JOBNAME) QRYTIMLMT(45)
| This sets the query time limit at 45 seconds for job 123456/USERNAME/JOBNAME.
| If job 123456/USERNAME/JOBNAME tries to run a query with an estimated runtime
| equal to or less than 45 seconds the query runs without interruption. If the
| estimated runtime for the query is greater than 45 seconds, for example 50
| seconds, a message would be sent to the user stating that the estimated runtime of
| 50 seconds exceeds the query time limit of 45 seconds. The time limit remains in
| effect for the duration of job 123456/USERNAME/JOBNAME, or until the time limit
| for job 123456/USERNAME/JOBNAME is changed by the CHGQRYA command.
| To set or change the query time limit to the QQRYTIMLMT system value, use the
| following CHGQRYA command:
| CHGQRYA QRYTIMLMT(*SYSVAL)
| The QQRYTIMLMT system value is used for duration of the job or user session, or
| until the time limit is changed by the CHGQRYA command. This is the default
| behavior for the CHGQRYA command.
| If you understand how DB2 UDB for AS/400 processes queries, it is easier to
understand the performance impact of the guidelines discussed in this chapter.
There are two major components of DB2 UDB for AS/400:
1. Data management methods
These methods are the algorithms used to retrieve data from the disk. The
methods include index usage and row selection techniques. In addition, parallel
access methods are available with the DB2 UDB Symmetric Multiprocessing
operating system feature.
2. Query optimizer
The query optimizer identifies the valid techniques which could be used to
implement the query and selects the most efficient technique.
Access Path
An access path is the path used to locate data specified in a query. An access path
can be indexed, sequential, or a combination of both.
Columns that are good candidates for creating keyed sequence access paths are:
| v Those frequently referenced in row selection predicates.
| v Those frequently referenced in grouping or ordering.
v Those used to join tables (see “Join Optimization” on page 426).
You create encoded vector access paths by using the SQL CREATE INDEX
statement. For more information about accelerating your queries with encoded
vector indexes, go to the DB2 for AS/400 web pages.
For a further description of access paths, refer to the Data Management book.
The query optimization process chooses the most efficient access method for each
query and keeps this information in the access plan. The type of access is
dependent on the number of rows, the expected number of page faults (see footnote 17), and
other criteria.
The possible methods the optimizer can use to retrieve data include:
v Dataspace scan method (a dataspace is an internal object that contains the data
in a table) (page 399)
v Parallel pre-fetch method (page 401)
v Key selection method (page 404)
v Key positioning method (page 406)
v Parallel table or index pre-load (page 413)
v Index-from-index method (page 414)
v Index only access method (page 412)
v Hashing method (page 415)
v Bitmap processing method (page 416)
The DB2 UDB Symmetric Multiprocessing feature provides the optimizer with
additional methods for retrieving data that include parallel processing.
17. An interrupt that occurs when a program refers to a 4K-byte page that is not in main storage.
Chapter 24. DB2 UDB for AS/400 Data Management and Query Optimizer Tips 397
┌───────────────────┐
│ QUERY │
└─────────┬─────────┘
│
┌────────────┬─────┴─────┬────────────┐
│ │ │ │
│ │ │ │
↓ ↓ ↓ ↓
┌───────┐ ┌───────┐ ┌───────┐ ┌───────┐
│ │ │ │ │ │ │ │
│ CPU │ │ CPU │ │ CPU │ │ CPU │
│ │ │ │ │ │ │ │
└───┬───┘ └───┬───┘ └───┬───┘ └───┬───┘
│ │ │ │
└────────────┴─────┬─────┴────────────┘
│
┌─────────────┴──────────────┐
│ │
│ SHARED MEMORY │
│ │
└────────────────────────────┘
The following methods are available to the optimizer once the DB2 UDB Symmetric
Multiprocessing feature has been installed on your system:
v Parallel data space scan method (page 402)
v Parallel key selection method (page 405)
v Parallel key positioning method (page 410)
v Parallel index only access method (parallel and non-parallel) (page 412)
v Parallel hashing method (parallel and non-parallel) (page 415)
v Parallel bitmap processing method (page 416)
Ordering
An ORDER BY clause (or OPNQRYF KEYFLD parameter) must be specified to
guarantee a particular ordering of the results. Before parallel access methods were
available, the database manager processed table rows (and keyed sequences) in a
sequential manner. This caused the sequencing of the results to be somewhat
predictable even though an ordering was not included in the original query request.
Because parallel methods cause blocks of table rows and key values to be
processed concurrently, the ordering of the retrieved results becomes more random
and unpredictable. An ORDER BY clause is the only way to guarantee the specific
sequencing of the results. However, an ordering request should only be specified
when absolutely required, because the sorting of the results can increase both CPU
utilization and response time.
A set of database system tasks are created at system startup for use by the
database manager. The database manager uses the tasks to process and retrieve
data from different disk devices. Since these tasks can be run on multiple
processors simultaneously, the elapsed time of a query can be reduced. Even
though much of the I/O and CPU processing of a parallel query is done by the
tasks, the accounting of the I/O and CPU resources used is transferred to the
application job. The summarized I/O and CPU resources for this type of application
continue to be accurately displayed by the Work with Active Jobs (WRKACTJOB)
command.
Even though DB2 UDB for AS/400 spreads data across disk devices within an ASP,
sometimes the allocation of the data extents (contiguous sets of data) might not be
spread evenly. This occurs when there is uneven allocation of space on the
devices, or when a new device is added to the ASP. The allocation of the data
space may be spread again by saving, deleting, and then restoring the table.
The dataspace scan selection method is very good when a large percentage of the
rows are to be selected. A large percentage is generally 20% or more.
| Dataspace scan processing can be adversely affected when rows are selected from
| a table that contains deleted rows. This is because the delete operation only marks
| rows as deleted. For dataspace scan processing, the database manager reads all
| of the deleted rows, even though none of the deleted rows are ever selected. You
| should use the Reorganize Physical File Member (RGZPFM) CL command to
| eliminate deleted rows. Specifying REUSEDLT(*YES) on the physical file can also
| reuse the deleted record space. SQL tables are created with REUSEDLT(*YES).
Dataspace scan processing is not very efficient when a small percentage of rows in
the table will be selected. Because all rows in the table are examined, this leads to
unnecessary use of I/O and processing unit resources.
The Licensed Internal Code can use one of two algorithms for selection when a
dataspace scan is processed: intermediate buffer selection and dataspace element
selection.
No action is necessary for queries of this type to make use of the dataspace scan
method. Any query interface can utilize this improvement. However, the following
guidelines determine whether a selection predicate can be implemented as
dataspace selection:
v Neither operand of the predicate can be of any kind of a derived value, function,
substring, concatenation, or numeric expression.
v When both operands of a selection predicate are numeric columns, both columns
must have the same type, scale, and precision; otherwise, one operand is
mapped into a derived value. For example, a DECIMAL(3,1) must only be
compared against another DECIMAL(3,1) column.
v When one operand of a selection predicate is a numeric column and the other is
a literal or host variable, then the types must be the same and the precision and
scale of the literal/host variable must be less than or equal to that of the column.
v Selection predicates involving packed decimal or numeric types of columns can
only be done if the table was created by the SQL CREATE TABLE statement.
v A varying length character column cannot be referenced in the selection
predicate.
v When one operand of a selection predicate is a character column and the other
is a literal or host variable, then the length of the host variable cannot be greater
than that of the column.
v Comparison of character column data must not require CCSID or keyboard shift
translation.
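The guidelines above can be pictured as an eligibility test applied to each selection predicate. The sketch below models a few of the listed rules against a hypothetical dictionary description of a predicate; this representation is illustrative, not an actual system interface:

```python
def dataspace_selection_eligible(pred):
    """Check a few of the dataspace selection guidelines against a
    hypothetical predicate description (a dict)."""
    if pred.get("derived"):
        # neither operand may be a derived value, function, substring,
        # concatenation, or numeric expression
        return False
    if pred.get("varying_length_char"):
        # varying length character columns cannot be referenced
        return False
    lhs, rhs = pred["operands"]
    if lhs["kind"] == "numeric_column" and rhs["kind"] == "numeric_column":
        # both numeric columns must match in type, precision, and scale
        return (lhs["type"], lhs["precision"], lhs["scale"]) == \
               (rhs["type"], rhs["precision"], rhs["scale"])
    return True
```

For example, comparing a DECIMAL(3,1) column against a DECIMAL(5,2) column fails the test, so that predicate would fall back to intermediate buffer selection.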
| This method has the same characteristics as the dataspace scan method, except
| that the I/O processing is done in parallel. This is accomplished by starting multiple
| input streams for the table to pre-fetch the data. This method is most effective when
| the following are true:
| v The data is spread across multiple disk devices.
| v The query is not CPU-processing-intensive.
| v There is an ample amount of main storage available to hold the data collected
| from every input stream.
| As mentioned previously, DB2 UDB for AS/400 automatically spreads the data
| across the disk devices without user intervention, allowing the database manager to
| pre-fetch table data in parallel. The database manager uses tasks to retrieve data
| from different disk devices. Usually the request is for an entire extent (contiguous
| set of data). This improves performance because the disk device can use smooth
| sequential access to the data. Because of this optimization, parallel prefetch can
| pre-load data to active memory faster than the SETOBJACC CL command.
Even though DB2 UDB for AS/400 spreads data across disk devices within an ASP,
sometimes the allocation of the dataspace extents may not be spread evenly. This
occurs when there is uneven allocation of space on the devices or a new device is
added to the ASP. The allocation of the dataspace can be spread again by saving,
deleting, and then restoring the table.
The query optimizer selects the candidate queries which can take advantage of this
type of implementation. The optimizer selects the candidates by estimating the CPU
time required to process the query and comparing the estimate to the amount of
time required for input processing. When the estimated input processing time
exceeds the CPU time, the query optimizer indicates that the query may be
implemented with parallel I/O.
| Parallel pre-fetch requires that I/O parallel processing must be enabled either by the
| system value QQRYDEGREE, the query option file, or by the DEGREE parameter
| on the Change Query Attributes (CHGQRYA) command. See “Controlling Parallel
| Processing” on page 473 for information on how to control parallel processing.
| Because queries being processed with parallel pre-fetch aggressively utilize main
| store and disk I/O resources, the number of queries that use parallel pre-fetch
| should be limited and controlled. Parallel prefetch utilizes multiple disk arms, but it
| does little utilization of multiple CPUs for any given query. Parallel prefetch I/O will
| use I/O resources intensely. Allowing a parallel prefetch query on a system with an
| overcommitted I/O subsystem may intensify the over-commitment problem.
You should run the job in a shared storage pool with the *CALC paging option
because this will cause more efficient use of active memory. DB2 UDB for AS/400
uses the automated system tuner to determine how much memory this process is
allowed to use. At run-time, the Licensed Internal Code will allow parallel pre-fetch
to be used only if the memory statistics indicate that it will not over-commit the
memory resources. For more information on the paging option see the “Automatic
System Tuning” section of the Work Management book.
Parallel pre-fetch requires that enough main storage be available to cache the data
being retrieved by the multiple input streams. For large files, the typical extent size
is 1 megabyte. This means that 2 megabytes of memory must be available in order
to use 2 input streams concurrently. Increasing the amount of available memory in
the pool allows more input streams to be used. If there is plenty of available
memory, the entire dataspace for the table may be loaded into active memory when
the query is opened.
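The memory arithmetic above is straightforward: each concurrent input stream needs roughly one extent of buffering in the pool. A minimal sketch, where the 1 MB extent size is the typical value cited above rather than a fixed constant:

```python
def max_input_streams(pool_mb, extent_mb=1):
    """Roughly how many parallel pre-fetch input streams a memory pool
    can support, at one extent of buffering per stream."""
    return max(1, pool_mb // extent_mb)  # at least one stream always runs
```

So a pool with 2 MB available supports 2 concurrent input streams, and adding memory to the pool allows more streams to be used.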
| As mentioned previously, DB2 UDB for AS/400 automatically spreads the data
| across the disk devices without user intervention, allowing the database manager to
| pre-fetch table data in parallel.
The query optimizer selects the candidate queries that can take advantage of this
type of implementation. The optimizer selects the candidates by estimating the CPU
time required to process the query and comparing the estimate to the amount of
time required for input processing. The optimizer reduces its estimated elapsed time
for data space scan based on the number of tasks it calculates should be used. It
calculates the number of tasks based on the number of processors in the system,
the amount of memory available in the job’s pool, and the current value of the
DEGREE query attribute. If the parallel data space scan is the fastest access
method, it is then chosen.
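The task-count calculation just described can be sketched as bounding the degree by each constraint in turn. This is a hypothetical simplification: the bound names and the 2 MB-per-task figure (which the text later cites for parallel data space scan) are assumptions, not the optimizer's actual formula:

```python
def parallel_scan_tasks(processors, pool_mb, degree, mb_per_task=2):
    """Bound the number of parallel scan tasks by the number of
    processors, by pool memory, and by an explicit numeric DEGREE."""
    limit = min(processors, pool_mb // mb_per_task)
    if isinstance(degree, int):  # special values such as "*MAX" leave it uncapped
        limit = min(limit, degree)
    return max(1, limit)
```

On a 4-processor system with an 8 MB pool, *MAX yields 4 tasks, while DEGREE(2) caps the query at 2.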
Parallel data space scan requires that SMP parallel processing must be enabled
either by the system value QQRYDEGREE, the query option file, or by the
DEGREE parameter on the Change Query Attributes (CHGQRYA) command. See
“Controlling Parallel Processing” on page 473 for information on how to control
parallel processing.
Parallel data space scan cannot be used for queries that require any of the
following:
v Specification of the *ALL commitment control level.
v Nested loop join implementation. See “Nested Loop Join Implementation” on
page 426.
| v Backward scrolling. For example, parallel data space scan cannot normally be
| used for queries defined by the Open Query File (OPNQRYF) command, which
| specify ALWCPYDTA(*YES) or ALWCPYDTA(*NO), because the application
| might attempt to position to the last record and retrieve previous records.
| SQL-defined queries that are not defined as scrollable can use this method.
| Parallel data space scan can be used during the creation of a temporary result,
| such as a sort or hash operation, no matter what interface was used to define
| the query. OPNQRYF can be defined as not scrollable by specifying the
| *OPTIMIZE parameter value for the ALWCPYDTA parameter, which enables the
| usage of most of the parallel access methods.
v Restoration of the cursor position. For instance, a query requiring that the cursor
position be restored as the result of the SQL ROLLBACK HOLD statement or the
ROLLBACK CL command. SQL applications using a commitment control level
other than *NONE should specify *ALLREAD as the value for precompiler
parameter ALWBLK to allow this method to be used.
v Update or delete capability.
You should run the job in a shared storage pool with the *CALC paging option, as
this will cause more efficient use of active memory. For more information on the
paging option see the “Automatic System Tuning” section of the Work Management
book.
Parallel data space scan requires active memory to buffer the data being retrieved
and to separate result buffers for each task. A typical total amount of memory
needed for each task is about 2 megabytes. For example, about 8 megabytes of
memory must be available in order to use 4 parallel data space scan tasks
concurrently. Increasing the amount of available memory in the pool allows more
input streams to be used. Queries that access tables with large varying length
character columns, or queries that generate result values that are larger than the
actual record length of the table might require more memory for each task.
The performance of parallel data space scan can be severely limited if numerous
record locking conflicts or data mapping errors occur.
The key selection access method can be very expensive if the search condition
applies to a large number of rows because:
v The whole index is processed.
v For every key selected from the index, a random I/O to the dataspace occurs.
Normally, the optimizer would choose to use dataspace scan processing when the
search condition applies to a large number of rows. The optimizer only chooses the
key selection method if less than 20% of the keys are selected or if an operation
forces the use of an index. Options that might force the use of an index include:
v Ordering
v Grouping
v Joining
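The decision described above can be sketched as a rule of thumb. This is a deliberate simplification for illustration; the real optimizer weighs many more factors than raw selectivity:

```python
def pick_method(selected_rows, total_rows, index_forced=False):
    """Apply the ~20% rule: key selection for low selectivity or when an
    operation (ordering, grouping, joining) forces an index; otherwise
    dataspace scan."""
    if index_forced or selected_rows / total_rows < 0.20:
        return "key selection"
    return "dataspace scan"
```

Selecting 5 of 100 rows favors key selection; selecting 50 of 100 favors a dataspace scan unless an ordering, grouping, or join forces the index.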
| In these cases, the optimizer may choose to create a temporary index rather than
| use an existing index. When the optimizer creates a temporary index, it uses a 32K
| page size. An index created using a CREATE INDEX statement or the CRTLF
| command normally uses only a 4K page size. The optimizer also processes as
| much of the selection as possible while building the temporary index. Nearly all
| temporary indexes built by the optimizer are select/omit or sparse indexes. Finally,
| the optimizer can use multiple parallel tasks when creating the index. The larger
| page size, and the corresponding performance improvement from swapping in
| fewer pages, can reduce the cost of building and using the temporary index.
| If the key selection access method is used because the query specified ordering (an
| index was required), query performance might be improved by using the
| following parameters to allow the ordering to be done with the query sort:
| v For SQL, the following combinations of precompiler parameters:
| – ALWCPYDTA(*OPTIMIZE), ALWBLK(*ALLREAD), and COMMIT(*CHG or
| *CS)
| – ALWCPYDTA(*OPTIMIZE) and COMMIT(*NONE)
| v For OPNQRYF, the following parameters:
| – *ALWCPYDTA(*OPTIMIZE) and COMMIT(*NO)
| – ALWCPYDTA(*OPTIMIZE) and COMMIT(*YES) and the commitment control
| level is started with a commit level of *NONE, *CHG, or *CS
| When a query specifies a select/omit index and the optimizer decides to build a
| temporary index, all of the selection from the select/omit index is put into the
| temporary index after any applicable selection from the query.
The following example illustrates a query where the optimizer could choose the key
selection method:
CREATE INDEX X1
ON EMPLOYEE(LASTNAME,WORKDEPT)
OPNQRYF example:
OPNQRYF FILE((EMPLOYEE))
QRYSLT('WORKDEPT *EQ ''E01''')
If the optimizer chooses to run this query in parallel with a degree of four, the
following might be the logical key partitions that get processed concurrently:
LASTNAME values LASTNAME values
leading character leading character
partition start partition end
'A' 'F'
'G' 'L'
'M' 'S'
'T' 'Z'
If there were fewer keys in the first and second partition, processing of those key
values would complete sooner than the third and fourth partitions. After the first two
partitions are finished, the remaining key values in the last two might be further
split. The following shows the four partitions that might be processed after the first
and second partition are finished and the splits have occurred:
LASTNAME values LASTNAME values
leading character leading character
partition start partition end
'O' 'P'
'Q' 'S'
'V' 'W'
'X' 'Z'
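The splitting of a remaining key range can be pictured as halving the range of leading characters. The midpoint rule below is only illustrative; the actual split points depend on which keys remain in the partition:

```python
def split_range(start, end):
    """Split a single-character key range roughly in half by taking the
    midpoint of the two leading characters."""
    mid = (ord(start) + ord(end)) // 2
    return (start, chr(mid)), (chr(mid + 1), end)
```

Splitting 'M'-'S' this way gives 'M'-'P' and 'Q'-'S'; splitting 'T'-'Z' gives 'T'-'W' and 'X'-'Z'.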
Parallel key selection cannot be used for queries that require any of the following:
v Specification of the *ALL commitment control level.
v Nested loop join implementation. See “Nested Loop Join Implementation” on
page 426.
| v Backward scrolling. For example, parallel key selection cannot be used for
| queries defined by the Open Query File (OPNQRYF) command which specify
| ALWCPYDTA(*YES) or ALWCPYDTA(*NO), because the application might
| attempt to position to the last record and retrieve previous records. SQL defined
| queries that are not defined as scrollable can use this method. Parallel key
| selection can be used during the creation of a temporary result, such as a sort or
| hash operation, no matter what interface was used to define the query.
| v Restoration of the cursor position (for instance, a query requiring that the cursor
| position be restored as the result of the SQL ROLLBACK HOLD statement or the
| ROLLBACK CL command). OPNQRYF can be defined as not scrollable by
| specifying the *OPTIMIZE parameter value for the ALWCPYDTA parameter,
| which enables the usage of most of the parallel access methods. SQL
| applications using a commitment control level other than *NONE should specify
| *ALLREAD as the value for precompiler parameter ALWBLK to allow this method
| to be used.
v Update or delete capability.
You should run the job in a shared pool with the *CALC paging option as this will cause
more efficient use of active memory. For more information on the paging option see
the “Automatic System Tuning” section of the Work Management book.
| Parallel key selection requires that SMP parallel processing be enabled either by
| the system value QQRYDEGREE, the query options file, or by the DEGREE
| parameter on the Change Query Attributes (CHGQRYA) command. See “Controlling
| Parallel Processing” on page 473 for information on how to control parallel
| processing.
The key positioning method is most efficient when a small percentage of rows are
to be selected (less than approximately 20%). If more than approximately 20% of
the rows are to be selected, the optimizer generally chooses to:
v Use dataspace scan processing (if index is not required)
v Use key selection (if an index is required)
v Use query sort routine (if conditions apply)
For queries that do not require an index (no ordering, grouping, or join operations),
the optimizer tries to find an existing index to use for key positioning. If no existing
index can be found, the optimizer stops trying to use keyed access to the data
because it is faster to use dataspace scan processing than it is to build an index
and then perform key positioning.
The following example illustrates a query where the optimizer could choose the key
positioning method:
CREATE INDEX X1 ON EMPLOYEE(WORKDEPT)
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE))
| QRYSLT('WORKDEPT *EQ ''E01''')
| In this example, the database support uses X1 to position to the first index entry
with the WORKDEPT value equal to 'E01'. For each key equal to 'E01', it randomly
accesses the dataspace (see footnote 18) and selects the row. The query ends when the key
selection moves beyond the key value of 'E01'.
Note that for this example all index entries processed and rows retrieved meet the
selection criteria. If additional selection is added that cannot be performed through
key positioning (such as selection columns which do not match the first key
columns of an index over multiple columns) the optimizer uses key selection to
perform as much additional selection as possible. Any remaining selection is
performed at the dataspace level.
The key positioning access method has additional processing capabilities. One such
capability is to perform range selection across several values. For example:
CREATE INDEX X1 ON EMPLOYEE(WORKDEPT)
18. Random accessing occurs because the keys may not be in the same sequence as the rows in the dataspace.
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE))
| QRYSLT('WORKDEPT *EQ %RANGE(''E01'' ''E11'')')
| In the previous example, the database support positions to the first index entry
equal to value 'E01' and rows are processed until the last index entry for 'E11' is
processed.
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE))
| QRYSLT('WORKDEPT *EQ %RANGE(''E01'' ''E11'')
| *OR WORKDEPT *EQ %RANGE(''A00'' ''B01'')')
| In the previous example, the positioning and processing technique is used twice,
once for each range of values.
So far, all of the key positioning examples have used only one key: the left-most
key of the index. Key positioning also handles more than one key (although the
keys must be contiguous to the left-most key).
CREATE INDEX X2
ON EMPLOYEE(WORKDEPT,LASTNAME,FIRSTNME)
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE))
| QRYSLT('WORKDEPT *EQ ''D11''
| *AND FIRSTNME *EQ ''DAVID''')
| Because the two selection keys (WORKDEPT and FIRSTNME) are not contiguous,
there is no multiple key position support for this example. Therefore, only the
WORKDEPT = 'D11' part of the selection can be applied against the index (single
key positioning). While this may be acceptable, it means that the processing of rows
examines many more index entries than the selection actually requires.
By creating the following index, X3, the above example query would run using
multiple key positioning.
CREATE INDEX X3
ON EMPLOYEE(WORKDEPT, FIRSTNME, LASTNAME)
Multiple key positioning support can apply both pieces of selection as key
positioning. This improves performance considerably. A starting value is built by
concatenating the two selection values into 'D11DAVID' and selection is positioned
to the index entry whose left-most two keys have that value.
| This next example shows a more interesting use of multiple key positioning.
|
| CREATE INDEX X3 ON EMPLOYEE(WORKDEPT,FIRSTNME)
|
| DECLARE BROWSE2 CURSOR FOR
| SELECT * FROM EMPLOYEE
| WHERE WORKDEPT = 'D11'
| AND FIRSTNME IN ('DAVID','BRUCE','WILLIAM')
| OPTIMIZE FOR 99999 ROWS
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE))
| QRYSLT('WORKDEPT *EQ ''D11''
| *AND FIRSTNME *EQ %VALUES(''DAVID'' ''BRUCE''
| ''WILLIAM'')')
| The query optimizer analyzes the WHERE clause and rewrites the clause into an
equivalent form:
DECLARE BROWSE2 CURSOR FOR
SELECT * FROM EMPLOYEE
WHERE (WORKDEPT = 'D11' AND FIRSTNME = 'DAVID')
OR (WORKDEPT = 'D11' AND FIRSTNME = 'BRUCE')
OR (WORKDEPT = 'D11' AND FIRSTNME = 'WILLIAM')
OPTIMIZE FOR 99999 ROWS
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE))
| QRYSLT('(WORKDEPT *EQ ''D11'' *AND FIRSTNME *EQ
| ''DAVID'')
| *OR (WORKDEPT *EQ ''D11'' *AND FIRSTNME *EQ ''BRUCE'')
| *OR (WORKDEPT *EQ ''D11'' *AND FIRSTNME *EQ ''WILLIAM'')')
| In the rewritten form of the query there are actually 3 separate ranges of key values
for the concatenated values of WORKDEPT and FIRSTNME:
Index X3 Start value Index X3 Stop value
'D11DAVID' 'D11DAVID'
'D11BRUCE' 'D11BRUCE'
'D11WILLIAM' 'D11WILLIAM'
Key positioning is performed over each range, significantly reducing the number of
keys selected to just 3. All of the selection can be accomplished through key
positioning.
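The rewrite shown above turns an IN list under a common leading key into one equality range per value on the concatenated key. A sketch, where the (start, stop) tuple representation of a range is illustrative:

```python
def in_list_to_key_ranges(dept, names):
    """One (start, stop) equality range per IN-list value, on the
    concatenated WORKDEPT || FIRSTNME key of an index like X3."""
    return [(dept + n, dept + n) for n in names]
```

For WORKDEPT = 'D11' and the IN list ('DAVID', 'BRUCE', 'WILLIAM'), this yields the three start/stop pairs 'D11DAVID', 'D11BRUCE', and 'D11WILLIAM'.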
The complexity of this range analysis can be taken to a further degree in the
following example:
DECLARE BROWSE2 CURSOR FOR
SELECT * FROM EMPLOYEE
WHERE (WORKDEPT = 'D11'
AND FIRSTNME IN ('DAVID','BRUCE','WILLIAM'))
OR (WORKDEPT = 'E11'
AND FIRSTNME IN ('PHILIP','MAUDE'))
OR (FIRSTNME BETWEEN 'CHRISTINE' AND 'DELORES'
AND WORKDEPT IN ('A00','C01'))
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE))
| QRYSLT('(WORKDEPT *EQ ''D11''
| *AND FIRSTNME *EQ %VALUES(''DAVID'' ''BRUCE'' ''WILLIAM''))
| *OR (WORKDEPT *EQ ''E11''
| *AND FIRSTNME *EQ %VALUES(''PHILIP'' ''MAUDE''))
| *OR (FIRSTNME *EQ %RANGE(''CHRISTINE'' ''DELORES'')
| *AND WORKDEPT *EQ %VALUES(''A00'' ''C01''))')
| The query optimizer analyzes the WHERE clause and rewrites the clause into an
equivalent form:
DECLARE BROWSE2 CURSOR FOR
SELECT * FROM EMPLOYEE
WHERE (WORKDEPT = 'D11' AND FIRSTNME
= 'DAVID')
OR (WORKDEPT = 'D11' AND FIRSTNME
= 'BRUCE')
OR (WORKDEPT = 'D11' AND FIRSTNME
= 'WILLIAM')
OR (WORKDEPT = 'E11' AND FIRSTNME
= 'PHILIP')
OR (WORKDEPT = 'E11' AND FIRSTNME
= 'MAUDE')
OR (WORKDEPT = 'A00' AND FIRSTNME
BETWEEN
'CHRISTINE' AND 'DELORES')
OR (WORKDEPT = 'C01' AND FIRSTNME BETWEEN
'CHRISTINE' AND 'DELORES')
OPTIMIZE FOR 99999 ROWS
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE))
| QRYSLT('(WORKDEPT *EQ ''D11'' *AND FIRSTNME *EQ
| ''DAVID'')
| *OR (WORKDEPT *EQ ''D11'' *AND FIRSTNME *EQ ''BRUCE'')
| *OR (WORKDEPT *EQ ''D11'' *AND FIRSTNME *EQ
| ''WILLIAM'')
| *OR (WORKDEPT *EQ ''E11'' *AND FIRSTNME *EQ ''PHILIP'')
| *OR (WORKDEPT *EQ ''E11'' *AND FIRSTNME *EQ ''MAUDE'')
| *OR (WORKDEPT *EQ ''A00'' *AND
| FIRSTNME *EQ %RANGE(''CHRISTINE'' ''DELORES''))
| *OR (WORKDEPT *EQ ''C01'' *AND
| FIRSTNME *EQ %RANGE(''CHRISTINE'' ''DELORES''))')
| In the query there are actually 7 separate ranges of key values for the
concatenated values of WORKDEPT and FIRSTNME:
Index X3 Start value Index X3 Stop value
'D11DAVID' 'D11DAVID'
'D11BRUCE' 'D11BRUCE'
'D11WILLIAM' 'D11WILLIAM'
'E11PHILIP' 'E11PHILIP'
'E11MAUDE' 'E11MAUDE'
'A00CHRISTINE' 'A00DELORES'
'C01CHRISTINE' 'C01DELORES'
Key positioning is performed over each range. Only those rows whose key values
fall within one of the ranges are returned. All of the selection can be accomplished
through key positioning. This significantly improves the performance of this query.
Consider the following example if the SQL statement is run using a parallel degree
of four.
DECLARE BROWSE2 CURSOR FOR
SELECT * FROM EMPLOYEE
WHERE (WORKDEPT = 'D11' AND FIRSTNME
= 'DAVID')
OR (WORKDEPT = 'D11' AND FIRSTNME
= 'BRUCE')
OR (WORKDEPT = 'D11' AND FIRSTNME
= 'WILLIAM')
OR (WORKDEPT = 'E11' AND FIRSTNME
= 'PHILIP')
OR (WORKDEPT = 'E11' AND FIRSTNME
= 'MAUDE')
OR (WORKDEPT = 'A00' AND FIRSTNME
BETWEEN
'CHRISTINE' AND 'DELORES')
OR (WORKDEPT = 'C01' AND FIRSTNME BETWEEN
'CHRISTINE' AND 'DELORES')
OPTIMIZE FOR 99999 ROWS
OPNQRYF example:
OPNQRYF FILE((EMPLOYEE))
QRYSLT('(WORKDEPT *EQ ''D11'' *AND FIRSTNME *EQ ''DAVID'')
*OR (WORKDEPT *EQ ''D11'' *AND FIRSTNME *EQ ''BRUCE'')
*OR (WORKDEPT *EQ ''D11'' *AND FIRSTNME *EQ ''WILLIAM'')
*OR (WORKDEPT *EQ ''E11'' *AND FIRSTNME *EQ ''PHILIP'')
*OR (WORKDEPT *EQ ''E11'' *AND FIRSTNME *EQ ''MAUDE'')
*OR (WORKDEPT *EQ ''A00'' *AND
FIRSTNME *EQ %RANGE(''CHRISTINE'' ''DELORES''))
*OR (WORKDEPT *EQ ''C01'' *AND
FIRSTNME *EQ %RANGE(''CHRISTINE'' ''DELORES''))')
The key ranges the database manager starts with are as follows:
Index X3 Start value Index X3 Stop value
Range 1 'D11DAVID' 'D11DAVID'
Range 2 'D11BRUCE' 'D11BRUCE'
Range 3 'D11WILLIAM' 'D11WILLIAM'
Range 4 'E11MAUDE' 'E11MAUDE'
Range 5 'E11PHILIP' 'E11PHILIP'
Range 6 'A00CHRISTINE' 'A00DELORES'
Range 7 'C01CHRISTINE' 'C01DELORES'
Ranges 1 to 4 are processed concurrently in separate tasks. As soon as one of
those four completes, range 5 is started. When another range completes, range 6 is
started, and so on. When one of the four ranges in progress completes and there
are no more new ones in the list to start, the remaining work left in one of the other
key ranges is split and each half is processed separately.
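The scheduling just described, with four ranges in flight and each completion starting the next queued range, can be sketched as a simple work queue. Range splitting is omitted here, and completion order is assumed to follow start order, which is a simplifying assumption:

```python
def schedule(ranges, tasks=4):
    """Return a log of (completed, started) pairs after the first
    `tasks` ranges are in flight concurrently."""
    in_flight = list(ranges[:tasks])
    events = []
    for nxt in ranges[tasks:]:
        done = in_flight.pop(0)    # assume the oldest range finishes first
        in_flight.append(nxt)      # its freed slot starts the next range
        events.append((done, nxt))
    return events
```

With the seven ranges above, ranges 1 to 4 start immediately, and completing range 1 starts range 5, completing range 2 starts range 6, and so on.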
Parallel key positioning cannot be used for queries that require any of the following:
v Specification of the *ALL commitment control level.
v Nested loop join implementation. See “Nested Loop Join Implementation” on
page 426.
| v Backward scrolling. For example, parallel key positioning cannot be used for
| queries defined by the Open Query File (OPNQRYF) command that specify
| ALWCPYDTA(*YES) or ALWCPYDTA(*NO), because the application might
| attempt to position to the last record and retrieve previous records. SQL-defined
| queries that are not defined as scrollable can use this method. Parallel key
| positioning can be used during the creation of a temporary result, such as a sort
| or hash operation, no matter what interface was used to define the query.
| OPNQRYF can be defined as not scrollable by specifying the *OPTIMIZE
| value for the ALWCPYDTA parameter, which enables the use of most of
| the parallel access methods.
v Restoration of the cursor position. For instance, a query requiring that the cursor
position be restored as the result of the SQL ROLLBACK HOLD statement or the
ROLLBACK CL command. SQL applications using a commitment control level
other than *NONE should specify *ALLREAD as the value for precompiler
parameter ALWBLK to allow this method to be used.
v Update or delete capability.
You should run the job in a shared pool with the *CALC paging option, because
this makes more efficient use of active memory. For more information on the
paging option, see the "Automatic System Tuning" section of the Work Management book.
Parallel key selection requires that SMP parallel processing be enabled either by
the system value QQRYDEGREE, by the query options file PARALLEL_DEGREE
option, or by the DEGREE parameter on the Change Query Attributes (CHGQRYA)
command. See “Controlling Parallel Processing” on page 473 for information on
how to control parallel processing.
Index Only Access Method
With index only access, all of the data is extracted from the index rather than
performing random I/O to the data space. The index entry is then used as the input for any
derivation or result mapping that might have been specified on the query. The
optimizer chooses this method when:
v All of the columns that are referenced within the query can be found within a
permanent index or within the key fields of a temporary index that the optimizer
has decided to create.
The following example illustrates a query where the optimizer could choose to
perform index only access.
CREATE INDEX X2
ON EMPLOYEE(WORKDEPT,LASTNAME,FIRSTNME)
OPNQRYF example:
OPNQRYF FILE((EMPLOYEE))
QRYSLT('WORKDEPT *EQ ''D11''')
In this example, the database manager uses X2 to position to the index entries for
WORKDEPT=’D11’ and then extracts the value for the column FIRSTNME from those
entries.
Note that the index key fields do not have to be contiguous to the leftmost key of
the index for index only access to be performed. Any key field in the index can be
used to provide data for the index only query. The index is used simply as the
source for the data so the database manager can finish processing the query after
the selection has been completed.
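The idea can be sketched briefly. In this illustrative Python (the index contents and the `firstnames_in_dept` helper are hypothetical), every column the query references is present in the index key, so entries answer the query directly:

```python
# Index X2 on (WORKDEPT, LASTNAME, FIRSTNME): each entry carries all columns
# the query needs, so no random I/O to the data space is required.

x2 = [  # index entries in key order: (WORKDEPT, LASTNAME, FIRSTNME)
    ('A00', 'HAAS', 'CHRISTINE'),
    ('D11', 'BROWN', 'DAVID'),
    ('D11', 'YOSHIMURA', 'MASATOSHI'),
]

def firstnames_in_dept(index, dept):
    # Position to the entries for the department, then extract FIRSTNME
    # directly from the key itself.
    return [first for (d, last, first) in index if d == dept]

result = firstnames_in_dept(x2, 'D11')
```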
Parallel Table or Index Based Pre-load Access Method
The parallel pre-load method can be used with any of the other data access
methods. The pre-load is started when the query is opened and control is returned
to the application before the pre-load is finished. The application continues fetching
rows using the other database access methods without any knowledge of pre-load.
Index-From-Index Access Method
The result is an index containing entries in the required key sequence for rows that
match the selection criteria.
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE))
| QRYSLT('WORKDEPT *EQ ''D11''')
| KEYFLD((LASTNAME))
| For this example, a temporary select/omit index is created with the primary key field
LASTNAME. It contains index entries for only those rows where WORKDEPT =
'D11'. Because the selection WORKDEPT = 'D11' selects less than approximately
20% of the rows, this method is a good choice. The messages created by the
PRTSQLINF CL command to describe this query in an SQL program are as follows:
SQL4012 Access path created from keyed file X1 for file 1.
SQL4011 Key row positioning used on file 1.
| Rather than using the index-from-index access method, you can use the query sort
| routine:
| v For SQL (see “Improving Performance by Using the ALWCPYDTA Parameter” on
| page 465) specify the following precompile options:
| – ALWCPYDTA(*OPTIMIZE), ALWBLK(*ALLREAD), and COMMIT(*CHG or
| *CS)
Hashing Access Method
The hashing access method can complement keyed sequence access paths or
serve as an alternative. For each selected row, the specified grouping or join value
in the row is run through a hashing function. The computed hash value is then used
to search a specific partition of the hash table. A hash table is similar to a
temporary work table, but has a different structure that is logically partitioned based
on the specified query. If the row’s source value is not found in the table, then this
marks the first time that this source value has been encountered in the database
table. A new hash table entry is initialized with this first-time value and additional
processing is performed based on the query operation. If the row’s source value is
found in the table, the hash table entry for this value is retrieved and additional
query processing is performed based on the requested operation (such as grouping
or joining). The hash method can only correlate (or group) identical values; the hash
table rows are not guaranteed to be sorted in ascending or descending order.
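The grouping behavior can be sketched in a few lines of plain Python; a dict stands in for DB2's partitioned hash table, and the rows and names are illustrative:

```python
from collections import defaultdict

def hash_group(rows, key, agg):
    # Rows with the same source value hash to the same entry, so identical
    # values are correlated without any sort.
    table = defaultdict(list)           # stand-in for the partitioned hash table
    for row in rows:
        table[row[key]].append(row)     # first occurrence initializes the entry
    return {k: agg(group) for k, group in table.items()}

rows = [{'dept': 'D11', 'sal': 1},
        {'dept': 'E11', 'sal': 2},
        {'dept': 'D11', 'sal': 3}]

totals = hash_group(rows, 'dept', lambda g: sum(r['sal'] for r in g))
```

Note that, as in the text above, the result correlates identical values but carries no ordering guarantee.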
Unless a temporary result is required, the hashing method can be used only when
the ALWCPYDTA(*OPTIMIZE) option has been specified, since the hash table built
by the database manager is a temporary copy of the selected rows.
The hashing algorithm allows the database manager to build a hash table that is
well-balanced, given that the source data is random and distributed. The hash table
itself is partitioned based on the requested query operation and the number of
source values being processed. The hashing algorithm then ensures that the new
hash table entries are distributed evenly across the hash table partitions. This
balanced distribution is necessary to guarantee that scans in different partitions of
the hash tables are processing the same number of entries. If one hash table
partition contains a majority of the hash table entries, then scans of that partition
are going to have to examine the majority of the entries in the hash table. This is
not very efficient.
Since the hash method typically processes the rows in a table sequentially, the
database manager can easily predict the sequence of memory pages from the
database table needed for query processing. This is similar to the advantages of
the dataspace scan access method. The predictability allows the database manager
to schedule asynchronous I/O of the table pages into main storage (also known as
pre-fetching). Pre-fetching enables very efficient I/O operations for the hash method
leading to improved query performance.
In contrast, query processing with a keyed sequence access method causes a
random I/O to the database table for every key value examined. The I/O operations
are random since the keyed-order of the data in the index does not match the
physical order of the rows in the database table. Random I/O can reduce query
performance because it leads to unnecessary use of I/O and processor unit
resources.
A keyed sequence access path can also be used by the hash method to process
the table rows in keyed order. The keyed access path can significantly reduce the
number of table rows that the hash method has to process. This can offset the
random I/O costs associated with keyed sequence access paths.
The hash table creation and population takes place before the query is opened.
Once the hash table has been completely populated with the specified database
records, the hash table is used by the database manager to start returning the
results of the queries. Additional processing might be required on the resulting hash
table rows, depending on the requested query operations.
Since blocks of table rows are automatically spread, the hashing access method
can also be performed in parallel so that several groups of records are being
hashed at the same time. This shortens the amount of time it takes to hash all the
rows in the database table.
If the DB2 SMP feature is installed, the hashing methods can be performed in
parallel.
Bitmap Processing Method
In this method, the optimizer chooses one or more keyed sequence access paths to
be used to aid in selecting records from the data space. Temporary bitmaps are
allocated (and initialized), one for each index. Each bitmap contains one bit for each
record in the underlying data space. For each index, key positioning and key
selection methods are used to apply selection criteria.
For each index entry selected, the bit associated with that record is set to '1'
(that is, turned on). The data space is not accessed. When the processing of the index is
complete, the bitmap contains the information on which records are to be selected
from the underlying data space. This process is repeated for each index. If two or
more indexes are used, the temporary bitmaps are logically ANDed and ORed
together to obtain one resulting bitmap. Once the resulting bitmap is built, it is used
to avoid mapping in records from the data space unless they are selected by the
query.
It is important to note that the indexes used to generate the bitmaps are not actually
used to access the selected records. For this reason, they are called tertiary
indexes. Conversely, indexes used to access the final records are called primary
indexes. Primary indexes are used for ordering, grouping, joins, and for selection
when no bitmap is used.
If the bitmap is used in conjunction with the data space scan method, the bitmap
initiates a skip-sequential processing. The data space scan (and parallel data space
scan) uses the bitmap to "skip over" non-selected records. This has several
advantages:
v No CPU processing is used processing non-selected records.
v I/O is minimized and the memory is not filled with the contents of the entire data
space.
The following example illustrates a query where the query optimizer chooses the
bitmap processing method in conjunction with the dataspace scan:
CREATE INDEX IX1 ON EMPLOYEE (WORKDEPT)
CREATE INDEX IX2 ON EMPLOYEE (SALARY)
OPNQRYF example:
OPNQRYF FILE((EMPLOYEE))
QRYSLT('WORKDEPT *EQ ''E01'' *OR SALARY > 50000')
In this example, both indexes IX1 and IX2 are used. The database manager first
generates a bitmap from the results of applying selection WORKDEPT = ’E01’
against index IX1 (using key positioning). The database manager then generates a
bitmap from the results of applying selection SALARY>50000 against index IX2
(again using key positioning).
Next, the database manager combines these two bitmaps into one using OR logic.
Finally, a data space scan is initiated. The data space scan uses the bitmap to skip
through the data space records, retrieving only those selected by the bitmap.
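The bitmap steps just described can be sketched as follows. This is toy Python with illustrative rows, not DB2's internal representation; a list of booleans stands in for each bitmap:

```python
# One bit per record in the data space; one bitmap per index; the bitmaps are
# combined with OR, and the scan skips records whose bit is off.

employees = [
    {'dept': 'E01', 'salary': 30000},
    {'dept': 'D11', 'salary': 60000},
    {'dept': 'D11', 'salary': 40000},
]

bm_ix1 = [r['dept'] == 'E01' for r in employees]     # built from index IX1
bm_ix2 = [r['salary'] > 50000 for r in employees]    # built from index IX2
combined = [a or b for a, b in zip(bm_ix1, bm_ix2)]  # OR logic

# Skip-sequential scan: only records selected by the bitmap are retrieved.
selected = [r for r, bit in zip(employees, combined) if bit]
```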
This example also shows an additional capability provided with bitmap processing
(use of an index for ANDed selection was already possible but bitmap processing
now allows more than one index). When using bitmap processing, multiple index
usage is possible with selections where OR is the major boolean operator.
| The messages created by the PRTSQLINF command when used to describe this
| query would look like:
| SQL4010 Arrival sequence access for file 1.
| SQL4032 Access path IX1 used for bitmap processing of file 1.
| SQL4032 Access path IX2 used for bitmap processing of file 1.
| CPI4329 Arrival sequence access was used for file EMPLOYEE.
| CPI4338 2 Access path(s) used for bitmap processing of file EMPLOYEE.
| If the bitmap is used in conjunction with either the key selection or key positioning
method, it implies that the bitmap (generated from tertiary indexes) is being used to
aid a primary index access. The following example illustrates a query where bitmap
processing is used in conjunction with the key positioning for a primary index:
CREATE INDEX PIX ON EMPLOYEE (LASTNAME)
CREATE INDEX TIX1 ON EMPLOYEE (WORKDEPT)
CREATE INDEX TIX2 ON EMPLOYEE (SALARY)
OPNQRYF example:
OPNQRYF FILE((EMPLOYEE))
QRYSLT('WORKDEPT *EQ ''E01'' *OR SALARY > 50000')
KEYFLD(LASTNAME)
In this example, indexes TIX1 and TIX2 are used in bitmap processing. The
database manager first generates a bitmap from the results of applying selection
WORKDEPT = ’E01’ against index TIX1 (using key positioning). It then generates a
bitmap from the results of applying selection SALARY>50000 against index TIX2
(again using key positioning).
| The database manager then combines these two bitmaps into one using OR logic.
| A key selection method is initiated using (primary) index PIX. For each entry in
| index PIX, the bitmap is checked. If the entry is selected by the bitmap, then the
| data space record is retrieved and processed.
| Bitmap processing can be used for join queries as well. Since bitmap processing is
| on a per-file basis, each file of a join can independently use or not use bitmap
| processing.
| The following example illustrates a query where bitmap processing is used against
| the second file of a join query but not on the first file:
| CREATE INDEX EPIX ON EMPLOYEE(EMPNO)
| CREATE INDEX TIX1 ON EMPLOYEE(WORKDEPT)
| CREATE INDEX TIX2 ON EMPLOYEE(SALARY)
| DECLARE C1 CURSOR FOR
| SELECT * FROM PROJECT, EMPLOYEE
| WHERE RESEMP=EMPNO AND
| (WORKDEPT='E01' OR SALARY>50000)
| In this example, the optimizer decides that the join order is file PROJECT to file
| EMPLOYEE. Data space scan is used on file PROJECT. For file EMPLOYEE, index
| EPIX is used to process the join (primary index). Indexes TIX1 and TIX2 are used
| in bitmap processing.
| The database manager positions to the first record in file PROJECT. It then
| performs the join using index EPIX. Next, it generates a bitmap from the results of
| applying selection WORKDEPT='E01' against index TIX1 (using key positioning). It
| then generates a bitmap from the results of applying selection SALARY>50000
| against index TIX2 (again using key positioning).
| Next, the database manager combines these two bitmaps into one using OR logic.
| Finally, the entry that EPIX is currently positioned to is checked against the bitmap.
| The entry is either selected or rejected by the bitmap. If the entry is selected, the
| records are retrieved from the underlying data space. Next, index EPIX is probed
| for the next join record. When an entry is found, it is compared against the bitmap
| and either selected or rejected. Note that the bitmap was generated only once (the
| first time it was needed) and is just reused after that.
| The query optimizer debug messages put into the job log would look like:
| CPI4327 File PROJECT processed in join position 1.
| CPI4326 File EMPLOYEE processed in join position 2.
| CPI4338 2 Access path(s) used for bitmap processing of file EMPLOYEE.
| An index with keys (WORKDEPT, FIRSTNME) would be the best index to use to
| satisfy this query. However, two indexes, one with a key of WORKDEPT and the
| other with a key of FIRSTNME could be used in bitmap processing, with their
| resulting bitmaps ANDed together and data space scan used to retrieve the result.
| With the bitmap processing method, you can create several indexes, each with only
| one key field, and have the optimizer use them as general purpose indexes for
| many queries. You can avoid problems involved with trying to determine the best
| composite key indexes for all queries being performed against a table. Bitmap
| processing, in comparison to using a multiple key field index, allows more ease of
| use, but at some cost to performance. Keep in mind that you will always achieve
| the best performance by using composite key indexes.
| the first record fetch from the OPNQRYF open identifier), these updated records
| will not be retrieved on subsequent fetches through the OPNQRYF open
| identifier.
| For this reason, the query optimizer will not consider bitmap processing if the
| ALWCPYDTA option is *NO. The exception to this is if the query contains
| grouping or one or more aggregate functions (for example, SUM, COUNT, MIN,
| MAX), in which case a static copy of the data is already being made.
| v Do not use bitmap processing for a query that is insert, update, or delete
| capable. For OPNQRYF, the OPTION parameter must be set to *INP and the
| SEQONLY parameter must be set to *YES. There must not be any overrides to
| SEQONLY(*NO).
Data Access Methods
The following table provides a summary of the data management methods
discussed.
Table 38. Summary of Data Management Methods

“Dataspace Scan Access Method” on page 399
   Selection process: Reads all rows. Selection criteria applied to data in
   dataspace.
   Good when: > 20% rows selected.
   Not good when: < 20% rows selected.
   Selected when: No ordering, grouping, or joining and > 20% rows selected.
   Advantages: Minimizes page I/O through pre-fetching.

“Parallel Pre-Fetch Access Method” on page 401
   Selection process: Data retrieved from auxiliary storage in parallel streams.
   Reads all rows. Selection criteria applied to data in dataspace.
   Good when: > 20% rows selected.
   Not good when: < 20% rows selected. Query is CPU bound.
   Selected when: No ordering, grouping, or joining and > 20% rows selected, and:
   1. Adequate active memory available.
   2. Query would otherwise be I/O bound.
   3. Data spread across multiple disk units.
   Advantages: Minimizes wait time for page I/O through parallel pre-fetching.

“Parallel Data Space Scan Method (available only when the DB2 UDB Symmetric
Multiprocessing feature is installed)” on page 403
   Selection process: Data read and selected in parallel tasks.
   Good when: > 10% rows selected, large table, and:
   1. Adequate active memory available.
   2. Data spread across multiple disk units.
   3. DB2 UDB Symmetric Multiprocessing installed.
   4. Multi-processor system.
   Not good when: < 10% rows selected. Query is CPU bound on a uniprocessor
   system.
   Selected when:
   1. DB2 UDB Symmetric Multiprocessing installed.
   2. I/O bound or running on a multi-processor system.
   Advantages: Significant performance, especially on multiprocessors.

“Index Only Access Method” on page 412
   Selection process: Done in combination with any of the other index access
   methods.
   Good when: All columns used in the query exist as key fields. DB2 UDB
   Symmetric Multiprocessing must be installed.
   Not good when: < 20% rows selected or small result set of rows.
   Selected when: All columns used in the query exist as key fields and DB2 UDB
   Symmetric Multiprocessing is installed.
   Advantages: Reduced I/O to the dataspace.

“Parallel Table or Index Based Pre-load Access Method” on page 413
   Selection process: Index or table data loaded in parallel to avoid random
   access.
   Good when: Excessive random activity would otherwise occur against the object
   and active memory is available to hold the entire object.
   Not good when: Active memory is already over-committed.
   Selected when: Excessive random activity would result from processing the
   query and active memory is available which can hold the entire object.
   Advantages: Random page I/O is avoided, which can improve I/O bound queries.

“Hashing Access Method” on page 415 (parallel or non-parallel)
   Selection process: Rows with common values are grouped together.
   Good when: Longer running grouping and/or join queries.
   Not good when: Short running queries.
   Selected when: Join or grouping specified.
   Advantages: Reduces random I/O when compared to index methods. If DB2 UDB
   Symmetric Multiprocessing is installed, possible exploitation of SMP
   parallelism.

“Bitmap Processing Method” on page 416
   Selection process: Key position/key selection used to build bitmap. Bitmap
   used to avoid touching rows in table.
   Good when: Selection can be applied to index and either > 5% or < 25% rows
   selected, or an OR operator is involved in selection that precludes the use
   of only one index.
   Not good when: > 25% rows selected.
   Selected when: Indexes match selection criteria.
   Advantages: Reduces page I/O to the data space. Allows multiple indexes per
   table.
The Optimizer
The optimizer is an important part of DB2 UDB for AS/400 because the optimizer:
v Makes the key decisions which affect database performance.
v Identifies the techniques which could be used to implement the query.
v Selects the most efficient technique.
Data manipulation statements such as SELECT specify only what data the user
wants, not how to get to that data. This access path to the data is chosen by the
optimizer and stored in the access plan. This section covers the techniques
employed by the query optimizer for performing this task, including:
v Cost estimation
v Access plan validation
v Join optimization
v Grouping optimization
| v *FIRSTIO–Minimize the time required to retrieve the first buffer of
| records from the file. Biases the optimization toward not creating an
| index. Either a data scan or an existing index is preferred. When
| *FIRSTIO is selected, users may also pass in the number of records
| they expect to retrieve from the query. The optimizer uses this value to
| determine the percentage of records that will be returned and optimizes
| accordingly. A small value would minimize the time required to retrieve
| the first n records, similar to *FIRSTIO. A large value would minimize the
| time to retrieve all n records, similar to *ALLIO.
| v *ALLIO–Minimize the time to process the whole query assuming that all
| query records are read from the file. Does not bias the optimizer to any
| particular access method.
Page faults can also be greatly affected if index only access can be performed,
thus eliminating any random I/O to the data space.
Key range estimate is a method the optimizer uses to gain more accurate
estimates of the number of expected rows selected from one or more selection
predicates. The optimizer estimates by applying the selection predicates against
the left-most keys of an existing index. The default filter factors can then be
further refined by the estimate based on the key range. If an index exists whose
left-most keys match columns used in row selection predicates, that index can be
used to estimate the number of keys that match the selection criteria. The
estimate of the number of keys is based on the number of pages and key density
of the machine index and is done without actually accessing the keys. Full
indexes over columns used in selection predicates can significantly help
optimization.
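The key range estimate can be sketched with a sorted key list. This is illustrative Python: `bisect` positions over an in-memory list stand in for the machine index's page-and-density arithmetic, which does not actually touch the keys:

```python
from bisect import bisect_left, bisect_right

def estimate_selected_fraction(sorted_keys, low, high):
    # Probe the sorted left-most keys of the index with the predicate's bounds
    # and use the positions, not the row data, to estimate selectivity.
    start = bisect_left(sorted_keys, low)
    stop = bisect_right(sorted_keys, high)
    return (stop - start) / len(sorted_keys)

keys = sorted(['A00', 'A00', 'C01', 'D11', 'D11', 'D11', 'E11', 'E21'])
fraction = estimate_selected_fraction(keys, 'D11', 'D11')  # 3 of 8 keys
```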
The time limit factor controls how much time is spent choosing an implementation. It
is based on how much time has been spent so far and the cost of the current best
implementation found. Dynamic SQL queries are subject to the optimizer time
restrictions; optimization time for static SQL queries is not limited. For OPNQRYF,
if you specify OPTALLAP(*YES), the optimization time is not limited.
For small tables, the query optimizer spends little time in query optimization. For
large tables, the query optimizer considers more indexes. Generally, the optimizer
considers five or six indexes (for each table of a join) before running out of
optimization time.
Join Optimization
| A join operation is a complex function that requires special attention in order to
| achieve good performance. This section describes how DB2 UDB for AS/400
| implements inner join queries and how optimization choices are made by the query
| optimizer. It also describes design tips and techniques which help avoid or solve
| performance problems.
| The optimization for other types of joins, LEFT OUTER JOIN or EXCEPTION JOIN
| (OPNQRYF JDFTVAL(*YES) or JDFTVAL(*ONLYDFT) parameter), is similar except
| that the join order is always the same as the order of the tables specified in the
| FROM clause (OPNQRYF FILE parameter). Information about these types of joins
| will not be detailed here, but most of the information and tips in this section also
| apply to joins of this type.
| The query optimizer might also decide to break the query into these two parts to
| improve performance when the SQL ALWCPYDTA(*OPTIMIZE) precompiler
| parameter or the OPNQRYF KEYFLD and ALWCPYDTA(*OPTIMIZE)
| parameters are specified.
v All rows that satisfy the join condition from each secondary dial are located using
a keyed access path. Rows are retrieved from secondary tables in random
sequence. This random disk I/O time often accounts for a large percentage of the
processing time of the query. Since a given secondary dial is searched once for
each row selected from the primary and preceding secondary dials that
satisfies the join conditions of the preceding dials, a large
number of searches may be performed against the later dials. Any inefficiencies
in the processing of the later dials can significantly inflate the query processing
time. This is the reason why attention to performance considerations for join
queries can reduce the run-time of a join query from hours to minutes.
v Again, all selected rows from secondary dials are accessed through a keyed
access path. If an efficient keyed access path cannot be found, a temporary
keyed access path is created. Some join queries build temporary access paths
over secondary dials even when an access path exists for all of the join keys.
Because efficiency is very important for secondary dials of longer running
queries, the query optimizer may choose to build a temporary keyed access path
which contains only keys which pass the local row selection for that dial. This
preprocessing of row selection allows the database manager to process row
selection in one pass instead of each time rows are matched for a dial.
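The probing pattern described above can be sketched as follows. This is illustrative Python, not DB2 internals; a dict of lists stands in for the keyed access path into the secondary dial:

```python
from collections import defaultdict

def nested_loop_join(primary, secondary, pkey, skey):
    # Build a keyed lookup over the secondary table once; this stands in for
    # the keyed access path used to locate matching secondary rows.
    index = defaultdict(list)
    for row in secondary:
        index[row[skey]].append(row)
    out = []
    for prow in primary:                       # one probe per selected primary row
        for srow in index.get(prow[pkey], []):
            out.append({**prow, **srow})       # each match forms a result row
    return out

employees = [{'EMPNO': '10', 'WORKDEPT': 'D11'}]
projects = [{'PROJNO': 'P1', 'DEPTNO': 'D11'}, {'PROJNO': 'P2', 'DEPTNO': 'E11'}]
joined = nested_loop_join(employees, projects, 'WORKDEPT', 'DEPTNO')
```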
Hash Join
The hash join method is similar to nested loop join. Instead of using keyed access
paths to locate the matching rows in a secondary table, however, a hash temporary
result table is created that contains all of the rows selected by local selection
against the table. The structure of the hash table is such that rows with the same
join value are loaded into the same hash table partition (clustered). The location of
the rows for any given join value can be found by applying a hashing function to the
join value.
v Unlike indexes, entries in hash tables are not updated to reflect changes of
column values in the underlying table. The existence of a hash table does not
affect the processing cost of other updating jobs in the system.
The query attribute DEGREE, which can be changed by using the Change Query
Attributes (CHGQRYA) CL command, does not control whether the optimizer
chooses to use hash join. However, hash join queries can use SMP parallelism if
the query attribute DEGREE is set to either *OPTIMIZE, *MAX, or *NBRTASKS.
Hash join is used in many of the same cases where a temporary index would have
been built. Join queries which are most likely to be implemented using hash join are
those where either:
v All rows in the various tables of the join are involved in producing result rows.
v Significant non-join selection is specified for the tables of the join which reduces
the number of rows in the tables that are involved with the join result.
The following is an example of a join query that would process all of the rows from
the queried tables:
SELECT *
FROM EMPLOYEE, EMP_ACT
WHERE EMPLOYEE.EMPNO = EMP_ACT.EMPNO
OPTIMIZE FOR 99999999 ROWS
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE EMP_ACT)) FORMAT(FORMAT1)
| JFLD((1/EMPNO 2/EMPNO *EQ))
| ALWCPYDTA(*OPTIMIZE)
The following is an example of a join query in which the tables of the join are
significantly reduced by local selection:
SELECT EMPNO, LASTNAME, DEPTNAME
FROM EMPLOYEE, DEPARTMENT
WHERE EMPLOYEE.WORKDEPT = DEPARTMENT.DEPTNO
AND EMPLOYEE.HIREDATE BETWEEN '1995-01-30' AND '1996-01-30'
AND DEPARTMENT.DEPTNO IN ('A00', 'D01', 'D11', 'D21', 'E11')
OPTIMIZE FOR 99999999 ROWS
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE DEPARTMENT))
| FORMAT(FORMAT2)
| QRYSLT('1/HIREDATE *EQ %RANGE(''1995-01-30'' ''1996-01-30'')
| *AND 2/DEPTNO *EQ %VALUES(''A00'' ''D01'' ''D11'' ''D21''
| ''E11''))')
| JFLD((1/WORKDEPT 2/DEPTNO *EQ))
| ALWCPYDTA(*OPTIMIZE)
The messages created by the PRTSQLINF CL command to describe this hash join
query in an SQL program would appear as follows:
SQL402A Hashing algorithm used to process join.
SQL402B File EMPLOYEE used in hash join step 1.
SQL402B File DEPARTMENT used in hash join step 2.
When ordering, grouping, or non-equal selection with operands derived from
columns of different tables is specified, or when result columns are derived from
columns of different tables, the hash join processing is done first and the result
rows of the join are written to a temporary table. Then, as a second step, the
query is completed using the temporary table.
The following is an example of a join query with selection specified with operands
derived from columns of different tables:
SELECT EMPNO, LASTNAME, DEPTNAME
FROM EMPLOYEE, DEPARTMENT
WHERE EMPLOYEE.WORKDEPT = DEPARTMENT.DEPTNO
AND EMPLOYEE.EMPNO > DEPARTMENT.MGRNO
OPTIMIZE FOR 99999999 ROWS
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE DEPARTMENT))
| FORMAT(FORMAT2)
| JFLD((1/WORKDEPT 2/DEPTNO *EQ) (1/EMPNO 2/MGRNO
| *GT))
| This query is implemented using the following steps:
1. A temporary hash table is built over table DEPARTMENT with a key of
DEPTNO. This occurs when the query is opened.
2. For each row retrieved from the EMPLOYEE table, the temporary hash table will
be probed for the matching join values.
3. For each matching row found, a result row is written to a temporary table.
4. After all of the join result rows are written to the temporary table, rows that are
selected by EMPNO > MGRNO are read from the temporary file and returned to
the application.
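The four steps above can be sketched as follows (illustrative Python with hypothetical rows; a dict of lists stands in for the temporary hash table):

```python
# Step 1 (at open): build a hash table over DEPARTMENT keyed by DEPTNO.
departments = [{'DEPTNO': 'D11', 'DEPTNAME': 'MFG', 'MGRNO': '000060'}]
employees = [
    {'EMPNO': '000150', 'LASTNAME': 'ADAMSON', 'WORKDEPT': 'D11'},
    {'EMPNO': '000010', 'LASTNAME': 'HAAS', 'WORKDEPT': 'D11'},
]

hash_tbl = {}
for d in departments:
    hash_tbl.setdefault(d['DEPTNO'], []).append(d)

# Steps 2-3: probe the hash table per EMPLOYEE row; spool matches to a
# temporary table.
temp = []
for e in employees:
    for d in hash_tbl.get(e['WORKDEPT'], []):
        temp.append({**e, **d})

# Step 4: read the temporary table back, applying EMPNO > MGRNO.
result = [r for r in temp if r['EMPNO'] > r['MGRNO']]
```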
The messages created by the PRTSQLINF CL command to describe this hash join
query in an SQL program would appear as follows:
SQL402A Hashing algorithm used to process join.
SQL402B File EMPLOYEE used in hash join step 1.
SQL402B File DEPARTMENT used in hash join step 2.
SQL402C Temporary result table created for hash join query.
Join specifications which are not implemented for the dial are either deferred until
they can be processed in a later dial or, if an inner join was being performed for this
dial, processed as row selection.
For a given dial, the only join specifications which are usable as join columns for
that dial are those being joined to a previous dial. For example, for the second dial
the only join specifications that can be used to satisfy the join condition are join
specifications which reference columns in the primary dial. Likewise, the third dial
can only use join specifications which reference columns in the primary and the
second dials and so on. Join specifications which reference later dials are deferred
until the referenced dial is processed.
For any given dial, only one type of join operator is normally implemented. For
example, if one inner join specification has a join operator of '=' and the other
has a join operator of '>', the optimizer attempts to implement the join with the '='
operator. The '>' join specification is processed as row selection after a matching
row for the '=' specification is found. In addition, multiple join specifications that use
| the same operator are implemented together.
| Note: For OPNQRYF, only one type of join operator is allowed for either a left
| outer or an exception join.
| When looking for an existing keyed access path to access a secondary dial, the
query optimizer looks at the left-most key columns of the access path. For a given
dial and keyed access path, the join specifications which use the left-most key
columns can be used. For example:
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE EMP_ACT)) FORMAT(FORMAT1)
| JFLD((1/EMPNO 2/EMPNO *EQ) (1/HIREDATE 2/EMSTDATE
| *EQ))
| For the keyed access path over EMP_ACT with key columns EMPNO, PROJNO,
and EMSTDATE, the join operation is performed only on column EMPNO. After the
join is processed, row selection is done using column EMSTDATE.
The query optimizer also uses local row selection when choosing the best use of
the keyed access path for the secondary dial. If the previous example had been
expressed with a local predicate as:
DECLARE BROWSE2 CURSOR FOR
SELECT * FROM EMPLOYEE, EMP_ACT
WHERE EMPLOYEE.EMPNO = EMP_ACT.EMPNO
AND EMPLOYEE.HIREDATE = EMP_ACT.EMSTDATE
AND EMP_ACT.PROJNO = '123456'
OPTIMIZE FOR 99999 ROWS
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE) (EMP_ACT)) FORMAT(FORMAT2)
| QRYSLT('2/PROJNO *EQ ''123456''')
| JFLD((1/EMPNO 2/EMPNO *EQ) (1/HIREDATE 2/EMSTDATE
| *EQ))
| the keyed access path with key columns EMPNO, PROJNO, and EMSTDATE is
fully utilized by combining join and selection into one operation against all three key
columns.
When creating a temporary keyed access path, the left-most key columns are the
usable join columns in that dial position. All local row selection for that dial is
processed when selecting keys for inclusion into the temporary keyed access path.
A temporary keyed access path is similar to the access path created for a
select/omit keyed logical file. The temporary index for the previous example would
have key fields of EMPNO and EMSTDATE.
| Since the OS/400 query optimizer attempts a combination of join and local record
| selection when determining access path usage, it is possible to achieve almost all
| of the advantages of a temporary keyed access path by using an existing access
| path. In the example above, either implementation is possible: an existing index
| may be used or a temporary index may be created. A temporary access path would
| be built with the local row selection on PROJNO applied during the access path’s
| creation; the temporary access path would have key fields of EMPNO and
| EMSTDATE (to match the join selection). If, instead, an existing keyed access path
| were used with key fields of EMPNO, PROJNO, EMSTDATE (or PROJNO,
| EMPNO, EMSTDATE or EMSTDATE, PROJNO, EMPNO or ...) the local record
| selection could be applied at the same time as the join selection (rather than prior
| to the join selection, as happens when the temporary access path is created).
| The implementation using the existing index is more likely to provide faster
| performance because join and selection processing are combined without the
| overhead of building a temporary index. However, the use of the existing keyed
Chapter 24. DB2 UDB for AS/400 Data Management and Query Optimizer Tips 431
| access path may have just slightly slower I/O processing than the temporary access
| path because the local selection is run many times rather than once. In general, it is
| a good idea to have existing indexes available with key columns for the combination
| of join columns and columns using equal selection as the left-most keys.
| 3. Determine the cost, access method, and expected number of rows for each
| combination of two tables; for example:
| 1-2 2-1 1-3 3-1 1-4 4-1 2-3 3-2 2-4 4-2 3-4 4-3
| 4. Choose the combination with the lowest join cost.
| If the cost is nearly the same, then choose the combination which selects the
| fewest rows.
| 5. Determine the cost, access method, and expected number of rows for each
| remaining table joined to the previous secondary table.
| 6. Select an access method for each table that has the lowest cost for that table.
| 7. Choose the secondary table with the lowest join cost.
| If the cost is nearly the same, choose the combination which selects the fewest
| rows.
| 8. Repeat steps 4 through 7 until the lowest cost join order is determined.
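Steps 4 through 7 amount to a greedy search over the remaining tables. A minimal Python sketch follows; the caller-supplied `cost` function stands in for the optimizer's join-cost estimate, and the tie-breaking on fewest selected rows is omitted:

```python
def choose_join_order(primary, secondaries, cost):
    """Greedy join-order selection: repeatedly append the
    remaining table with the lowest estimated join cost, given
    the tables already placed in the order."""
    order = [primary]
    remaining = list(secondaries)
    while remaining:
        best = min(remaining, key=lambda table: cost(order, table))
        order.append(best)
        remaining.remove(best)
    return order
```

Because the choice at each step depends on the tables already ordered, a table that looked expensive early can become the cheapest choice later, which is why the loop re-costs every remaining table on each pass.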
| When a join logical file is referenced, when a left outer or an exception join is
| specified, or when the join order is forced to the specified file order, the query
| optimizer loops through all of the dials in the order specified and determines the
| lowest cost access method for each dial.
As the query optimizer compares the various possible access choices, it must
assign a numeric cost value to each candidate and use that value to determine the
implementation which consumes the least amount of processing time. This costing
value is a combination of CPU and I/O time and is based on the following
assumptions:
v Table pages and keyed access path pages must be retrieved from auxiliary
storage. For example, the query optimizer is not aware that an entire table may
be loaded into active memory as the result of a SETOBJACC CL command.
The main factors of the join cost calculations for secondary dials are the number of
rows selected in all previous dials and the number of rows which match, on
average, each of the rows selected from previous dials. Both of these factors can
be derived by estimating the number of matching rows for a given dial.
When the join operator is something other than equal, the expected number of
matching rows is based on the following default filter factors:
v 33% for less-than, greater-than, less-than-equal-to, or greater-than-equal-to
v 90% for not equal
| v 25% for BETWEEN range (OPNQRYF %RANGE)
| v 10% for each IN list value (OPNQRYF %VALUES)
For example, when the join operator is less-than, the expected number of matching
rows is .33 * (number of rows in the dial). If no join specifications are active for the
current dial, a Cartesian product is assumed. For Cartesian products, the number
of matching rows is every row in the dial, unless local row selection can be applied
to the keyed access path.
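The default filter factors can be applied directly. The factors below are the ones quoted in the text; the operator names and function are illustrative, not a DB2 interface:

```python
# Default join filter factors quoted in the text.
DEFAULT_FILTER = {
    "<": 0.33, ">": 0.33, "<=": 0.33, ">=": 0.33,
    "<>": 0.90,       # not equal
    "BETWEEN": 0.25,  # OPNQRYF %RANGE
    "IN": 0.10,       # per IN list value (OPNQRYF %VALUES)
}

def expected_matches(op, dial_rows, in_list_size=1):
    """Expected number of matching rows for a non-equal join
    operator, as a fraction of the rows in the dial."""
    factor = DEFAULT_FILTER[op]
    if op == "IN":
        factor *= in_list_size  # 10% for each IN list value
    return factor * dial_rows

# A less-than join against a 1000-row dial is expected to match
# about .33 * 1000 = 330 rows.
```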
When the join operator is equal, the expected number of rows is the average
number of duplicate rows for a given value.
The AS/400 performs index maintenance (insertion and deletion of key values in an
index) and maintains a running count of the number of unique values for the given
key columns in the index. These statistics are bound with the index object and are
always maintained. The query optimizer uses these statistics when it is optimizing a
query. Maintaining these statistics adds no measurable amount of overhead to
index maintenance. This statistical information is only available for indexes which:
v Contain no varying length character keys.
Note: If you have varying length character columns used as join columns, you
can create an index which maps the varying length character column to a
fixed character key using the CRTLF CL command. An index that contains
fixed length character keys defined over varying length data supplies
average number of duplicate values statistics.
v Were created or rebuilt on an AS/400 system on which Version 2 Release 3 or a
later version is installed.
Note: The query optimizer can use indexes created on earlier versions of
OS/400 to estimate if the join key values have a high or low average
number of duplicate values. If the index is defined with only the join keys,
the estimate is done based on the size of the index. In many cases,
additional keys in the index cause matching row estimates through that
index to not be valid. The performance of some join queries may be
improved by rebuilding these access paths.
Average number of duplicate values statistics are maintained only for the first 4
left-most keys of the index. For queries which specify more than 4 join columns, it
might be beneficial to create multiple additional indexes so that an index can be
found with average number of duplicate values statistics available within the 4
left-most key columns. This is particularly important if some of the join columns are
somewhat unique (low average number of duplicate values).
Using the average number of duplicate values for equal joins or the default filter
value for the other join operators, we now have the number of matching rows. The
following formula is used to compute the number of join rows from previous dials.
NPREV = Rp * M2 * FF2 * ... * Mn * FFn
NPREV
The number of join rows from all previous dials.
Rp The number of rows selected from the primary dial.
M2 The number of matching rows for dial 2.
FF2 Filtering reduction factor for predicates local to dial 2 that are not already
applied using M2 above.
Mn The number of matching rows for dial n.
Note: Multiply each pair of matching rows (Mn) and filtering reduction
factors (FFn) for every secondary dial preceding the current dial.
Now that it has calculated the number of join rows from previous dials, the optimizer
is ready to generate a cost for the access method.
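The NPREV formula is a running product over the preceding secondary dials. A small sketch, with illustrative names:

```python
def nprev(rp, secondary_dials):
    """NPREV = Rp * M2 * FF2 * ... * Mn * FFn

    rp              rows selected from the primary dial
    secondary_dials (matches, filter_factor) pairs for each
                    secondary dial preceding the current dial
    """
    result = rp
    for matches, filter_factor in secondary_dials:
        result *= matches * filter_factor
    return result

# 1000 primary rows, with dial 2 matching 3 rows on average and
# 50% of them surviving local selection: 1000 * 3 * 0.5 = 1500.
```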
This secondary dial access method is used if no usable keyed access path is found
or if the temporary keyed access path or hash table performs better than any
existing keyed access path. This method can be better than using any existing
access path because the row selection is completed when the keyed access path
or hash table is created if any of the following are true:
v The number of matches (MATCH) is high.
v The number of join rows from all previous dials (NPREV) is high.
v There is some filtering reduction (FF < 100%).
Temporary Keyed Access Path or Hash Table from Keyed Access
Path
The basic cost formula for this access method choice is the same as that of using a
temporary keyed access path or hash table built from a table, with one exception.
The cost to build the temporary keyed access path, CRTDSI, is calculated to
include the selection of the rows through an existing keyed access path. This
access method is used for join secondary dial access for the same reason.
However, the creation from a keyed access path might be less costly.
JSCOST
Join Secondary cost
NPREV
The number of join rows from all previous dials
MATCH
The number of matching keys which will be found in this keyed access path
(usually average duplicates)
KeyAccess
The cost to access a key in a keyed access path
FCost The cost to access a row from the table
FirstIO
A reduction ratio to reduce the non-startup cost because of an optimization
goal to optimize for the first buffer retrieval. For more information, see “Cost
Estimation” on page 423.
The query optimizer considers using an index which only has a subset of the join
columns as the left-most leading keys when:
v It is able to determine from the average number of duplicate values statistics that
the average number of rows with duplicate values is quite low.
v The number of rows being selected from the previous dials is small.
| OPNQRYF example:
| OPNQRYF FILE((EMPLOYEE) (EMP_ACT)) FORMAT(FORMAT1)
| QRYSLT('1/EMPNO *EQ ''000010''')
| JFLD((1/EMPNO 2/EMPNO *EQ))
|
| The following rules determine which predicates are added to other join dials:
v The dials affected must have join operators of equal.
v The predicate is isolatable, which means that a false condition from this
predicate would omit the row.
v One operand of the predicate is an equal join column and the other is a literal or
host variable.
| v The predicate operator is not LIKE or IN (OPNQRYF %WLDCRD, %VALUES, or
| *CT).
v The predicate is not connected to other predicates by OR.
v The join type for the dial is an inner join.
| The query optimizer generates a new predicate, whether or not a predicate already
| exists in the WHERE clause (OPNQRYF QRYSLT parameter).
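The transitive generation of predicates across equal-join dials can be sketched as follows; the data shapes and function name are assumptions for illustration only:

```python
def transitive_predicates(equal_joins, local_preds):
    """Sketch of predicate transitivity: when A.x = B.y is an
    equal join and A.x = literal is an isolatable local
    predicate, generate B.y = literal for the other dial.

    equal_joins  list of (col_a, col_b) equal-join pairs
    local_preds  mapping of column -> literal value
    """
    generated = {}
    for col_a, col_b in equal_joins:
        if col_a in local_preds:
            generated[col_b] = local_preds[col_a]
        if col_b in local_preds:
            generated[col_a] = local_preds[col_b]
    return generated

# EMPLOYEE.EMPNO = EMP_ACT.EMPNO combined with the local
# predicate EMPLOYEE.EMPNO = '000010' yields the new predicate
# EMP_ACT.EMPNO = '000010'.
```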
| Some predicates are redundant. This occurs when a previous evaluation of other
| predicates in the query already determines the result that predicate provides.
| Redundant predicates can be specified by you or generated by the query optimizer
| during predicate manipulation. Redundant predicates with predicate operators of =,
| >, >=, <, <=, or BETWEEN (OPNQRYF *EQ, *GT, *GE, *LT, *LE, or %RANGE) are
| merged into a single predicate to reflect the most selective range.
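Merging redundant range predicates on one column into the most selective range can be sketched like this; the strictness of the bounds (< versus <=) is glossed over, and the names are illustrative:

```python
def merge_ranges(preds):
    """Collapse redundant range predicates on a single column
    into one most-selective inclusive (low, high) range.

    preds is a list of (op, value) pairs; BETWEEN takes a
    (low, high) tuple as its value; None means an open bound.
    """
    low, high = None, None
    for op, val in preds:
        if op in (">", ">="):
            if low is None or val > low:
                low = val           # keep the tightest lower bound
        elif op in ("<", "<="):
            if high is None or val < high:
                high = val          # keep the tightest upper bound
        elif op == "=":
            low = high = val        # equality pins the range
        elif op == "BETWEEN":
            lo, hi = val
            if low is None or lo > low:
                low = lo
            if high is None or hi < high:
                high = hi
    return low, high

# COL > 5 AND COL BETWEEN 3 AND 20 AND COL < 15 merges to the
# single range 5..15.
```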
The optimizer will evaluate the join criteria along with any record selection that may
be specified in order to determine the join type for each dial and for the entire
query. Once this information is known the optimizer will generate additional
selection using the relative record number of the tables to simulate the different
types of joins that may occur within the query.
Since null values are returned for any unmatched rows for either a left outer or an
exception join, any isolatable selection specified for that dial, including any
additional join criteria that may be specified in the WHERE clause, will cause all of
the unmatched records to be eliminated (unless the selection is for an IS NULL
predicate). This will cause the join type for that dial to be changed to an inner join
(or to an exception join, if the IS NULL predicate was specified).
In the following example a left outer join is specified between the tables
EMPLOYEE and DEPARTMENT. In the WHERE clause there are two selection
predicates that also apply to the DEPARTMENT table.
SELECT EMPNO, LASTNAME, DEPTNAME, PROJNO
FROM CORPDATA.EMPLOYEE XXX LEFT OUTER JOIN CORPDATA.DEPARTMENT YYY
ON XXX.WORKDEPT = YYY.DEPTNO
LEFT OUTER JOIN CORPDATA.PROJECT ZZZ
ON XXX.EMPNO = ZZZ.RESPEMP
WHERE XXX.EMPNO = YYY.MGRNO AND
YYY.DEPTNO IN ('A00', 'D01', 'D11', 'D21', 'E11')
Even though the join between the EMPLOYEE and the DEPARTMENT table was
changed to an inner join, the entire query must still remain a left outer join to
satisfy the join condition for the PROJECT table.
Note: Care must be taken when specifying multiple join types since they are
supported by appending selection to the query for any unmatched rows. This
means that the number of resulting rows that satisfy the join criteria can
become quite large before any selection is applied that will either select or
omit the unmatched rows based on that individual dial’s join type.
For more information on how to use the JOIN syntax see either “Joining Data from
More Than One Table” on page 75 or the DB2 UDB for AS/400 SQL Reference
book.
Note: “Costing and Selecting Access Paths for Join Secondary dials” on
page 432 provides suggestions on how to avoid the restrictions on index
statistics and how to create additional indexes over the potential join
columns if they do not exist.
Note: The optimizer can better determine from the select/omit access path that
the data is not uniformly distributed.
v The query optimizer makes the wrong assumption about the number of rows
which will be retrieved from the answer set.
For SQL programs, specifying the precompile option ALWCPYDTA(*YES) makes
it more likely that the queries in that program will use an existing index. Likewise,
specifying ALWCPYDTA(*OPTIMIZE) makes it more likely that the queries in that
program will create a temporary index. The SQL clause OPTIMIZE FOR n
ROWS can also be used to influence the query optimizer.
| For the OPNQRYF command, the wrong performance option for the OPTIMIZE
| keyword may have been specified. Specify *FIRSTIO to make the use of an
| existing index more likely. Specify *ALLIO to make the creation of a temporary
| index more likely.
Table 39. Checklist for Creating an Application that Uses Join Queries (continued)
What to Do How It Helps
Specify ALWCPYDTA(*OPTIMIZE) If the query is creating a temporary keyed access path, and you feel that the
or ALWCPYDTA(*YES) processing time would be better if the optimizer only used the existing access
path, specify ALWCPYDTA(*YES).
If the query is not creating a temporary keyed access path, and you feel that the
processing time would be better if a temporary keyed access path was created,
specify ALWCPYDTA(*OPTIMIZE).
| Alternatively, specify the OPTIMIZE FOR n ROWS clause to inform the optimizer of
| the application’s intentions. Set n to a large number to indicate that every resulting
| row will be read, or set n to a small number to indicate that only the first few rows
| will be read before the query ends.
| For OPNQRYF, specify If the query is creating a temporary keyed access path and you feel that the
| OPTIMIZE(*FIRSTIO) or processing time would be better if it would only use the existing access path, then
| OPTIMIZE(*ALLIO) specify OPTIMIZE(*FIRSTIO). If the query is not creating a temporary keyed
| access path and you feel that the processing time would be better if a temporary
| keyed access path was created then specify OPTIMIZE(*ALLIO).
Table 39. Checklist for Creating an Application that Uses Join Queries (continued)
What to Do How It Helps
| Specify join predicates to prevent This improves performance by reducing the join fan-out. Every secondary file
| all of the records from one file should have at least one join predicate that references one of its fields as a ’join-to’
| from being joined to every record field.
| in the other file.
| Grouping Optimization
This section describes how DB2 UDB for AS/400 implements grouping techniques
and how optimization choices are made by the query optimizer.
The time required to receive the first group result for this implementation will most
likely be longer than other grouping implementations because the hash table must
be built and populated first. Once the hash table is completely populated, the
database manager uses the table to start returning the grouping results. Before
returning any results, the database manager must apply any specified grouping
selection criteria or ordering to the summary entries in the hash table.
The grouping hash method is most effective when the consolidation ratio is high.
The consolidation ratio is the ratio of the selected table rows to the computed
grouping results. If every database table row has its own unique grouping value,
then the hash table will become too large. This in turn will slow down the hashing
access method.
The optimizer estimates the consolidation ratio by first determining the number of
unique values in the specified grouping columns (that is, the expected number of
groups in the database table). The optimizer then examines the total number of
rows in the table and the specified selection criteria and uses the result of this
examination to estimate the consolidation ratio.
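The consolidation ratio is a simple quotient. A sketch, with an illustrative function name:

```python
def consolidation_ratio(selected_rows, expected_groups):
    """Ratio of selected table rows to computed grouping
    results. A high ratio means the hash table consolidates
    many rows per group and the hashing method pays off; a
    ratio near 1 means nearly every row is its own group and
    the hash table grows too large."""
    return selected_rows / expected_groups

# 100,000 selected rows collapsing into 50 groups gives a
# ratio of 2000 — a good candidate for the hashing method.
```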
Indexes over the grouping columns can help make the optimizer’s ratio estimate
more accurate. Indexes improve the accuracy because they contain statistics that
include the average number of duplicate values for the key columns.
The optimizer also uses the expected number of groups estimate to compute the
number of partitions in the hash table. As mentioned earlier, the hashing access
method is more effective when the hash table is well-balanced. The number of hash
table partitions directly affects how entries are distributed across the hash table and
the uniformity of this distribution.
The hash function performs better when the grouping values consist of columns that
have non-numeric data types, with the exception of the integer (binary) data type. In
addition, specifying grouping value columns that are not associated with the
variable length and null column attributes allows the hash function to perform more
effectively.
Since the index, by definition, already has all of the key values grouped together,
the first group result can be returned in less time than the hashing method. This is
because of the temporary result that is required for the hashing method. This
implementation can be beneficial if an application does not need to retrieve all of
the group results or if an index already exists that matches the grouping columns.
When the grouping is implemented with an index and a permanent index does not
already exist that satisfies grouping columns, a temporary index is created. The
grouping columns specified within the query are used as the key fields for this
index.
The following example illustrates a query where the optimizer could eliminate a
grouping column.
| DECLARE DEPTEMP CURSOR FOR
| SELECT EMPNO, LASTNAME, WORKDEPT
| FROM CORPDATA.EMPLOYEE
| WHERE EMPNO = '000190'
| GROUP BY EMPNO, LASTNAME, WORKDEPT
| OPNQRYF example:
OPNQRYF FILE(EMPLOYEE) FORMAT(FORMAT1)
QRYSLT('EMPNO *EQ ''000190''')
GRPFLD(EMPNO LASTNAME WORKDEPT)
In this example, the optimizer can remove EMPNO from the list of grouping fields
because of the EMPNO = '000190' selection predicate. An index that only has
LASTNAME and WORKDEPT specified as key fields can be considered to
implement the query, and if a temporary index or hash is required, then EMPNO
will not be used.
Note: Even though EMPNO can be removed from the list of grouping columns, the
optimizer might still choose to use that index if a permanent index exists with
all three grouping columns.
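The elimination just described reduces to dropping any grouping column that an isolatable equal predicate fixes to a single value; a sketch with illustrative names:

```python
def reduce_grouping(group_cols, equal_selected):
    """Drop grouping columns that an isolatable equal selection
    predicate fixes to one value; such a column cannot split
    any group further, so it adds nothing to the grouping."""
    return [col for col in group_cols if col not in equal_selected]

# With EMPNO = '000190' in the WHERE clause, EMPNO contributes
# nothing to GROUP BY EMPNO, LASTNAME, WORKDEPT.
```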
The following example illustrates a query where the optimizer could add an
additional grouping column.
| CREATE INDEX X1 ON EMPLOYEE
| (LASTNAME, EMPNO, WORKDEPT)
|
| DECLARE DEPTEMP CURSOR FOR
| SELECT LASTNAME, WORKDEPT
| FROM CORPDATA.EMPLOYEE
| WHERE EMPNO = '000190'
| GROUP BY LASTNAME, WORKDEPT
|
|
| OPNQRYF example:
OPNQRYF FILE ((EMPLOYEE)) FORMAT(FORMAT1)
QRYSLT('EMPNO *EQ ''000190''')
GRPFLD(LASTNAME WORKDEPT)
For this query request, the optimizer can add EMPNO as an additional grouping
| column when considering X1 for the query.
| The query optimizer will choose to use the index IX1. The SLIC runtime code
| will scan the index until it finds the first non-null value for SALARY. Assuming
| that SALARY is not null, the runtime code will position to the first index key
| and return that key value as the MAX of salary. No more index keys will be
| processed.
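The index-only MIN retrieval described above can be modeled simply: with keys in ascending order and nulls collating first (as the text describes), the first non-null key is the minimum and no further keys need to be read. This is a sketch, with `None` standing in for a null key value:

```python
def min_from_index(sorted_keys):
    """MIN() over an ascending index: position past any null
    keys to the first non-null key and return it; no further
    index keys are processed."""
    for key in sorted_keys:
        if key is not None:
            return key
    return None  # all keys were null

# Keys sorted ascending with nulls first: the answer is simply
# the first non-null entry, found without scanning the rest.
```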
| Example 2, using the OPNQRYF command:
| OPNQRYF FILE(EMPLOYEE) FORMAT(FORMAT2)
| QRYSLT('JOB *EQ ''CLERK''')
| GRPFLD((DEPT))
| MAPFLD((MINSAL '%MIN(SALARY)'))
| The query optimizer will choose to use index IX2. The SLIC runtime code will
| position to the first group for DEPT where JOB equals ’CLERK’ and will return
| the SALARY. The code will then skip to the next DEPT group where JOB
| equals ’CLERK’.
| v For join queries:
| – All grouping columns must be from a single file.
| – For each dial there can be at most one MIN or MAX column function operand
| that references the dial and no other column functions can exist in the query.
| – If the MIN or MAX function operand is from the same dial as the grouping
| columns, then it uses the same rules as single file queries.
| – If the MIN or MAX function operand is from a different dial then the join field
| for that dial must join to one of the grouping fields and the index for that dial
| must contain the join fields followed by the MIN or MAX operand.
| Example 1, using SQL:
| CREATE INDEX IX1 ON DEPARTMENT(DEPTNAME)
|
| CREATE INDEX IX2 ON EMPLOYEE(WORKDEPT, SALARY)
|
| DECLARE C1 CURSOR FOR
| SELECT DEPTNAME, MIN(SALARY)
| FROM DEPARTMENT, EMPLOYEE
| WHERE DEPARTMENT.DEPTNO=EMPLOYEE.WORKDEPT
| GROUP BY DEPARTMENT.DEPTNO;
|
If DB2 UDB for AS/400 cannot use an index to access the data in a table, it will
have to read all the data in the table. Very large tables present a special
performance problem: the high cost of retrieving all the data in the table. The
following suggestions help you to design code that allows DB2 UDB for AS/400 to
take advantage of available indexes.
1. Avoid numeric conversions.
| When a column value and a host variable (or literal value) are being compared,
| try to specify the same data types and attributes. DB2 UDB for AS/400 does not
| use an index for the named column if the host variable or literal value has a
| greater precision than the precision of the column. If the two items being
| compared have different data types, DB2 UDB for AS/400 will have to convert one
| or the other, and the conversion can prevent the use of an index. For example, if
| EDUCLVL is an integer column, when using SQL specify:
... WHERE EDUCLVL < 11 AND
EDUCLVL > 1
| instead of
... WHERE EDUCLVL < 1.1E1 AND
EDUCLVL > 1.3
| Or, when using the OPNQRYF command, specify:
| ... QRYSLT('EDUCLVL *LT 11 *AND EDUCLVL *GT 1')
| instead of
| ... QRYSLT('EDUCLVL *LT 1.1E1 *AND EDUCLVL *GT 1.3')
| If an index was created over the EDUCLVL column, then the optimizer does not
use the index in the second example because the precision of the constant is
greater than the precision of the column. In the first example, the optimizer
considers using the index, because the precisions are equal.
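One facet of the precision rule can be sketched as follows. The digit counting here is a deliberate simplification of DB2's actual precision rules, and the function name is an illustrative assumption:

```python
from decimal import Decimal

def index_usable(column_scale, literal):
    """Sketch of one facet of the precision rule: a literal
    carrying more fractional digits than the column can hold
    forces a conversion and blocks index use. (DB2's full rule
    also compares total precision.)"""
    frac_digits = max(0, -Decimal(str(literal)).as_tuple().exponent)
    return frac_digits <= column_scale

# EDUCLVL is an integer column (scale 0): the literal 11 keeps
# the index usable, while 1.3 carries a fractional digit and
# does not.
```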
| 2. Avoid arithmetic expressions
| You should never have an arithmetic expression as an operand to be compared
| to a column in a row selection predicate. The optimizer does not use an index
| on a field that is being compared to an arithmetic expression. When using SQL,
| specify:
| ... WHERE SALARY > 16500
|
| instead of
| ... WHERE SALARY > 15000*1.1
|
| 3. Avoid character string padding.
Try to use the same data length when comparing a fixed-length character string
column value to a host variable or literal value. DB2 UDB for AS/400 does not
use an index if the literal value or host variable is longer than the column length.
For example, EMPNO is CHAR(6) and DEPTNO is CHAR(3). Specify:
... WHERE EMPNO > '000300' AND
DEPTNO < 'E20'
| instead of
| ... WHERE EMPNO > '000300 ' AND
| DEPTNO < 'E20 '
Or, when using the OPNQRYF command, specify:
... QRYSLT('EMPNO *GT ''000300'' *AND DEPTNO *LT ''E20''')
instead of
... QRYSLT('EMPNO *GT ''000300 '' *AND DEPTNO *LT ''E20 ''')
4. Avoid the use of LIKE patterns beginning with % or _.
The percent sign (%) and the underline (_), when used in the pattern of a LIKE
(OPNQRYF %WLDCRD) predicate, specify a character string that is similar to
the column value of rows you want to select. They can take advantage of
indexes when used to denote characters in the middle or at the end of a
character string, as in the following. When using SQL:
... WHERE LASTNAME LIKE 'J%SON%'
However, when used at the beginning of a character string, they can prevent
DB2 UDB for AS/400 from using any indexes that might be defined on the
LASTNAME column to limit the number of rows scanned. When using SQL:
... WHERE LASTNAME LIKE '%SON'
You should therefore avoid using these symbols at the beginning of character
strings, especially if you are accessing a particularly large table.
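The effect of a leading wildcard can be seen by computing the literal prefix an index could use to bound the key range; this is a sketch with illustrative names, not DB2's actual mechanism:

```python
def like_key_prefix(pattern):
    """Return the literal prefix of a LIKE pattern that an index
    could use to limit the key range scanned. A pattern starting
    with % or _ yields an empty prefix, so every index key must
    be examined."""
    prefix = []
    for ch in pattern:
        if ch in "%_":
            break  # wildcard ends the usable literal prefix
        prefix.append(ch)
    return "".join(prefix)

print(like_key_prefix("J%SON%"))  # "J": keys can be limited
print(like_key_prefix("%SON"))    # "": the whole index is scanned
```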
5. Be aware that DB2 UDB for AS/400 does not use an index in the following
instances:
v For a column that is expected to be updated; for example, your program
might include
EXEC SQL
DECLARE DEPTEMP CURSOR FOR
SELECT EMPNO, LASTNAME, WORKDEPT
FROM CORPDATA.EMPLOYEE
WHERE (WORKDEPT = 'D11' OR
WORKDEPT = 'D21') AND
EMPNO = '000190'
FOR UPDATE OF EMPNO, WORKDEPT
END-EXEC.
OPNQRYF example:
OPNQRYF FILE((CORPDATA/EMPLOYEE)) OPTION(*ALL)
QRYSLT('(WORKDEPT *EQ ''D11'' *OR WORKDEPT *EQ ''D21'')
*AND EMPNO *EQ ''000190''')
Even if you do not intend to update the employee’s department, DB2 UDB for
AS/400 cannot use an index with a key of WORKDEPT.
DB2 UDB for AS/400 can use an index if all of the updateable columns used
within the index are also used within the query as an isolatable selection
predicate with an equal operator. In the previous example DB2 UDB for
AS/400 would use an index with a key of EMPNO.
DB2 UDB for AS/400 can operate more efficiently if the FOR UPDATE OF
column list only names the column you intend to update: WORKDEPT.
Therefore, do not specify a column in the FOR UPDATE OF column list
unless you intend to update the column.
v For columns being compared with other columns from the same row of the
table; for example, when using SQL:
... WHERE WORKDEPT = ADMRDEPT
OPNQRYF example:
OPNQRYF FILE (EMPLOYEE) FORMAT(FORMAT1)
QRYSLT('WORKDEPT *EQ ADMRDEPT')
Even though there is an index for WORKDEPT and another index for
ADMRDEPT, DB2 UDB for AS/400 will not use either index. The index has
no added benefit because every row of the table needs to be looked at.
The sort sequence table associated with the query (specified by the SRTSEQ and
LANGID parameters) must match the sort sequence table with which the existing
index was built. DB2 UDB for AS/400 compares the sort sequence tables. If they do
not match, the existing index cannot be used.
There is an exception to this, however. If the sort sequence table associated with
the query is a unique-weight sequence table (including *HEX), DB2 UDB for AS/400
acts as though no sort sequence table is specified for selection, join, or grouping
columns that use the following operators and predicates:
v equal (=) operator
v not equal (¬= or <>) operator
| v LIKE predicate (OPNQRYF %WLDCRD and *CT)
| v IN predicate (OPNQRYF %VALUES)
When these conditions are true, DB2 UDB for AS/400 is free to use any existing
index where the key fields match the columns and either:
v The index does not contain a sort sequence table or
v The index contains a unique-weight sort sequence table
Note: The table does not have to match the unique-weight sort sequence table
associated with the query.
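The unique-weight exception can be sketched as a predicate over the sort sequence tables involved; encoding the tables as strings and the function name are assumptions for illustration:

```python
# *HEX behaves as a unique-weight sequence per the text.
UNIQUE_WEIGHT = {"*HEX", "*LANGIDUNQ"}

def index_usable_for(query_srtseq, index_srtseq, operators):
    """Sketch of the sort-sequence rule for selection, join,
    and grouping columns: with a unique-weight query table and
    only =, <>, LIKE, or IN operators, any index with no sort
    sequence table or a unique-weight table qualifies;
    otherwise the tables must match."""
    exempt = {"=", "<>", "LIKE", "IN"}
    if query_srtseq in UNIQUE_WEIGHT and set(operators) <= exempt:
        return index_srtseq is None or index_srtseq in UNIQUE_WEIGHT
    return index_srtseq == query_srtseq

# Equal selection under *LANGIDUNQ can use a *HEX index, but a
# greater-than selection cannot.
```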
Note: Bitmap processing has a special consideration when multiple indexes are
used for a table. If two or more indexes have a common key field between
them that is also referenced in the query selection, then those indexes must
either use the same sort sequence table or use no sort sequence table.
Ordering
Unless the optimizer chooses to do a sort to satisfy the ordering request, the sort
sequence table associated with the index must match the sort sequence table
associated with the query.
When a sort is used, the translation is done during the sort. Since the sort is
handling the sort sequence requirement, this allows DB2 UDB for AS/400 to use
any existing index that meets the selection criteria.
Example Indexes
For the purposes of the examples, assume three indexes are created.
Assume an index HEXIX was created with *HEX as the sort sequence.
CREATE INDEX HEXIX ON STAFF (JOB)
Assume an index UNQIX was created with a unique-weight sort sequence
(SRTSEQ(*LANGIDUNQ) LANGID(ENU)).
CREATE INDEX UNQIX ON STAFF (JOB)
Assume an index SHRIX was created with a shared-weight sort sequence
(SRTSEQ(*LANGIDSHR) LANGID(ENU)).
CREATE INDEX SHRIX ON STAFF (JOB)
Example 1
Equals selection with no sort sequence table (SRTSEQ(*HEX)).
SELECT * FROM STAFF
WHERE JOB = 'MGR'
DB2 UDB for AS/400 could use either index HEXIX or index UNQIX.
Example 2
Equals selection with a unique-weight sort sequence table
(SRTSEQ(*LANGIDUNQ) LANGID(ENU)).
SELECT * FROM STAFF
WHERE JOB = 'MGR'
| DB2 UDB for AS/400 could use either index HEXIX or index UNQIX.
Example 3
Equals selection with a shared-weight sort sequence table
(SRTSEQ(*LANGIDSHR) LANGID(ENU)).
SELECT * FROM STAFF
WHERE JOB = 'MGR'
DB2 UDB for AS/400 could only use index SHRIX.
Example 4
Greater than selection with a unique-weight sort sequence table
(SRTSEQ(*LANGIDUNQ) LANGID(ENU)).
SELECT * FROM STAFF
WHERE JOB > 'MGR'
DB2 UDB for AS/400 could only use index UNQIX.
Example 5
Join selection with a unique-weight sort sequence table (SRTSEQ(*LANGIDUNQ)
LANGID(ENU)).
SELECT * FROM STAFF S1, STAFF S2
WHERE S1.JOB = S2.JOB
SELECT *
FROM STAFF S1 INNER JOIN STAFF S2
ON S1.JOB = S2.JOB
| DB2 UDB for AS/400 could use either index HEXIX or index UNQIX for either
query.
Example 6
Join selection with a shared-weight sort sequence table (SRTSEQ(*LANGIDSHR)
LANGID(ENU)).
SELECT * FROM STAFF S1, STAFF S2
WHERE S1.JOB = S2.JOB
SELECT *
FROM STAFF S1 INNER JOIN STAFF S2
ON S1.JOB = S2.JOB
| DB2 UDB for AS/400 could only use index SHRIX for either query.
Example 7
Ordering with no sort sequence table (SRTSEQ(*HEX)).
SELECT * FROM STAFF
WHERE JOB = 'MGR'
ORDER BY JOB
DB2 UDB for AS/400 could only use index HEXIX.
Example 8
Ordering with a unique-weight sort sequence table (SRTSEQ(*LANGIDUNQ)
LANGID(ENU)).
SELECT * FROM STAFF
WHERE JOB = 'MGR'
ORDER BY JOB
DB2 UDB for AS/400 could only use index UNQIX.
Example 9
Ordering with a shared-weight sort sequence table (SRTSEQ(*LANGIDSHR)
LANGID(ENU)).
SELECT * FROM STAFF
WHERE JOB = 'MGR'
ORDER BY JOB
DB2 UDB for AS/400 could only use index SHRIX.
Example 10
Ordering with ALWCPYDTA(*OPTIMIZE) and a unique-weight sort sequence table
(SRTSEQ(*LANGIDUNQ) LANGID(ENU)).
SELECT * FROM STAFF
WHERE JOB = 'MGR'
ORDER BY JOB
| DB2 UDB for AS/400 could use either index HEXIX or index UNQIX for selection.
Ordering would be done during the sort using the *LANGIDUNQ sort sequence
table.
Example 11
Grouping with no sort sequence table (SRTSEQ(*HEX)).
SELECT JOB FROM STAFF
GROUP BY JOB
| DB2 UDB for AS/400 could use either index HEXIX or index UNQIX.
Example 12
Grouping with a unique-weight sort sequence table (SRTSEQ(*LANGIDUNQ)
LANGID(ENU)).
SELECT JOB FROM STAFF
GROUP BY JOB
| DB2 UDB for AS/400 could use either index HEXIX or index UNQIX.
Example 13
Grouping with a shared-weight sort sequence table (SRTSEQ(*LANGIDSHR)
LANGID(ENU)).
SELECT JOB FROM STAFF
GROUP BY JOB
| DB2 UDB for AS/400 could only use index SHRIX.
The following examples assume three more indexes are created over columns JOB
and SALARY. The CREATE INDEX statements precede the examples.
Assume an index HEXIX2 was created with *HEX as the sort sequence.
CREATE INDEX HEXIX2 ON STAFF (JOB, SALARY)
Assume an index UNQIX2 was created and the sort sequence is a unique-weight
sort sequence.
CREATE INDEX UNQIX2 ON STAFF (JOB, SALARY)
Assume an index SHRIX2 was created and the sort sequence is a shared-weight
sort sequence.
CREATE INDEX SHRIX2 ON STAFF (JOB, SALARY)
Example 14
Ordering and grouping on the same columns with a unique-weight sort sequence
table (SRTSEQ(*LANGIDUNQ) LANGID(ENU)).
SELECT JOB, SALARY FROM STAFF
GROUP BY JOB, SALARY
ORDER BY JOB, SALARY
DB2 UDB for AS/400 could use UNQIX2 to satisfy both the grouping and ordering
requirements. If index UNQIX2 did not exist, DB2 UDB for AS/400 would create an
index using a sort sequence table of *LANGIDUNQ.
Example 15
Ordering and grouping on the same columns with ALWCPYDTA(*OPTIMIZE) and a
unique-weight sort sequence table (SRTSEQ(*LANGIDUNQ) LANGID(ENU)).
SELECT JOB, SALARY FROM STAFF
GROUP BY JOB, SALARY
ORDER BY JOB, SALARY
DB2 UDB for AS/400 could use UNQIX2 to satisfy both the grouping and ordering
requirements. If index UNQIX2 did not exist, DB2 UDB for AS/400 would either:
v Create an index using a sort sequence table of *LANGIDUNQ or
v Use index HEXIX2 to satisfy the grouping and to perform a sort to satisfy the
ordering
Example 16
Ordering and grouping on the same columns with a shared-weight sort sequence
table (SRTSEQ(*LANGIDSHR) LANGID(ENU)).
SELECT JOB, SALARY FROM STAFF
GROUP BY JOB, SALARY
ORDER BY JOB, SALARY
DB2 UDB for AS/400 could use SHRIX2 to satisfy both the grouping and ordering
requirements. If index SHRIX2 did not exist, DB2 UDB for AS/400 would create an
index using a sort sequence table of *LANGIDSHR.
Example 17
Ordering and grouping on the same columns with ALWCPYDTA(*OPTIMIZE) and a
shared-weight sort sequence table (SRTSEQ(*LANGIDSHR) LANGID(ENU)).
SELECT JOB, SALARY FROM STAFF
GROUP BY JOB, SALARY
ORDER BY JOB, SALARY
DB2 UDB for AS/400 could use SHRIX2 to satisfy both the grouping and ordering
requirements. If index SHRIX2 did not exist, DB2 UDB for AS/400 would create an
index using a sort sequence table of *LANGIDSHR.
Example 18
Ordering and grouping on different columns with a unique-weight sort sequence
table (SRTSEQ(*LANGIDUNQ) LANGID(ENU)).
SELECT JOB, SALARY FROM STAFF
GROUP BY JOB, SALARY
ORDER BY SALARY, JOB
DB2 UDB for AS/400 could use index HEXIX2 or index UNQIX2 to satisfy the
grouping requirements. A temporary result would be created containing the grouping
results. A temporary index would then be built over the temporary result using a
*LANGIDUNQ sort sequence table to satisfy the ordering requirements.
Example 19
Ordering and grouping on different columns with ALWCPYDTA(*OPTIMIZE) and a
unique-weight sort sequence table (SRTSEQ(*LANGIDUNQ) LANGID(ENU)).
SELECT JOB, SALARY FROM STAFF
GROUP BY JOB, SALARY
ORDER BY SALARY, JOB
DB2 UDB for AS/400 could use index HEXIX2 or index UNQIX2 to satisfy the
grouping requirements. A sort would be performed to satisfy the ordering
requirements.
Example 20
Ordering and grouping on different columns with ALWCPYDTA(*OPTIMIZE) and a
shared-weight sort sequence table (SRTSEQ(*LANGIDSHR) LANGID(ENU)).
SELECT JOB, SALARY FROM STAFF
GROUP BY JOB, SALARY
ORDER BY SALARY, JOB
DB2 UDB for AS/400 could use index SHRIX2 to satisfy the grouping requirements.
A sort would be performed to satisfy the ordering requirements.
When you define a table with variable-length data, you must decide the width of the
ALLOCATE area. If the primary goal is:
v Space saving: use ALLOCATE(0).
v Performance: the ALLOCATE area should be wide enough to incorporate at
least 90% to 95% of the values for the column.
This example shows how space can be saved by using variable-length columns.
The fixed-length column table uses the most space. The table with the carefully
calculated allocate sizes uses less disk space. The table that was defined with no
allocate size (with all of the data stored in the overflow area) uses the least disk
space.
Variety of       Last Name   First Name   Middle Name   Total Physical   Number of Records
Support          Max/Alloc   Max/Alloc    Max/Alloc     File Size        in Overflow Space
Fixed Length     22          22           22            567 K            0
Variable Length  40/10       40/10        40/7          408 K            73
Variable-Length  40/0        40/0         40/0          373 K            8600
Default
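The 90% to 95% guideline can be checked against a sample of actual column
values. The following Python sketch is illustrative only: the function names and the
sample last-name lengths are invented, and this is not an AS/400 utility.

```python
def suggest_allocate(lengths, target=0.95):
    """Pick the smallest ALLOCATE width that keeps at least `target`
    (for example 90% to 95%) of the sampled value lengths in the
    fixed portion of the row."""
    ordered = sorted(lengths)
    # Index of the value sitting at the target percentile.
    idx = max(0, int(target * len(ordered)) - 1)
    return ordered[idx]

def overflow_count(lengths, allocate):
    """How many values would spill into the overflow area."""
    return sum(1 for n in lengths if n > allocate)

# Illustrative last-name lengths sampled from an imaginary PHONEDIR table.
sample = [5, 6, 7, 7, 8, 9, 10, 10, 11, 24]
alloc = suggest_allocate(sample, target=0.90)
spilled = overflow_count(sample, alloc)
```

With ALLOCATE(0) every value spills into the overflow area (saving fixed-row
space), while an allocate width at the 90th percentile leaves only the rare long
values in overflow.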
If you are using host variables to insert or update variable-length columns, the host
variables should be variable length. Because blanks are not truncated from
fixed-length host variables, using fixed-length host variables would cause more rows
to spill into the overflow space. This would increase the size of the table.
In this example, fixed-length host variables are used to insert a row into a table:
01 LAST-NAME PIC X(40).
...
MOVE "SMITH" TO LAST-NAME.
EXEC SQL
INSERT INTO PHONEDIR
VALUES(:LAST-NAME, :FIRST-NAME, :MIDDLE-NAME, :PHONE)
END-EXEC.
The host variable LAST-NAME is not variable length. The string “SMITH”, followed
by 35 blanks, is inserted into the VARCHAR column LAST. The value is longer than
the allocate size of 10, so thirty of the thirty-five trailing blanks are placed in the
overflow area.
In this example, variable-length host variables are used to insert a row into a table:
01 VLAST-NAME.
49 LAST-NAME-LEN PIC S9(4) BINARY.
49 LAST-NAME-DATA PIC X(40).
...
MOVE "SMITH" TO LAST-NAME-DATA.
MOVE 5 TO LAST-NAME-LEN.
EXEC SQL
INSERT INTO PHONEDIR
VALUES(:VLAST-NAME, :VFIRST-NAME, :VMIDDLE-NAME, :PHONE)
END-EXEC.
The host variable VLAST-NAME is variable length. The actual length of the data is
set to 5. The value is shorter than the allocated length. It can be placed in the fixed
portion of the column.
For more information about using variable-length host variables, see Chapter 12.
Coding SQL Statements in C and C++ Applications, through Chapter 17. Coding
SQL Statements in REXX Applications.
Running the RGZPFM command against tables that contain variable-length columns
can improve performance. The fragments in the overflow area that are not in use
are compacted by the RGZPFM command. This reduces the read time for rows that
overflow, increases the locality of reference, and produces optimal order for serial
batch processing.
To minimize the number of opens, DB2 UDB for AS/400 leaves the open data path
(ODP) open and reuses the ODP if the statement is run again, unless:
v GROUP BY contains columns from more than one table.
v The ODP used a host variable to build a subset temporary index. The OS/400
database support may choose to build a temporary index with entries for only the
rows that match the row selection specified in the SQL statement. If a host
variable was used in the row selection, the temporary index will not have the
entries required for a different value contained in the host variable.
v Ordering was specified on a host variable value.
v A host variable is used to specify the pattern of a LIKE predicate, and the host
variable value either contains underscores (_) or involves more than one search
pattern. For ’%ABC%DEF’, two patterns are involved, ABC and DEF.
v An Override Database File (OVRDBF) or Delete Override (DLTOVR) CL
command has been issued since the ODP was opened, which would affect the
SQL statement execution.
Note: Only overrides that affect the name of the table being referred to will
cause the ODP to be closed within a given program invocation.
v A change to the library list since the last open has occurred, which would change
the file selected by an unqualified referral in system naming mode.
v The file being queried is a join logical file and its join type (JDFTVAL) does not
match the join type specified in the query.
v The format specified for a logical file references more than one physical file.
v The file is a complex SQL view that requires a temporary file to contain the
results of the SQL view.
DB2 UDB for AS/400 only reuses ODPs opened by the same statement. An
identical statement coded later in the program does not reuse an ODP from any
other statement. If the identical statement must be run in the program many times,
code it once in a subroutine and call the subroutine to run the statement.
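The rule that only the same statement reuses its ODP (which is why the manual
suggests putting the statement in one shared subroutine) can be modeled with a
cache keyed by the statement's identity. Everything below, including the
OpenDataPath class, is a hypothetical model for illustration, not an AS/400 API.

```python
class OpenDataPath:
    """Toy stand-in for an ODP: expensive to build, cheap to reuse."""
    def __init__(self, statement):
        self.statement = statement
        self.full_opens = 1   # the first execution pays the full open

_odp_cache = {}

def run_statement(statement):
    """Run `statement`, reusing its ODP when the identical statement
    ran before in this 'job' (like one subroutine shared by all callers)."""
    odp = _odp_cache.get(statement)
    if odp is None:
        odp = OpenDataPath(statement)   # full open on first use only
        _odp_cache[statement] = odp
    return odp

a = run_statement("SELECT * FROM STAFF WHERE JOB = ?")
b = run_statement("SELECT * FROM STAFF WHERE JOB = ?")  # reuses a's ODP
c = run_statement("SELECT COUNT(*) FROM STAFF")         # separate ODP
```

If the same statement text were instead duplicated at several call sites as distinct
statements, each copy would pay its own full open, which is the behavior the
subroutine technique avoids.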
You can control whether DB2 UDB for AS/400 keeps the ODPs open by:
v Designing the application so a program that issues an SQL statement is always
on the call stack
v Using the CLOSQLCSR(*ENDJOB) or CLOSQLCSR(*ENDACTGRP) parameter
DB2 UDB for AS/400 does an open operation for the first execution of each
UPDATE WHERE CURRENT OF when any expression in the SET clause contains
an operator or function. The open can be avoided by coding the function or
operation in the host language code.
For example, the following UPDATE causes DB2 UDB for AS/400 to do an open
operation:
EXEC SQL
FETCH EMPT INTO :SALARY
END-EXEC.
EXEC SQL
UPDATE CORPDATA.EMPLOYEE
SET SALARY = :SALARY + 1000
WHERE CURRENT OF EMPT
END-EXEC.
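The fix the text describes (move the arithmetic into the host language so that the
SET clause carries only a plain host variable) can be sketched as follows. This is
an illustrative Python/SQLite analog, not AS/400 code; the employee table and
values are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (empno TEXT PRIMARY KEY, salary INTEGER)")
conn.execute("INSERT INTO employee VALUES ('000010', 52750)")

# Fetch the current value, do the arithmetic in host code, then run an
# UPDATE whose SET clause contains only a plain parameter.
row = conn.execute(
    "SELECT salary FROM employee WHERE empno = '000010'").fetchone()
salary = row[0] + 1000          # arithmetic done in the host language
conn.execute("UPDATE employee SET salary = ? WHERE empno = '000010'",
             (salary,))
new_salary = conn.execute(
    "SELECT salary FROM employee WHERE empno = '000010'").fetchone()[0]
```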
The CL commands Trace Job (TRCJOB) or Display Journal (DSPJRN) can be used
to determine the number of opens being performed by an SQL statement.
The SQL run-time automatically blocks records with the database manager in the
following cases:
v INSERT
If an INSERT statement contains a select-statement, inserted records are
blocked and not actually inserted into the target table until the block is full. The
SQL run-time automatically does blocking for blocked inserts.
Note: If an INSERT with a VALUES clause is specified, the SQL run-time might
not actually close the internal cursor used to perform the inserts until the
program ends. If the same INSERT statement is run again, a full open is
not necessary and the application runs much faster.
v OPEN
Blocking is done under the OPEN statement when the records are retrieved if all
of the following conditions are true:
– The cursor is only used for FETCH statements.
– No EXECUTE or EXECUTE IMMEDIATE statements are in the program, or
ALWBLK(*ALLREAD) was specified, or the cursor is declared with the FOR
FETCH ONLY clause.
An SQL application that uses a FETCH statement, without the FOR n ROWS
clause, can be improved by using the multiple-row FETCH statement to retrieve
multiple rows. After the host structure array or row storage area has been filled by
the FETCH, the application can loop through the data in the array or storage area
to process each of the individual records. The statement runs faster because the
SQL run-time was called only once and all the data was simultaneously returned to
the application program.
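Most database APIs expose the same block-at-a-time pattern. This sketch uses
SQLite and `fetchmany` as stand-ins (an assumption, not SQL/400) for the
multiple-row FETCH, and shows the loop-through-the-array structure described
above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (id INTEGER, job TEXT)")
conn.executemany("INSERT INTO staff VALUES (?, ?)",
                 [(i, "MGR" if i % 2 else "CLERK") for i in range(100)])

cur = conn.execute("SELECT id, job FROM staff ORDER BY id")
blocks = 0
rows_seen = 0
while True:
    block = cur.fetchmany(20)      # analogous to a multiple-row FETCH
    if not block:
        break
    blocks += 1
    for _id, _job in block:        # loop through the returned array
        rows_seen += 1
```

One hundred rows arrive in five blocks instead of one hundred single-row fetch
calls, which is the call-count saving the text describes.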
You can change the application program to allow the database manager to block
the records that the SQL run-time retrieves from the tables. For more information,
see “Improving Performance by Using Database Manager Blocking Considerations”
on page 461.
In the following table, the program attempted to FETCH 100 rows into the
application. Note the differences in the table for the number of calls to SQL run-time
and the database manager when blocking can be performed.
Table 40. Number of Calls Using a FETCH Statement
                        Database Manager          Database Manager
                        Not Using Blocking        Using Blocking
Single-Row FETCH        100 SQL calls             100 SQL calls
Statement               100 database calls        1 database call
Multiple-Row FETCH      1 SQL run-time call       1 SQL run-time call
Statement               100 database calls        1 database call
In the following table, the program attempted to INSERT 100 rows into a table. Note
the differences in the number of calls to SQL run-time and to the database manager
when blocking can be performed.
Table 41. Number of Calls Using an INSERT Statement
                        Database Manager          Database Manager
                        Not Using Blocking        Using Blocking
Single-Row INSERT       100 SQL run-time calls    100 SQL run-time calls
Statement               100 database calls        1 database call
Multiple-Row INSERT     1 SQL run-time call       1 SQL run-time call
Statement               100 database calls        1 database call
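The call counts in Tables 40 and 41 follow from a simple model: one run-time call
per statement execution, and one database call per transferred block. The function
below is illustrative, assuming the block is large enough to hold all requested rows
when blocking applies.

```python
import math

def database_calls(rows, rows_per_call):
    """Number of database manager calls needed to move `rows` rows when
    each call transfers up to `rows_per_call` of them."""
    return math.ceil(rows / rows_per_call)

# Single-row statements move one row per call; with blocking the
# manager moves the whole 100-row block in one call.
unblocked = database_calls(100, 1)     # 100 calls
blocked = database_calls(100, 100)     # 1 call
```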
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT EMPNO, LASTNAME, WORKDEPT
FROM CORPDATA.EMPLOYEE
ORDER BY EMPNO
END-EXEC.
EXEC SQL
OPEN C1
END-EXEC.
* Fetch and display the first 20 rows.
EXEC SQL
CLOSE C1
END-EXEC.
* Show the display and wait for the user to indicate that
* the next 20 rows should be displayed.
EXEC SQL
DECLARE C2 CURSOR FOR
SELECT EMPNO, LASTNAME, WORKDEPT
FROM CORPDATA.EMPLOYEE
WHERE EMPNO > :LAST-EMPNO
ORDER BY EMPNO
END-EXEC.
EXEC SQL
OPEN C2
END-EXEC.
* Fetch and display the next 20 rows.
EXEC SQL
CLOSE C2
END-EXEC.
In the above example, notice that an additional cursor had to be opened to continue
the list and to get current data. This could result in creating an additional ODP that
would increase the processing time on the AS/400 system. In place of the above
example, the programmer could design the application specifying
ALWCPYDTA(*NO) with the following SQL statements:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT EMPNO, LASTNAME, WORKDEPT
FROM CORPDATA.EMPLOYEE
ORDER BY EMPNO
END-EXEC.
EXEC SQL
OPEN C1
END-EXEC.
* Show the display and wait for the user to indicate that
* the next 20 rows should be displayed.
EXEC SQL
CLOSE C1
END-EXEC.
In the above example, the query could perform better if the FOR 20 ROWS clause
was used on the multiple-row FETCH statement. Then, the 20 rows would be
retrieved in one operation.
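In generic terms, the C2 cursor implements what is now commonly called keyset
pagination: restart the ordered list after the last key already shown. In this SQLite
sketch the table, its contents, and the use of LIMIT as a stand-in for fetching 20
rows are all assumptions for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (empno TEXT, lastname TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?)",
                 [(f"{i:06d}", f"NAME{i}") for i in range(1, 46)])

def next_page(last_empno, page_size=20):
    """Resume the ordered list after the last row already shown, like
    reopening C2 with WHERE EMPNO > :LAST-EMPNO."""
    return conn.execute(
        "SELECT empno FROM employee WHERE empno > ? "
        "ORDER BY empno LIMIT ?", (last_empno, page_size)).fetchall()

page1 = next_page("")            # first 20 employees
page2 = next_page(page1[-1][0])  # next 20, no rows re-read
```

Because the predicate skips everything already displayed, each reopen reads only
the rows it will show instead of repositioning through the whole result.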
Note: The hashing method cannot be used to implement the grouping on queries
that involve a nested loop join implementation and do not require a
temporary result to be created.
The optimize ratio = (OPTIMIZE FOR n ROWS value) / (estimated number of rows
in the answer set).
Cost using a temporarily created index:
In the previous examples, the estimated cost to sort or to create an index is not
adjusted by the optimize ratio. This enables the optimizer to balance the
optimization and preprocessing requirements. If the optimize number is larger than
the number of rows in the result table, no adjustments are made to the cost
estimates. If the OPTIMIZE clause is not specified for a query, a default value is
used based on the statement type, value of ALWCPYDTA specified, or output
device.
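The optimize ratio and the no-adjustment rule above can be stated together as a
small function. The function is illustrative only, not optimizer source code.

```python
def optimize_ratio(optimize_for_n_rows, estimated_rows):
    """Optimize ratio = OPTIMIZE FOR n ROWS value / estimated number of
    rows in the answer set.  When the optimize number is at least the
    number of rows in the result table, no adjustment is made, so the
    ratio is capped at 1.0."""
    if optimize_for_n_rows >= estimated_rows:
        return 1.0
    return optimize_for_n_rows / estimated_rows

# With OPTIMIZE FOR 10 ROWS over an estimated 100-row answer set, cost
# estimates sensitive to the ratio are scaled by 0.1.
ratio = optimize_ratio(10, 100)
```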
When used properly, the CLOSQLCSR parameter can reduce the number of SQL
OPEN, PREPARE, and LOCK statements needed. It can also simplify applications
by allowing you to retain cursor positions across program calls.
EXEC SQL
OPEN DEPTDATA
END-EXEC.
EXEC SQL
FETCH DEPTDATA INTO :EMPNUM, :LNAME
END-EXEC.
EXEC SQL
CLOSE DEPTDATA
END-EXEC.
If this program is called several times from another SQL program, it will be able to
use a reusable ODP. This means that, as long as SQL remains active between the
calls to this program, the OPEN statement will not require a database open
operation. However, the cursor is still positioned to the first result row after each
OPEN statement, and the FETCH statement will always return the first row.
EXEC SQL
FETCH DEPTDATA INTO :EMPNUM, :LNAME
END-EXEC.
The result of this strategy is that each call to the program retrieves the next record
in the cursor. On subsequent data requests, the OPEN statement is unnecessary
and, in fact, fails with a -502 SQLCODE. You can ignore the error, or add code to
skip the OPEN. This can be done by using a FETCH statement first, and only
running the OPEN statement if the FETCH operation failed.
This technique also applies to prepared statements. A program could first try the
EXECUTE, and if it fails, perform the PREPARE. The result is that the PREPARE
would only be needed on the first call to the program, assuming the correct
CLOSQLCSR option was chosen. Of course, if the statement can change between
calls to the program, it should perform the PREPARE in all cases.
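The try-EXECUTE-first technique can be sketched with a toy model. The Job class,
the StatementNotPrepared exception, and the statement names are invented
stand-ins; a real program would test the SQLCODE returned by EXECUTE instead.

```python
class StatementNotPrepared(Exception):
    """Stand-in for the error EXECUTE returns for an unprepared statement."""

class Job:
    """Toy model of a job that keeps prepared statements across program
    calls, as a suitable CLOSQLCSR option would."""
    def __init__(self):
        self.prepared = {}
        self.prepare_count = 0

    def execute(self, name):
        if name not in self.prepared:
            raise StatementNotPrepared(name)
        return self.prepared[name]

    def prepare(self, name, text):
        self.prepare_count += 1
        self.prepared[name] = text

def run(job, name, text):
    """Try EXECUTE first; PREPARE only if that fails."""
    try:
        return job.execute(name)
    except StatementNotPrepared:
        job.prepare(name, text)
        return job.execute(name)

job = Job()
run(job, "S1", "DELETE FROM STAFF WHERE JOB = ?")
run(job, "S1", "DELETE FROM STAFF WHERE JOB = ?")  # no second PREPARE
```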
The main program could also control this by sending a special parameter on the
first call only. This special parameter value would indicate that because it is the first
call, the subprogram should perform the OPENs, PREPAREs, and LOCKs.
Note: If you are using COBOL programs, do not use the STOP RUN statement.
When the first COBOL program on the call stack ends or a STOP RUN
statement runs, a reclaim resource (RCLRSC) operation is done. This
operation closes the SQL cursor. The *ENDSQL option does not work as
desired.
Some of these options may be suitable for most of your applications. Use the
CRTDUPOBJ command to create a copy of the SQL CRTSQLxxx command, and
the CHGCMDDFT command to customize the default values for the precompile
parameters. The DSPPGM, DSPSRVPGM, DSPMOD, or PRTSQLINF commands
can be used to show the precompile options used for an existing program object.
The original method was to pass each host variable as a separate parameter. For
example:
CALL QSQROUTE
(SQLCA, hostvariable1, hostvariable2, hostvariable3)
The second method is to create a data structure with an element for each host
variable referenced in the statement. Then that data structure could be passed as a
parameter. For example:
CALL QSQROUTE
(SQLCA, hostvariable structure)
Note: The structure parameter passing technique is not used for SQL statements
for special cases in PL/I and RPG for AS/400 programs (see “Differences in
PL/I Because of Structure Parameter Passing Techniques” on page 294 and
“Differences in RPG for AS/400 Because of Structure Parameter Passing
Techniques” on page 307).
The precompiler creates the structure so that the SQL header is first, followed by
the input host variables, the output host variables, the indicators for the input host
variables, and the indicators for the output host variables. Because the output host
variables are created in a contiguous storage space, the SQL run-time support can
check for a match with the I/O buffer (each result column attribute is checked for a
matching host variable attribute) and move the data with one instruction if they all
match.
The SQL header on the structure contains information unique to the statement so
that SQL does not have to reconstruct the information on each call. Some of this
information is added and processed by the SQL run-time support. SQL run-time
support creates an SQLDA internally for each statement that uses host variables.
With the original parameter list, the host variable address could be different for each
call of the statement and, therefore, SQL run-time support rebuilds the SQLDA for
each call of the statement. With structure parameter passing, the structure is
created as a static variable, and the address of the elements will not change. SQL
run-time support builds the SQLDA the first time the statement is called, and saves
the SQLDA so it can be used on future calls of the statement.
With the original type of parameter passing, the number of host variables that could
be referred to in a program was approximately 4000, because of an architecture
limit of 4100 pointers in a program and because each parameter required a pointer.
With the structure parameter passing technique, a single structure parameter
covers all of the host variables in a statement, so this limit is no longer a practical
concern.
In the above example, if ATABLE has only one or two columns, the SQLCODE
will be set to +326. When the assignment to C from the SQL structure is done,
the contents of A and B will be blank instead of the value of the column
corresponding to A and B.
v With the original parameter passing technique, SQLCODE -302 or -304 is
returned when a conversion error occurs (because of numeric data that is not
valid) while processing the data for a host variable. However, with the structure
parameter passing technique, SQL does not detect this error. The conversion
error occurs in the host language statements that reference the host variable. For
example, if a DECIMAL(5,2) input host variable contains the invalid data
’FFFFFF’X, an error will occur in the host language when the data is moved into
the data structure.
v The structure created by SQL uses names that start with the letters SQL. If
existing programs use variable names starting with SQL, those names may
conflict with the SQL-created names.
v The contents of the SQL-created data structure must not be changed by the
application programs.
Even though parallelism has been enabled for a system or given job, the individual
queries that run in a job might not actually use a parallel method. This might be
because of functional restrictions, or because the optimizer chooses a non-parallel
method that runs faster.
Because queries being processed with parallel access methods aggressively use
main storage, CPU, and disk resources, the number of queries that use parallel
processing should be limited and controlled.
The default value of the QQRYDEGREE system value is *NONE, so the value must
be changed if parallel query processing is desired as the default for jobs run on the
system.
Changing this system value affects all jobs that will be run or are currently running
on the system whose DEGREE query attribute is *SYSVAL. However, queries that
have already been started or queries using reusable ODPs are not affected.
Changing the DEGREE query attribute does not affect queries that have already
been started or queries using reusable ODPs.
The syntax of the CHGQRYA command is:
CHGQRYA  JOB(* | [[job-number/]user-name/]job-name)
         QRYTIMLMT(*SAME | *NOMAX | *SYSVAL | seconds)
         DEGREE(*SAME | *NONE | *IO | *OPTIMIZE | *MAX | *SYSVAL | *ANY |
                *NBRTASKS number-of-tasks)
         ASYNCJ(*SAME | *LOCAL | *DIST | *NONE | *ANY)
         APYRMT(*SAME | *YES | *NO)
Notes:
1. Value *ANY is equivalent to value *IO.
2. All parameters preceding this point (the JOB parameter) can be specified in
positional form.
Using a number of tasks less than the number of processors available on the
system restricts the number of processors used simultaneously for running a
given query. A larger number of tasks ensures that the query is allowed to use
all of the processors available on the system to run the query. Too many tasks
can degrade performance because of the overcommitment of active memory
and the overhead cost of managing all of the tasks.
*SYSVAL
Specifies that the processing option used should be set to the current value of
the QQRYDEGREE system value.
*ANY
Parameter value *ANY has the same meaning as *IO. The *ANY value is
maintained for compatibility with prior releases.
See the CL Reference (Abridged) book for more information about the CHGQRYA
command.
See “Performance Information Messages” on page 382 for the specific meanings of
the debug messages.
PRTSQLINF gives output that is similar to the information you can get from debug
messages, but PRTSQLINF must be run against a saved access plan. The query
optimizer automatically logs information messages about the current query
processing when your job is in debug mode. So, query debug messages work at
runtime while PRTSQLINF works retroactively.
You can monitor a specific job or all jobs on the system. The statistics gathered are
placed in the output database file specified on the command. Each job in the
system can be monitored concurrently by two monitors:
v One started specifically on that job
v One started for all jobs in the system
When a job is monitored by two monitors, each monitor is logging records to a
different output file. You can identify records in the output database file by each
record’s unique identification number.
You can use these performance statistics to generate various reports. For instance,
you can include reports that show queries that:
v Use an abundance of the system resources.
v Take an extremely long time to execute.
v Did not run because of the query governor time limit.
v Create a temporary keyed access path during execution.
v Use the query sort during execution.
v Could perform faster with the creation of a keyed logical file containing keys
suggested by the query optimizer.
Note: A query that is cancelled by an end request generally does not generate
performance statistics.
You can also specify a force record write option that allows you to control how
many records are kept in the record buffer of each job being monitored before
forcing the records to be written to the output file. By specifying a force record write
value of 1, FRCRCD(1), monitor records will appear in the log as soon as they are
created. FRCRCD(1) also makes it likely, but not guaranteed, that the physical
sequence of the records matches the time sequence. However, FRCRCD(1) has
the most negative performance impact on the jobs being monitored. By
specifying a larger number for the FRCRCD parameter, the performance impact of
monitoring can be lessened.
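The FRCRCD trade-off can be modeled minimally as a per-job record buffer that is
forced to the output file once it holds FRCRCD records. The MonitorLog class and
its names are invented for illustration, not system objects.

```python
class MonitorLog:
    """Toy model of FRCRCD buffering: records sit in a per-job buffer
    and are written out once `frcrcd` of them accumulate."""
    def __init__(self, frcrcd):
        self.frcrcd = frcrcd
        self.buffer = []
        self.written = []
        self.writes = 0          # each write is one forced (slow) I/O

    def log(self, record):
        self.buffer.append(record)
        if len(self.buffer) >= self.frcrcd:
            self.written.extend(self.buffer)
            self.buffer.clear()
            self.writes += 1

immediate = MonitorLog(frcrcd=1)    # records visible at once, most I/O
batched = MonitorLog(frcrcd=10)     # fewer forced writes, delayed records
for i in range(30):
    immediate.log(i)
    batched.log(i)
```

The same 30 records cost 30 forced writes with FRCRCD(1) but only 3 with
FRCRCD(10), which is why a larger value lessens the monitoring impact.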
If the monitor is started on all jobs, any jobs waiting on job queues or any jobs
started during the monitoring period will have statistics gathered from them once
they begin. If the monitor is started on a specific job, that job must be active in the
system when the command is issued. Each job in the system can be monitored
concurrently by only two monitors:
v One started specifically on that job.
v One started on all jobs in the system.
When a job is monitored by two monitors and each monitor is logging to a different
output file, monitor records will be written to both logs for this job. If both monitors
have selected the same output file then the monitor records are not duplicated in
the output file.
When an all-job monitor is ended, all of the jobs on the system are triggered to
close the output file; however, the ENDDBMON command can complete before all
of the monitored jobs have written their final performance records to the log. Use
the Work with Object Locks (WRKOBJLCK) CL command to verify that the
monitored jobs no longer hold locks on the output file before assuming the
monitoring is complete.
Note: The database monitor logical files are keyed logical files that contain some
select/omit criteria. Therefore, there will be some maintenance overhead
associated with these files while the database monitor is active. The user
may wish to minimize this overhead while the database monitor is active,
especially if monitoring all jobs. When monitoring all jobs the number of
records generated could be quite large.
The index advisor information can be found in the Database Monitor logical files
QQQ3000, QQQ3001 and QQQ3002. The advisor information is stored in fields
QQIDXA, QQIDXK and QQIDXD. When the QQIDXA field contains a value of ’Y’
the optimizer is advising you to create an index using the key fields shown in field
QQIDXD. The intention of creating this index is to improve the performance of the
query.
In the list of key fields contained in field QQIDXD the optimizer has listed what it
considers the suggested primary and secondary key fields. Primary key fields are
fields that should significantly reduce the number of keys selected based on the
corresponding query selection. Secondary key fields are fields that may or may not
significantly reduce the number of keys selected.
The optimizer is able to perform key positioning over any combination of the
primary key fields, plus one additional secondary key field. Therefore it is important
that the first secondary key field be the most selective secondary key field. The
optimizer will use key selection with any of the remaining secondary key fields.
While key selection is not as fast as key positioning it can still reduce the number of
keys selected. Hence, secondary key fields that are fairly selective should be
included.
Field QQIDXK contains the number of suggested primary key fields that are listed in
field QQIDXD. These are the left-most suggested key fields. The remaining key
fields are considered secondary key fields and are listed in order of expected
selectivity based on the query. For example, assuming QQIDXK contains the value
of 4 and QQIDXD specifies 7 key fields, then the first 4 key fields specified in
QQIDXK would be the primary key fields. The remaining 3 key fields would be the
suggested secondary key fields.
It is up to the user to determine the true selectivity of any secondary key fields and
to determine whether those key fields should be included when creating the index.
When building the index the primary key fields should be the left-most key fields
followed by any of the secondary key fields the user chooses and they should be
prioritized by selectivity.
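The QQIDXK/QQIDXD convention above can be captured in a small helper. The
function name and the comma-separated key-field format are assumptions for
illustration; the split itself follows the rule that QQIDXK counts the left-most
primary key fields.

```python
def split_advised_keys(qqidxd, qqidxk):
    """Split the advised key fields (QQIDXD, here assumed to be a
    comma-separated list) into primary and secondary fields using
    QQIDXK, the count of left-most primary key fields."""
    fields = [f.strip() for f in qqidxd.split(",") if f.strip()]
    return fields[:qqidxk], fields[qqidxk:]

# The example from the text: QQIDXK = 4 with 7 advised key fields.
primary, secondary = split_advised_keys(
    "FLD1, FLD2, FLD3, FLD4, FLD5, FLD6, FLD7", 4)
```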
Note: After creating the suggested index and executing the query again, it is
possible that the query optimizer will choose not to use the suggested index.
Sample output of this query is shown in Table 42. The critical thing to understand
is the join criteria:
WHERE A.QQJFLD = B.QQJFLD
AND A.QQUCNT = B.QQUCNT
If the query does not use SQL, the SQL information record (QQQ1000) is not
created. This makes it more difficult to determine which records in LIB/PERFDATA
pertain to which query. When using SQL, record QQQ1000 contains the actual SQL
statement text that matches the performance records to the corresponding query.
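A toy version of that join, using SQLite with drastically simplified stand-ins for the
monitor files (only the two join columns plus one payload column, an assumption
for brevity), shows why the later sample queries use a LEFT OUTER JOIN to
QQQ1000: a query that did not use SQL keeps a NULL statement text instead of
dropping out of the report.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# QQJFLD (join field) plus QQUCNT (unique statement counter)
# together identify one query in every monitor record type.
conn.execute("CREATE TABLE qqq3014 (qqjfld TEXT, qqucnt INTEGER)")
conn.execute("CREATE TABLE qqq1000 (qqjfld TEXT, qqucnt INTEGER, qqsttx TEXT)")
conn.execute("INSERT INTO qqq3014 VALUES ('JOB1', 1)")   # SQL query
conn.execute("INSERT INTO qqq3014 VALUES ('JOB1', 2)")   # non-SQL query
conn.execute(
    "INSERT INTO qqq1000 VALUES ('JOB1', 1, 'SELECT * FROM LIB1/TBL1')")

rows = conn.execute(
    """SELECT a.qqucnt, b.qqsttx
         FROM qqq3014 a LEFT OUTER JOIN qqq1000 b
           ON a.qqjfld = b.qqjfld AND a.qqucnt = b.qqucnt
        ORDER BY a.qqucnt""").fetchall()
```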
In this example, the output for all queries that performed table scans is shown in
Table 43.
Note: The fields selected from file QQQ1000 return NULL default values if the
query was not executed using SQL. For this example, assume the default
value for character data is blanks and the default value for numeric data is
an asterisk (*).
Table 43. Output for All Queries that Performed Table Scans
Lib   Table  Total  Index    Query  ODP Open  Clock  Recs   Rows   TOT_
Name  Name   Rows   Advised  OPNID  Time      Time   Rtned  Rtned  TIME  Statement Text
LIB1  TBL1   20000  Y               1.1       4.7    10     10     6.2   SELECT *
                                                                         FROM LIB1/TBL1
                                                                         WHERE FLD1 = 'A'
LIB1  TBL2   100    N               0.1       0.7    100    100    0.9   SELECT *
                                                                         FROM LIB1/TBL2
LIB1  TBL1   20000  Y               2.6       4.4    32     32     7.1   SELECT *
                                                                         FROM LIB1/TBL1
                                                                         WHERE FLD1 = 'A'
                                                                         AND FLD2 > 9000
LIB1  TBL4   4000   N        QRY04  1.2       4.2    724    *      *
If the SQL statement text is not needed, joining to file QQQ1000 is not necessary.
You can determine the total time and rows selected from data in the QQQ3014 and
QQQ3019 records.
There are two slight modifications from the first example. First, the selected fields
have been changed. Most important is the selection of field QQIDXD that contains a
list of possible key fields to use when creating the index suggested by the query
optimizer. Second, the query selection limits the output to those table scan queries
where the optimizer advises that an index be created (A.QQIDXA = ’Y’). Table 44
shows what the results might look like.
Table 44. Output with Recommended Key Fields
Lib   Table  Index    Advised      Advised  Query
Name  Name   Advised  Key Fields   Primary  OPNID  Statement Text
                                   Key
LIB1  TBL1   Y        FLD1         1               SELECT * FROM LIB1/TBL1
                                                   WHERE FLD1 = 'A'
LIB1  TBL1   Y        FLD1, FLD2   1               SELECT * FROM LIB1/TBL1
                                                   WHERE FLD1 = 'B' AND
                                                   FLD2 > 9000
LIB1  TBL4   Y        FLD1, FLD4   1        QRY04
At this point you should determine whether it makes sense to create a permanent
index as advised by the optimizer. In this example, creating one index over
LIB1/TBL1 would satisfy all three queries, since each uses FLD1 as a primary or
left-most key field. By creating one index over LIB1/TBL1 with key fields FLD1 and
FLD2, there is potential to improve the performance of the second query even
more. When deciding whether to create the suggested index, weigh how frequently
these queries run against the overhead of maintaining an additional index over the
file.
If you create a permanent index over FLD1, FLD2 the next sequence of steps
would be to:
1. Start the performance monitor again
2. Re-run the application
3. End the performance monitor
4. Re-evaluate the data.
It is likely that the three index-advised queries are no longer performing table scans.
Note: You have to refer to the description of field QQDYNR for definitions of
the dynamic replan reason codes.
3. How many indexes have been created over LIB1/TBL1?
SELECT COUNT(*)
FROM LIB/QQQ3002
WHERE QQTLN = 'LIB1'
AND QQTFN = 'TBL1'
4. What key fields are used for all indexes created over LIB1/TBL1 and what is
the associated SQL statement text?
SELECT A.QQTLN, A.QQTFN, A.QQIDXD, B.QQSTTX
FROM LIB/QQQ3002 A, LIB/QQQ1000 B
WHERE A.QQJFLD = B.QQJFLD
AND A.QQUCNT = B.QQUCNT
AND A.QQTLN = 'LIB1'
AND A.QQTFN = 'TBL1'
Note: This query shows key fields only from queries executed using SQL.
5. What key fields are used for all indexes created over LIB1/TBL1 and what was
the associated SQL statement text or query open ID?
SELECT A.QQTLN, A.QQTFN, A.QQIDXD,
B.QQOPID,C.QQSTTX
FROM LIB/QQQ3002 A INNER JOIN LIB/QQQ3014 B
ON (A.QQJFLD = B.QQJFLD AND
A.QQUCNT = B.QQUCNT)
LEFT OUTER JOIN LIB/QQQ1000 C
ON (A.QQJFLD = C.QQJFLD AND
A.QQUCNT = C.QQUCNT)
WHERE A.QQTLN = 'LIB1'
AND A.QQTFN = 'TBL1'
Note: This query shows key fields from all queries on the system.
6. What types of SQL statements are being performed? Which are performed
most frequently?
SELECT QQSTOP, COUNT(*)
FROM LIB/QQQ1000
GROUP BY QQSTOP
ORDER BY 2 DESC
7. Which SQL queries are the most time consuming? Which user is running these
queries?
SELECT (QQETIM - QQSTIM), QQUSER, QQSTTX
FROM LIB/QQQ1000
ORDER BY 1 DESC
8. Which queries are the most time consuming?
SELECT (A.QQTTIM + B.QQCLKT), A.QQOPID, C.QQSTTX
FROM LIB/QQQ3014 A LEFT OUTER JOIN LIB/QQQ3019 B
ON (A.QQJFLD = B.QQJFLD AND
A.QQUCNT = B.QQUCNT)
LEFT OUTER JOIN LIB/QQQ1000 C
ON (A.QQJFLD = C.QQJFLD AND
A.QQUCNT = C.QQUCNT)
ORDER BY 1 DESC
Note: This might be used within a report that formats the interesting data into a
more readable form. For example, all reason code fields could be
expanded by the report to print the definition of the reason code (for
example, field QQRCOD = ’T1’ means a table scan was performed
because no indexes exist over the queried file).
10. How many queries are being implemented with temporary files because a key
length of greater than 2000 bytes or more than 120 key fields was specified for
ordering?
SELECT COUNT(*)
FROM LIB/QQQ3004
WHERE QQRCOD = 'F6'
11. Which SQL queries were implemented with nonreusable ODPs?
SELECT B.QQSTTX
FROM LIB/QQQ3010 A, LIB/QQQ1000 B
WHERE A.QQJFLD = B.QQJFLD
AND A.QQUCNT = B.QQUCNT
AND A.QQODPI = 'N'
12. What is the estimated time for all queries stopped by the query governor?
SELECT QQEPT, QQOPID
FROM LIB/QQQ3014
WHERE QQGVNS = 'Y'
Note: This example assumes detail data has been collected into record
QQQ3019.
13. For which queries does the estimated time exceed the actual time?
SELECT A.QQEPT, (A.QQTTIM + B.QQCLKT), A.QQOPID,
C.QQTTIM, C.QQSTTX
FROM LIB/QQQ3014 A LEFT OUTER JOIN LIB/QQQ3019 B
ON (A.QQJFLD = B.QQJFLD AND
A.QQUCNT = B.QQUCNT)
LEFT OUTER JOIN LIB/QQQ1000 C
ON (A.QQJFLD = C.QQJFLD AND
A.QQUCNT = C.QQUCNT)
WHERE A.QQEPT/1000 > (A.QQTTIM + B.QQCLKT)
Note: This example assumes detail data has been collected into record
QQQ3019.
14. Should a PTF for queries that perform UNIONs be applied? It needs to be
applied only if some queries are performing UNIONs. Do any of the queries
perform this function?
SELECT COUNT(*)
FROM QQQ3014
WHERE QQUNIN = 'Y'
|...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
A*
A* Database Monitor logical file 3003
A*
A R QQQ3003 PFILE(*CURLIB/QAQQDBMN)
A QQRID
A QQTIME
A QQJFLD
A QQRDBN
A QQSYS
A QQJOB
A QQUSER
A QQJNUM
A QQTHRD RENAME(QQI9) +
COLHDG('Thread' +
'Identifier')
A QQUCNT
A QQUDEF
A QQQDTN
A QQQDTL
A QQMATN
A QQMATL
A QQSTIM
A QQETIM
A QQRSS
A QQSSIZ RENAME(QQI1) +
COLHDG('Size of' +
'Sort' +
'Space')
A QQPSIZ RENAME(QQI2) +
COLHDG('Pool' +
'Size')
A QQPID RENAME(QQI3) +
COLHDG('Pool' +
'ID')
A QQIBUF RENAME(QQI4) +
COLHDG('Internal' +
'Buffer' +
'Length')
A QQEBUF RENAME(QQI5) +
COLHDG('External' +
'Buffer' +
'Length')
A QQRCOD
A K QQJFLD
A S QQRID CMP(EQ 3003)
|...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
A*
A* Database Monitor logical file 3006
A*
A R QQQ3006 PFILE(*CURLIB/QAQQDBMN)
A QQRID
A QQTIME
A QQJFLD
A QQRDBN
A QQSYS
A QQJOB
A QQUSER
A QQJNUM
A QQTHRD RENAME(QQI9) +
COLHDG('Thread' +
'Identifier')
A QQUCNT
A QQUDEF
A QQQDTN
A QQQDTL
A QQMATN
A QQMATL
A QQTLN
A QQTFN
A QQTMN
A QQPTLN
A QQPTFN
A QQPTMN
A QQRCOD
A K QQJFLD
A S QQRID CMP(EQ 3006)
AA The sort sequence table specified is different than the sort sequence
table that was used when this access plan was created.
AB Storage pool changed or DEGREE parameter of CHGQRYA command
changed.
AC The system feature DB2 multisystem has been installed or removed.
AD The value of the degree query attribute has changed.
AE A view is either being opened by a high level language or a view is
being materialized.
AF A user-defined type or user-defined function is not the same object as
the one referred to in the access plan.
B0 The options specified have changed as a result of the query options file
QAQQINI.
|...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
A*
A* Database Monitor logical file 3007
A*
A R QQQ3007 PFILE(*CURLIB/QAQQDBMN)
A QQRID
A QQTIME
A QQJFLD
A QQRDBN
A QQSYS
A QQJOB
A QQUSER
A QQJNUM
A QQTHRD RENAME(QQI9) +
COLHDG('Thread' +
'Identifier')
A QQUCNT
A QQUDEF
A QQQDTN
A QQQDTL
A QQMATN
A QQMATL
A QQTLN
A QQTFN
A QQTMN
A QQPTLN
A QQPTFN
A QQPTMN
A QQIDXN RENAME(QQ1000) +
COLHDG('Index' +
'Names')
A QQTOUT RENAME(QQC11) +
COLHDG('Optimizer' +
'Timed Out')
A K QQJFLD
A S QQRID CMP(EQ 3007)
|...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
A*
A* Database Monitor logical file 3008
A*
A R QQQ3008 PFILE(*CURLIB/QAQQDBMN)
A QQRID
A QQTIME
A QQJFLD
A QQRDBN
A QQSYS
A QQJOB
A QQUSER
A QQJNUM
A QQTHRD RENAME(QQI9) +
COLHDG('Thread' +
'Identifier')
A QQUCNT
A QQUDEF
A QQQDTN
A QQQDTL
A QQMATN
A QQMATL
A QQORGQ RENAME(QQI1) +
COLHDG('Original' +
'Number' +
'of QDTs')
A QQMRGQ RENAME(QQI2) +
COLHDG('Number' +
'of QDTs' +
'Merged')
A K QQJFLD
A S QQRID CMP(EQ 3008)
Figure 29. Summary record for Host Variable and ODP Implementation
Table 55. QQQ3010 - Summary record for Host Variable and ODP Implementation
Logical Field Name Physical Field Name Description
QQRID QQRID Record identification
QQTIME QQTIME Time record was created
QQJFLD QQJFLD Join field (unique per job)
QQRDBN QQRDBN Relational database name
QQSYS QQSYS System name
QQJOB QQJOB Job name
QQUSER QQUSER Job user
QQJNUM QQJNUM Job number
QQTHRD QQI9 Thread identifier
QQUCNT QQUCNT Unique count (unique per query)
QQRCNT QQRCNT Unique refresh counter
QQUDEF QQUDEF User defined field
QQODPI QQC11 ODP implementation
R - Reusable ODP (ISV)
N - Nonreusable ODP (V2)
’ ’ - Field not used
|...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
A*
A* Database Monitor logical file 3014
A*
A R QQQ3014 PFILE(*CURLIB/QAQQDBMN)
A QQRID
A QQTIME
A QQJFLD
A QQRDBN
A QQSYS
A QQJOB
A QQUSER
A QQJNUM
A QQTHRD RENAME(QQI9) +
COLHDG('Thread' +
'Identifier')
A QQUCNT
A QQUDEF
A QQQDTN
A QQQDTL
A QQMATN
A QQMATL
A QQREST
A QQEPT
A QQTTIM RENAME(QQI1) +
COLHDG('ODP' +
'Open' 'Time')
A QQORDG
A QQGRPG
A QQJNG
A QQJNTY RENAME(QQC22) +
COLHDG('Join' +
'Type')
The Start Database Monitor (STRDBMON) command can constrain system resources
when collecting performance information. This overhead is mainly attributed to
the fact that performance information is written directly to a database file as
the information is collected. The memory-based collection mode reduces the
system resources consumed by collecting and managing performance results in
memory. This allows the monitor to gather database performance statistics with
a minimal impact on the performance of the system as a whole (or on the
performance of individual SQL statements).
The DBMon monitor collects much of the same information as the STRDBMON
monitor, but the performance statistics are kept in memory. At the expense of some
detail, information is summarized for identical SQL statements to reduce the amount
of information collected. The objective is to get the statistics to memory as fast as
possible while deferring any manipulation or conversion of the data until the
performance data is dumped to a result file for analysis.
The DBMon monitor manages the data in memory, combining and accumulating the
information into a series of record formats. This means that, for each unique
SQL statement, information is accumulated from each run of the statement, and
detail information is collected only for the most expensive statement execution.
| A new set of APIs enables support for the DBMon monitor. An API supports each
| of the following activities:
| v Start the new monitor
| v Dump statistics to files
| v Clear the monitor data from memory
| v Query the monitor status
| v End the new monitor
| When you start the new monitor, information is stored in the local address space of
| each job that the system monitors. As each statement completes, the system
| moves information from the local job space to a common system space. If more
| statements are executed than can fit in this amount of common system space, the
| system drops the statements that have not been executed recently.
The information relating to locked tables (QQQ3005) was omitted, the replan
information was combined into the QAQQQRYI file, and the QQQ3010 information
(ODP and host variable information) is found in both the QAQQ3010 and
QAQQQRYI files.
AA The sort sequence table specified is different than the sort sequence table that
was used when this access plan was created.
AB Storage pool changed or DEGREE parameter of CHGQRYA command
changed.
AC The system feature DB2 multisystem has been installed or removed.
AD The value of the degree query attribute has changed.
AE A view is either being opened by a high level language or a view is being
materialized.
AF A user-defined type or user-defined function is not the same object as the one
referred to in the access plan.
B0 The options specified have changed as a result of the query options file
QAQQINI.
QQC11 Reserved
QQC12 Reserved
QQC21 Reserved
QQC22 Reserved
QQI1 Reserved
QQI2 Reserved
QQC301 Reserved
QQC302 Reserved
QQ1000 Reserved
| If you are using Operations Navigator with the support for the SQL Monitor, you
| can analyze the results directly through the GUI interface. A number of shipped
| queries can be used or modified to extract the information from any of the files.
| The sample query listed below gives the user the Basic Statement Information
| about all of the statements that were monitored.
| SELECT
| /* Database Performance Monitor Basic Statement Information */
|
| /* Time */
| a.QQTIME as "Time",
|
| /* Costs */
| DECIMAL(QQMAXT/1000,18,3) as "Maximum Runtime",
| DECIMAL(QQAVGT/1000,18,3) as "Average Runtime",
| DECIMAL(QQMINT/1000,18,3) as "Minimum Runtime",
| DECIMAL(QQOPNT/1000,18,3) as "Maximum Open Time",
| DECIMAL(QQFETT/1000,18,3) as "Maximum Fetch Time ",
| DECIMAL(QQCLST/1000,18,3) as "Maximum Close Time",
| DECIMAL(QQOTHT/1000,18,3) as "Maximum Other Time ",
| QQMETU as "Most Expensive Use",
| QQLTU as "Last Use",
|
| /* Statement Identification */
| CASE QQSTOP
| Record Identification
| A new join key field has been generated (QQKEY) to ease joining multiple physical
| files together. This field replaces the job (QQJOB) and unique query counters
| (QQCNT) that the existing database monitor used. The join key field contains a
| unique identifier that allows all of the information for this query to be retrieved from
| each of the physical files.
| This join key field does not replace all of the detail fields that are still required to
| identify the specific information about the individual steps of a query. The Query
| Definition Template (QDT) Number or the Subselect Number identifies information
| about each detailed step. Use these fields to identify which records belong to each
| step of the query process:
| v QQQDTN - Query Definition Template Number
| v QQQDTL - Query Definition Template Subselect Number (Subquery)
| v QQMATN - Materialized Query Definition Template Number (View)
| v QQMATL - Materialized Query Definition Template Subselect Number (View w/
| Subquery)
| Use these fields when the monitored query contains a subquery, union, or a view
| operation. All query types can generate multiple QDTs to satisfy the original query
| request. The system uses these fields to separate the information for each QDT
| while still allowing each QDT to be identified as belonging to this original query
| (QQKEY).
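| As a sketch, a query of the following form lists the monitored records for
| each query in QDT order. MYLIB is a placeholder for the library that holds the
| dumped result file:
| SELECT QQKEY, QQQDTN, QQQDTL, QQMATN, QQMATL
| FROM MYLIB/QAQQQRYI
| ORDER BY QQKEY, QQQDTN, QQQDTL, QQMATN, QQMATL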
| Before the system initiates a query, the system checks the query time limit against
| the estimated elapsed query time. The system also uses a time limit of zero to
| optimize performance on queries without having to run through several iterations.
| You can check the inquiry message CPA4259 for the predicted runtime and for what
| operations the query will perform. If the query is cancelled, debug messages will be
| written to the joblog.
| The query governor, described in “Chapter 23. Using the DB2 UDB for AS/400
| Predictive Query Governor” on page 391, can stop the initiation of a query if
| the query’s estimated or predicted runtime (elapsed execution time) is
| excessive. The governor acts before a query is run instead of while a query is
| run. The governor can be used in any interactive or batch job on the AS/400. It
| can be used with all DB2 UDB for AS/400 query interfaces and is not limited to
| use with SQL queries.
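| For example, the following CL command sets a time limit of 600 seconds for
| queries run in the current job; a query whose estimated runtime exceeds that
| limit is not started:
| CHGQRYA QRYTIMLMT(600)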
The query options file QAQQINI is used to set the attributes used by the Query
Optimizer. For each query that is run, the query option values are retrieved
from the QAQQINI file in the library specified on the QRYOPTLIB parameter of
the CHGQRYA CL command and are used to optimize or implement the query.
If no library is specified for this parameter, then the library QUSRSYS is searched
for the existence of the QAQQINI file. If a query options file is not found for a query,
no attributes will be modified. The initial value of the QRYOPTLIB parameter for a
job is QUSRSYS.
Note: It is recommended that the file QAQQINI, in QSYS, not be modified. This is
the original template that is to be duplicated into QUSRSYS or a user
specified library for use.
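One way to duplicate the template is the Create Duplicate Object CL command.
In this sketch, MYLIB is a placeholder for the user-specified library:
CRTDUPOBJ OBJ(QAQQINI) FROMLIB(QSYS) OBJTYPE(*FILE) TOLIB(MYLIB) DATA(*YES)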
| The QAQQINI file shipped in the library QSYS has been pre-populated with the
| following records:
Table 71. QAQQINI File Records
QQPARM QQVAL
APPLY_REMOTE *DEFAULT
ASYNC_JOB_USAGE *DEFAULT
FORCE_JOIN_ORDER *DEFAULT
MESSAGES_DEBUG *DEFAULT
OPTIMIZE_STATISTIC_LIMITATION *DEFAULT
PARALLEL_DEGREE *DEFAULT
PARAMETER_MARKER_CONVERSION *DEFAULT
QUERY_TIME_LIMIT *DEFAULT
UDF_TIME_OUT *DEFAULT
For the following examples, a QAQQINI file has already been created in library
MyLib. To update an existing record in MyLib/QAQQINI, use the UPDATE SQL
statement. This example sets MESSAGES_DEBUG = *YES so that the query
optimizer prints the optimizer debug messages:
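A statement of this form makes that change. This sketch assumes the parameter
name is stored in field QQPARM and its value in field QQVAL, as in the INSERT
example that follows:
UPDATE MyLib/QAQQINI
SET QQVAL = '*YES'
WHERE QQPARM = 'MESSAGES_DEBUG'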
To insert a new record into MyLib/QAQQINI use the INSERT SQL statement. This
example adds the QUERY_TIME_LIMIT record with a value of *NOMAX to the
QAQQINI file:
INSERT INTO MyLib/QAQQINI
VALUES('QUERY_TIME_LIMIT','*NOMAX','New time limit set by DBAdmin')
The query options file, which resides in the library specified on the CHGQRYA CL
command QRYOPTLIB parameter, is always used by the query optimizer. This is
true even if the user has no authority to the query options library and file. This
provides the system administrator with an additional security mechanism.
When the QAQQINI file resides in the library QUSRSYS, the query options will
affect all of the query users on the system. To prevent anyone from inserting,
deleting, or updating the query options, the system administrator should remove
update authority from *PUBLIC to the file. This prevents users from changing
the data in the file.
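For example, a command of the following form removes *PUBLIC update authority
from the file. This is a sketch; verify the authority values against your
security policy:
RVKOBJAUT OBJ(QUSRSYS/QAQQINI) OBJTYPE(*FILE) USER(*PUBLIC) AUT(*UPD)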
When the QAQQINI file resides in a user library specified on the CHGQRYA CL
command option QRYOPTLIB, the query options will affect all of the queries run
for that user’s job. To prevent the query options from being retrieved from a
particular library, the system administrator can revoke authority to the
CHGQRYA CL command.
If an error occurs on the update of the QAQQINI file (an INSERT, DELETE, or
UPDATE operation), the following SQL0443 diagnostic message will be issued:
Trigger program or external routine detected an error.
To retrieve the same rows in reverse order, simply specify that the order is
descending, as in this statement:
SELECT * FROM DEPARTMENT
WHERE LOCATION = 'MINNESOTA'
ORDER BY DEPTNO DESC
A cursor on the second statement would retrieve rows in exactly the opposite order
from a cursor on the first statement. But that is guaranteed only if the first statement
specifies a unique ordering.
If both statements are required in the same program, it might be useful to have two
indexes on the DEPTNO column, one in ascending order and one in descending
order.
Once the cursor is positioned at the end of the table, the program can use the
PRIOR or RELATIVE scroll options to position and fetch data starting from the end
of the table.
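For example, a program could position to the end of the table and then read
backward. This is a sketch only; the cursor name C1 and host structure
DEPT-ROW are placeholders:
EXEC SQL
FETCH LAST FROM C1 INTO :DEPT-ROW
END-EXEC.
EXEC SQL
FETCH PRIOR FROM C1 INTO :DEPT-ROW
END-EXEC.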
If a multiple-row FETCH statement has been specified and run, the cursor is
positioned on the last row of the block. Therefore, if the WHERE CURRENT OF
clause is specified on the UPDATE statement, the last row in the block is updated.
If a row within the block must be updated, the program must first position the cursor
on that row. Then the UPDATE WHERE CURRENT OF can be specified. Consider
the following example:
Table 73. Updating a Table
Scrollable Cursor SQL Statement Comments
EXEC SQL
DECLARE THISEMP DYNAMIC SCROLL CURSOR FOR
SELECT EMPNO, WORKDEPT, BONUS
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = ’D11’
FOR UPDATE OF BONUS
END-EXEC.
EXEC SQL
OPEN THISEMP
END-EXEC.
EXEC SQL
WHENEVER NOT FOUND
GO TO CLOSE-THISEMP
END-EXEC.
EXEC SQL
FETCH NEXT FROM THISEMP
FOR 5 ROWS
INTO :DEPTINFO :IND-ARRAY
END-EXEC.
(DEPTINFO and IND-ARRAY are declared in the program as a host structure array
and an indicator array.)
... determine if any employees in department D11 receive a
bonus less than $500.00. If so, update that record to the new
minimum of $500.00.
EXEC SQL
FETCH RELATIVE :NUMBACK FROM THISEMP
END-EXEC.
(... positions to the record in the block to update by fetching in the reverse
order.)
... branch back to fetch and process the next block of rows.
CLOSE-THISEMP.
EXEC SQL
CLOSE THISEMP
END-EXEC.
Restrictions
You cannot use FOR UPDATE OF with a select-statement if any of the following
are true:
v The first FROM clause identifies more than one table or view.
v The first FROM clause identifies a read-only view.
v The first SELECT clause specifies the keyword DISTINCT.
v The outer subselect contains a GROUP BY clause.
v The outer subselect contains a HAVING clause.
v The first SELECT clause contains a column function.
v The select-statement contains a subquery such that the base object of the outer
subselect and of the subquery is the same table.
v The select-statement contains a UNION or UNION ALL operator.
v The select-statement includes a FOR READ ONLY clause.
v The SCROLL keyword is specified without DYNAMIC.
If a FOR UPDATE OF clause is specified, you cannot update columns that were not
named in the FOR UPDATE OF clause. But you can name columns in the FOR
UPDATE OF clause that are not in the SELECT list, as in this example:
SELECT A, B, C FROM TABLE
FOR UPDATE OF A,E
Do not name more columns than you need in the FOR UPDATE OF clause;
indexes on those columns are not used when you access the table.
See the DB2 UDB for AS/400 SQL Reference book for information on how to use
the SQL ALTER TABLE statement. See the CL Reference (Abridged) book for
information on how to use the Change Physical File (CHGPF) CL command.
You can also dynamically create a view of the table, which includes only the
columns you want, in the order you want.
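For example, this sketch creates a view over the DEPARTMENT table that exposes
only two of its columns. The view name DEPTV and library MYLIB are
placeholders:
CREATE VIEW MYLIB/DEPTV AS
SELECT DEPTNO, LOCATION
FROM DEPARTMENT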
DB2 UDB for AS/400 supports two levels of distributed relational database:
v Remote unit of work (RUW)
Remote unit of work is where the preparation and running of SQL statements
occurs at only one application server during a unit of work. DB2 UDB for AS/400
supports RUW over either APPC or TCP/IP.
v Distributed unit of work (DUW)
Distributed unit of work is where the preparation and running of SQL statements
can occur at multiple application servers during a unit of work. However, a
single SQL statement can only refer to objects located at a single application
server. DB2 UDB for AS/400 supports DUW over APPC only.
For more information on the SQL precompiler commands, see the topic
“Chapter 18. Preparing and Running a Program with SQL Statements” on page 333.
The Create SQL Package (CRTSQLPKG) command lets you create an SQL
package from an SQL program that was created as a distributed program. Syntax
and parameter definitions for the CRTSQLPKG and CRTSQLxxx commands are
provided in Appendix D. DB2 UDB for AS/400 CL Command Descriptions.
To use these files and members, you need to run the SETUP batch job located in
the file QSQL/QSQSAMP. The SETUP batch job allows you to customize the
example to do the following:
v Create the QSQSAMP library at the local and remote locations.
v Set up relational database directory entries at the local and remote locations.
v Create application panels at the local location.
v Precompile, compile, and run programs to create distributed sample application
collections, tables, indexes, and views.
v Load data into the tables at the local and remote locations.
v Precompile and compile programs.
v Create SQL packages at the remote location for the application programs.
v Precompile, compile, and run the program to update the location column in the
department table.
Before running the SETUP, you may need to edit the SETUP member of the
QSQL/QSQSAMP file. Instructions are included in the member as comments. To
run the SETUP, specify the following command on the AS/400 command line:
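Because SETUP is a batch job stream stored in a source member, a submission of
the following form could be used. This is a sketch; verify the file and member
names on your system:
========> SBMDBJOB FILE(QSQL/QSQSAMP) MBR(SETUP)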
To use the sample program, specify the following command on the command line:
========> ADDLIBLE QSQSAMP
The following display appears. From this display, you can customize your database
sample program.
DB2 for OS/400 ORGANIZATION APPLICATION
DATA.............: _______________________________
Bottom
F3=Exit
(C) COPYRIGHT IBM CORP. 1982, 1991
The Delete SQL Package (DLTSQLPKG) command allows you to delete an SQL
package on the local system.
An SQL package is not created unless the privileges held by the authorization ID
associated with the creation of the SQL package includes appropriate authority for
creating a package on the remote system (the application server). To run the
program, the authorization ID must include EXECUTE privileges on the SQL
package. On AS/400 systems, the EXECUTE privilege includes system authority of
*OBJOPR and *EXECUTE.
The syntax for the Create SQL Package (CRTSQLPKG) command is shown in
Appendix D. DB2 UDB for AS/400 CL Command Descriptions.
CRTSQLPKG Authorization
When creating an SQL package on an AS/400 system the authorization ID used
must have *USE authority to the CRTSQLPKG command.
The precompiler listing should be checked for unexpected messages when running
with a GENLVL greater than 10. When you are creating a package for a DB2
Universal Database, you must set the GENLVL parameter to a value less than 20.
If the RDB parameter specifies a system that is not a DB2 UDB for AS/400 system,
then the following options should not be used on the CRTSQLxxx command:
v COMMIT(*NONE)
v OPTION(*SYS)
v DATFMT(*MDY)
v DATFMT(*DMY)
v DATFMT(*JUL)
v DATFMT(*YMD)
v DATFMT(*JOB)
v DYNUSRPRF(*OWNER)
v TIMFMT(*HMS) if TIMSEP(*BLANK) or TIMSEP(’,’) is specified
v SRTSEQ(*JOBRUN)
Unit of Work
Because package creation implicitly performs a commit or rollback, the commit
definition must be at a unit of work boundary before the package creation is
attempted. The following conditions must all be true for a commit definition to be at
a unit of work boundary:
v SQL is at a unit of work boundary.
v There are no local or DDM files open using commitment control and no closed
local or DDM files with pending changes.
v There are no API resources registered.
v There are no LU 6.2 resources registered that are not associated with DRDA or
DDM.
Labels
You can use the LABEL ON statement to create a description for the SQL package.
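For example, this sketch labels a package; the collection and package names
are placeholders:
LABEL ON PACKAGE MYCOLL.MYPKG
IS 'Distributed order entry package'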
Consistency Token
The program and its associated SQL package contain a consistency token that is
checked when a call is made to the SQL package. The consistency tokens must
match or the package cannot be used. It is possible for the program and SQL
package to appear to be uncoordinated. Assume the program is on the AS/400
system and the application server is another AS/400 system. The program is
running in session A and it is recreated in session B (where the SQL package is
also recreated). The next call to the program in session A could result in a
consistency token error. To avoid locating the SQL package on each call, SQL
maintains a list of addresses for SQL packages that are used by each session.
When session B re-creates the SQL package, the old SQL package is moved to the
QRPLOBJ library. The address to the SQL package in session A is still valid. (This
situation can be avoided by creating the program and SQL package from the
session that is running the program, or by submitting a remote command to delete
the old SQL package before creating the program.)
Before requesting that the remote system create an SQL package, the application
requester always converts the name specified on the RDB parameter, SQL package
name, library name, and the text of the SQL package from the CCSID of the job to
CCSID 500. This is required by DRDA. When the remote relational database is an
AS/400 system, the names are not converted from CCSID 500 to the job CCSID.
It is recommended that delimited identifiers not be used for table, view, index,
collection, library, or SQL package names. Conversion of names does not occur
between systems with different CCSIDs. Consider the following example with
system A running with a CCSID of 37 and system B running with a CCSID of 500.
v Create a program that creates a table with the name "a¬b|c" on system A.
v Save program "a¬b|c" on system A, then restore it to system B.
v The code point for ¬ in CCSID 37 is x'5F' while in CCSID 500 it is x'BA'.
v On system B the name would display "a[b]c". If you created a program that
referenced the table whose name was "a¬b|c", the program would not find the
table.
The at sign (@), pound sign (#), and dollar sign ($) characters should not be used
in SQL object names. Their code points depend on the CCSID used. If you use
delimited names or the three national extenders, the name resolution functions may
possibly fail in a future release.
TCP/IP terminology does not include the term ’conversation’. A similar concept
exists, however. With the advent of TCP/IP support by DRDA, use of the term
’conversation’ will be replaced, in this book, by the more general term ’connection’,
unless the discussion is specifically about an APPC conversation. Therefore, there
are now two different types of connections about which the reader must be aware:
SQL connections of the type described above, and ’network’ connections which
replace the term ’conversation’.
Where there would be the possibility of confusion between the two types of
connections, the word will be qualified by ’SQL’ or ’network’ to allow the reader to
understand the intended meaning.
SQL connections are managed at the activation group level. Each activation group
within a job manages its own connections and these connections are not shared
across activation groups. For programs that run in the default activation group,
connections are still managed as they were prior to Version 2 Release 3.
[Figure RV2W577-3: A job on the local system runs PGM2 in a system-named
activation group and PGM3 in activation group APPGRP. PGM2 connects to remote
system SYSC, where the SQL package for PGM2 resides in a job's default
activation group; PGM3 connects to remote system SYSD, where the SQL package
for PGM3 resides in a job's default activation group.]
[Figure RV2W578-2: Jobs on system SYSA connect from the default activation
group and from a system-named activation group to jobs on system SYSB.]
SQL will end any active connections in the default activation group when SQL
becomes not active. SQL becomes not active when:
v The application requester detects the first active SQL program for the process
has ended and the following are all true:
– There are no pending SQL changes
– There are no connections using protected connections
– A SET TRANSACTION statement is not active
– No programs that were precompiled with CLOSQLCSR(*ENDJOB) were run.
For a distributed program, the implicit SQL connection is made to the relational
database specified on the RDB parameter. For a nondistributed program, the
implicit SQL connection is made to the local relational database.
Distributed Support
DB2 UDB for AS/400 supports two levels of distributed relational database:
v Remote unit of work (RUW)
Remote unit of work is where the preparation and running of SQL statements
occurs at only one application server during a unit of work. An activation group
with an application process at an application requester can connect to an
application server and, within one or more units of work, run any number of static
or dynamic SQL statements that refer to objects on the application server.
Remote unit of work is also referred to as DRDA level 1.
The sync point manager is a system component that coordinates commit and
rollback operations among the participants in the two-phase commit protocol.
When running distributed updates, the sync point managers on the different
systems cooperate to ensure that resources reach a consistent state. The
protocols and flows used by sync point managers are also referred to as
two-phase commit protocols.
The type of data transport protocols used between systems affects whether the
network connection is protected or unprotected. In OS/400 V4R2, TCP/IP
connections are always unprotected; thus they can participate in a distributed unit
of work in only a limited way.
For example, if the first connection made from the program is to an AS/400 over
TCP/IP, updates can be performed over it, but any subsequent connections, even
over APPC, will be read only.
Note that when using Interactive SQL, the first SQL connection is to the local
system. Therefore in order to make updates to a remote system using TCP/IP,
you must do a RELEASE ALL followed by a COMMIT to end all SQL connections
before doing the CONNECT TO remote-tcp-system.
For more information on two-phase and one-phase resources, see the Backup and
Recovery book.
The following table summarizes the type of connection that will result for remote
distributed unit of work connections. SQLERRD(4) is set on successful CONNECT
and SET CONNECTION statements.
Table 74. Summary of Connection Type

Connect under  Application      Application       Other Updateable
Commitment     Server Supports  Server Supports   One-phase Resource
Control        Two-phase        Distributed Unit  Registered           SQLERRD(4)
               Commit           of Work
No             No               No                No                   2
No             No               No                Yes                  2
Furthermore, when protected connections are inactive and the DDMCNV job
attribute is *KEEP, these unused DDM connections will also cause the CONNECT
statements in programs compiled with RUW connection management to fail.
If running with RUW connection management and using the job-level commitment
definition, then there are some restrictions.
v If the job-level commitment definition is used by more than one activation group,
all RUW connections must be to the local relational database.
v If the connection is remote, only one activation group may use the job-level
commitment definition for RUW connections.
Ending Connections
Because remote connections use resources, connections that are no longer going
to be used should be ended as soon as possible. Connections can be ended
implicitly or explicitly. For a description of when connections are implicitly ended see
“Implicit Connection Management for the Default Activation Group” on page 565 and
“Implicit Connection Management for Nondefault Activation Groups” on page 566.
Connections can be explicitly ended by either the DISCONNECT statement or the
RELEASE statement followed by a successful COMMIT. The DISCONNECT
statement can only be used with connections that use unprotected connections or
with local connections. The DISCONNECT statement will end the connection when
the statement is run. The RELEASE statement can be used with either protected or
unprotected connections. When the RELEASE statement is run, the connection is
not ended but instead placed into the released state. A connection that is in the
released state can still be used. The connection is not ended until a successful
COMMIT is run. A ROLLBACK or an unsuccessful COMMIT will not end a
connection in the released state.
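For example, the two ways of explicitly ending a connection to an application
server named SYSB can be sketched in embedded SQL as follows:
EXEC SQL DISCONNECT SYSB;
EXEC SQL RELEASE SYSB;
EXEC SQL COMMIT;
The DISCONNECT form ends the connection immediately; the RELEASE form takes
effect only at the next successful COMMIT.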
The Reclaim DDM connections (RCLDDMCNV) command may be used to end all
unused connections.
....
EXEC SQL WHENEVER SQLERROR GO TO done;
EXEC SQL WHENEVER NOT FOUND GO TO done;
....
EXEC SQL
DECLARE C1 CURSOR WITH HOLD FOR
SELECT PARTNO, PRICE
FROM PARTS
WHERE SITES_UPDATED = 'N'
FOR UPDATE OF SITES_UPDATED;
/* Connect to the systems */
EXEC SQL CONNECT TO LOCALSYS;
EXEC SQL CONNECT TO SYSB;
EXEC SQL CONNECT TO SYSC;
/* Make the local system the current connection */
EXEC SQL SET CONNECTION LOCALSYS;
/* Open the cursor */
EXEC SQL OPEN C1;
In this program, there are three active application servers: LOCALSYS, which is
the local system, and two remote systems, SYSB and SYSC. SYSB and SYSC also support
distributed unit of work and two-phase commit. Initially all connections are made
active by using the CONNECT statement for each of the application servers
involved in the transaction. When using DUW, a CONNECT statement does not
disconnect the previous connection, but instead places the previous connection in
...
EXEC SQL
SET CONNECTION SYS5
END-EXEC.
...
* Check if the connection is updateable.
EXEC SQL CONNECT END-EXEC.
* If connection is updateable, update sales information otherwise
* inform the user.
IF SQLERRD(3) = 1 THEN
EXEC SQL
INSERT INTO SALES_TABLE
VALUES(:SALES-DATA)
END-EXEC
ELSE
DISPLAY 'Unable to update sales information at this time'.
...
The following distributed unit of work example shows how the same cursor name is
opened in two different connections, resulting in two instances of cursor C1.
These calls allow the ARD program to pass the SQL statements and information
about the statements to a remote relational database and return the results to the
system. The system then returns the results to the application or the user. Access
to relational databases through ARD programs appears the same as access to DRDA
application servers in an unlike environment.
For more information about application requester driver programs, see the System
API Reference.
Note: Not all negative SQLCODEs are dumped; only those that can be used to
produce an APAR are dumped. For more information on handling problems
on distributed relational database operations, see the Distributed Database
Problem Determination Guide.
EMP_ACT
EMPNO PROJNO ACTNO EMPTIME EMSTDATE EMENDATE
000010 AD3100 10 .50 1982-01-01 1982-07-01
000070 AD3110 10 1.00 1982-01-01 1983-02-01
000230 AD3111 60 1.00 1982-01-01 1982-03-15
000230 AD3111 60 .50 1982-03-15 1982-04-15
000230 AD3111 70 .50 1982-03-15 1982-10-15
000230 AD3111 80 .50 1982-04-15 1982-10-15
000230 AD3111 180 1.00 1982-10-15 1983-01-01
000240 AD3111 70 1.00 1982-02-15 1982-09-15
000240 AD3111 80 1.00 1982-09-15 1983-01-01
000250 AD3112 60 1.00 1982-01-01 1982-02-01
000250 AD3112 60 .50 1982-02-01 1982-03-15
000250 AD3112 60 .50 1982-12-01 1983-01-01
000250 AD3112 60 1.00 1983-01-01 1983-02-01
000250 AD3112 70 .50 1982-02-01 1982-03-15
000250 AD3112 70 1.00 1982-03-15 1982-08-15
000250 AD3112 70 .25 1982-08-15 1982-10-15
000250 AD3112 80 .25 1982-08-15 1982-10-15
000250 AD3112 80 .50 1982-10-15 1982-12-01
000250 AD3112 180 .50 1982-08-15 1983-01-01
000260 AD3113 70 .50 1982-06-15 1982-07-01
000260 AD3113 70 1.00 1982-07-01 1983-02-01
000260 AD3113 80 1.00 1982-01-01 1982-03-01
000260 AD3113 80 .50 1982-03-01 1982-04-15
000260 AD3113 180 .50 1982-03-01 1982-04-15
000260 AD3113 180 1.00 1982-04-15 1982-06-01
000260 AD3113 180 .50 1982-06-01 1982-07-01
000270 AD3113 60 .50 1982-03-01 1982-04-01
000270 AD3113 60 1.00 1982-04-01 1982-09-01
000270 AD3113 60 .25 1982-09-01 1982-10-15
000270 AD3113 70 .75 1982-09-01 1982-10-15
000270 AD3113 70 1.00 1982-10-15 1983-02-01
000270 AD3113 80 1.00 1982-01-01 1982-03-01
000270 AD3113 80 .50 1982-03-01 1982-04-01
000030 IF1000 10 .50 1982-06-01 1983-01-01
000130 IF1000 90 1.00 1982-01-01 1982-10-01
000130 IF1000 100 .50 1982-10-01 1983-01-01
PROJECT
PROJNO PROJNAME DEPTNO RESPEMP PRSTAFF PRSTDATE PRENDATE MAJPROJ
AD3100 ADMIN SERVICES D01 000010 6.5 1982-01-01 1983-02-01 ?
AD3110 GENERAL ADMIN SYSTEMS D21 000070 6 1982-01-01 1983-02-01 AD3100
AD3111 PAYROLL PROGRAMMING D21 000230 2 1982-01-01 1983-02-01 AD3110
AD3112 PERSONNEL PROGRAMMING D21 000250 1 1982-01-01 1983-02-01 AD3110
AD3113 ACCOUNT PROGRAMMING D21 000270 2 1982-01-01 1983-02-01 AD3110
IF1000 QUERY SERVICES C01 000030 2 1982-01-01 1983-02-01 ?
IF2000 USER EDUCATION C01 000030 1 1982-01-01 1983-02-01 ?
MA2100 WELD LINE AUTOMATION D01 000010 12 1982-01-01 1983-02-01 ?
MA2110 WL PROGRAMMING D11 000060 9 1982-01-01 1983-02-01 MA2100
MA2111 W L PROGRAM DESIGN D11 000220 2 1982-01-01 1982-12-01 MA2110
This appendix lists SQLCODEs and their associated SQLSTATEs. There are many
other SQL messages, but they are not listed here. Detailed descriptions of all DB2
UDB for AS/400 messages, including SQLCODEs, are available on-line and can be
displayed and printed from the Display Message Description display. You can
access this display by using the CL command Display Message Description
(DSPMSGD).
| If you wish, you can quickly reference “Positive SQLCODEs” on page 589 or
| “Negative SQLCODEs” on page 591.
If SQL encounters an error while processing the statement, the SQLCODE is a
negative number and the first two characters of the SQLSTATE are not '00', '01',
or '02'. If SQL encounters a warning but otherwise valid condition while processing
your statement, the SQLCODE is a positive number and bytes one and two of the
SQLSTATE are '01'. If your SQL statement is processed without encountering an
error or warning condition, the SQLCODE returned is 0 and the SQLSTATE is '00000'.
An application can also send the SQL message corresponding to any SQLCODE to
the job log by specifying the message ID and the replacement text on the CL
commands Retrieve Message (RTVMSG), Send Program Message
(SNDPGMMSG), and Send User Message (SNDUSRMSG).
For a list of SQLSTATEs that are used by the DB2 family of products, see the IBM SQL
Reference, Version 2, SC26-8416, also available on CD-ROM as part of the
Transaction Processing Collection Kit, SK2T-0730-11.
When an SQLSTATE other than '00000' is returned from a non-DB2 UDB for
AS/400 application server, DB2 UDB for AS/400 attempts to map the SQLSTATE to
a DB2 UDB for AS/400 SQLCODE and message:
v If the SQLSTATE is not recognized by DB2 UDB for AS/400, the common
message for the class is issued.
Positive SQLCODEs
SQL0191 SQLCODE +191 SQLSTATE 01547
Explanation: MIXED data not properly formed.

SQL0204 SQLCODE +204 SQLSTATE 01532
Explanation: Object &1 in &2 type *&3 not found.

| SQL0237 SQLCODE +237 SQLSTATE 01005
| Explanation: Not enough SQLVAR entries were provided in the SQLDA.

| SQL0239 SQLCODE +239 SQLSTATE 01005
| Explanation: Not enough SQLVAR entries were provided in the SQLDA.

SQL0304 SQLCODE +304 SQLSTATE 01515, 01547, 01565
Explanation: Conversion error in assignment to host variable &2.

SQL0326 SQLCODE +326 SQLSTATE 01557
Explanation: Too many host variables specified.

SQL0331 SQLCODE +331 SQLSTATE 01520
Explanation: Character conversion cannot be performed.

SQL0335 SQLCODE +335 SQLSTATE 01517
Explanation: Character conversion has resulted in substitution characters.

| SQL0360 SQLCODE +360 SQLSTATE 01627
| Explanation: Datalink in table &1 in &2 may not be valid due to pending links.

SQL0403 SQLCODE +403 SQLSTATE 01522
Explanation: Alias &1 in &2 created but table or view not found.

SQL0420 SQLCODE +420 SQLSTATE 01565
Explanation: Character in CAST argument not valid.

SQL0460 SQLCODE +460 SQLSTATE 01593
Explanation: Truncation of data may have occurred for ALTER TABLE in &1 of &2.

SQL0551 SQLCODE +551 SQLSTATE 01548
Explanation: Not authorized to object &1 in &2 type *&3.

SQL0552 SQLCODE +552 SQLSTATE 01542
Explanation: Not authorized to &1.

SQL0569 SQLCODE +569 SQLSTATE 01006
Explanation: Not all requested privileges revoked from object &1 in &2 type &3.

SQL0570 SQLCODE +570 SQLSTATE 01007
Explanation: Not all requested privileges to object &1 in &2 type &3 granted.

SQL0595 SQLCODE +595 SQLSTATE 01526
Explanation: Commit level &1 escalated to &2 lock.

SQL0596 SQLCODE +596 SQLSTATE 01002
Explanation: Error occurred during disconnect.

SQL0645 SQLCODE +645 SQLSTATE 01528
Explanation: WHERE NOT NULL clause ignored for index &1 in &2.

SQL0802 SQLCODE +802 SQLSTATE 01519, 01547, 01564, 01565
Explanation: Data conversion or data mapping error.

SQL0863 SQLCODE +863 SQLSTATE 01539
Explanation: Mixed or DBCS CCSID not supported by relational database &1.

SQL0990 SQLCODE +990 SQLSTATE 01587
Explanation: Outcome unknown for the unit of work.
Negative SQLCODEs
SQL0007 SQLCODE -07 SQLSTATE 42601
Explanation: Character &1 (HEX &2) not valid in SQL statement.

SQL0010 SQLCODE -10 SQLSTATE 42603
Explanation: String constant beginning &1 not delimited.

SQL0029 SQLCODE -29 SQLSTATE 42601
Explanation: INTO clause missing from embedded SELECT statement.

SQL0051 SQLCODE -51 SQLSTATE 3C000
Explanation: Cursor or procedure &1 previously declared.

SQL0060 SQLCODE -60 SQLSTATE 42815
Explanation: Value &3 for argument &1 of &2 function not valid.

SQL0080 SQLCODE -80 SQLSTATE 42978
Explanation: Indicator variable &1 not SMALLINT type.

SQL0084 SQLCODE -84 SQLSTATE 42612
Explanation: SQL statement not allowed.

SQL0090 SQLCODE -90 SQLSTATE 42618
Explanation: Host variable not permitted here.

| SQL0097 SQLCODE -97 SQLSTATE 42601
| Explanation: Use of data type not valid.

SQL0099 SQLCODE -99 SQLSTATE 42992
Explanation: Operator in join condition not valid.

SQL0101 SQLCODE -101 SQLSTATE 54001, 54010, 54011
Explanation: SQL statement too long or complex.

SQL0102 SQLCODE -102 SQLSTATE 54002
Explanation: String constant beginning with &1 too long.

SQL0103 SQLCODE -103 SQLSTATE 42604
Explanation: Numeric constant &1 not valid.

SQL0104 SQLCODE -104 SQLSTATE 42601
Explanation: Token &1 was not valid. Valid tokens: &2.

SQL0105 SQLCODE -105 SQLSTATE 42604
Explanation: Mixed or graphic string constant not valid.

SQL0106 SQLCODE -106 SQLSTATE 42611
Explanation: Precision specified for FLOAT column not valid.

SQL0107 SQLCODE -107 SQLSTATE 42622
Explanation: &1 too long. Maximum &2 characters.

SQL0109 SQLCODE -109 SQLSTATE 42601
Explanation: &1 clause not allowed.

SQL0110 SQLCODE -110 SQLSTATE 42606
Explanation: Hexadecimal constant beginning with &1 not valid.

SQL0112 SQLCODE -112 SQLSTATE 42607
Explanation: Argument of function &1 is another function.

SQL0113 SQLCODE -113 SQLSTATE 28000, 2E000, 42602
Explanation: Name &1 not allowed.

SQL0114 SQLCODE -114 SQLSTATE 42961
Explanation: Relational database &1 not the same as current server &2.

SQL0117 SQLCODE -117 SQLSTATE 42802
Explanation: Statement inserts wrong number of values.

SQL0118 SQLCODE -118 SQLSTATE 42902
Explanation: Table &1 in &2 also specified in a FROM clause.

SQL0119 SQLCODE -119 SQLSTATE 42803
Explanation: Column &1 in HAVING clause not in GROUP BY.

SQL0120 SQLCODE -120 SQLSTATE 42903
Explanation: Use of column function &2 not valid.

SQL0121 SQLCODE -121 SQLSTATE 42701
Explanation: Duplicate column name &1 in INSERT or UPDATE.

SQL0122 SQLCODE -122 SQLSTATE 42803
Explanation: Column specified in SELECT list not valid.

SQL0125 SQLCODE -125 SQLSTATE 42805
Explanation: ORDER BY column number &1 not valid.

SQL0128 SQLCODE -128 SQLSTATE 42601
Explanation: Use of NULL is not valid.

SQL0129 SQLCODE -129 SQLSTATE 54004
Explanation: Too many tables in SQL statement.

SQL0130 SQLCODE -130 SQLSTATE 22019, 22025
Explanation: Escape character &1 or LIKE pattern not valid.

SQL0131 SQLCODE -131 SQLSTATE 42818
Explanation: Operands of LIKE not compatible or not valid.

SQL0133 SQLCODE -133 SQLSTATE 42906
Explanation: Operator on correlated column in SQL function not valid.

SQL0134 SQLCODE -134 SQLSTATE 42907
Explanation: Argument of function too long.

SQL0136 SQLCODE -136 SQLSTATE 54005
Explanation: ORDER BY or GROUP BY columns too long.

SQL0137 SQLCODE -137 SQLSTATE 54006
Explanation: Result too long.

SQL0138 SQLCODE -138 SQLSTATE 22011
Explanation: Argument &1 of SUBSTR function not valid.

SQL0144 SQLCODE -144 SQLSTATE 58003
Explanation: Section number not valid.

SQL0145 SQLCODE -145 SQLSTATE 55005
Explanation: Recursion not supported for an application server other than the AS/400 system.

SQL0150 SQLCODE -150 SQLSTATE 42807
Explanation: View or logical file &1 in &2 read-only.

SQL0151 SQLCODE -151 SQLSTATE 42808
Explanation: Column &1 in table &2 in &3 read-only.

SQL0152 SQLCODE -152 SQLSTATE 42809
Explanation: Constraint type not valid for constraint &1 in &2.

SQL0153 SQLCODE -153 SQLSTATE 42908
Explanation: Column list required for CREATE VIEW.

SQL0154 SQLCODE -154 SQLSTATE 42909
Explanation: UNION and UNION ALL for CREATE VIEW not valid.

SQL0159 SQLCODE -159 SQLSTATE 42809
Explanation: &1 in &2 not correct type.

SQL0160 SQLCODE -160 SQLSTATE 42813
Explanation: WITH CHECK OPTION not allowed for view &1 in &2.

SQL0161 SQLCODE -161 SQLSTATE 44000
Explanation: INSERT/UPDATE not allowed due to WITH CHECK OPTION.

SQL0170 SQLCODE -170 SQLSTATE 42605
Explanation: Number of arguments for function &1 not valid.

SQL0171 SQLCODE -171 SQLSTATE 42815
Explanation: Argument &1 of function &2 not valid.

SQL0175 SQLCODE -175 SQLSTATE 58028
Explanation: COMMIT failed.

SQL0180 SQLCODE -180 SQLSTATE 22007
Explanation: Syntax of date, time, or timestamp value not valid.

SQL0181 SQLCODE -181 SQLSTATE 22007
Explanation: Value in date, time, or timestamp string not valid.

SQL0182 SQLCODE -182 SQLSTATE 42816
Explanation: A date, time, or timestamp expression not valid.

SQL0188 SQLCODE -188 SQLSTATE 22503, 28000, 2E000
Explanation: &1 is not a valid string representation of an authorization name or a relational database name.

SQL0189 SQLCODE -189 SQLSTATE 22522
Explanation: Coded Character Set Identifier &1 is not valid.

SQL0190 SQLCODE -190 SQLSTATE 42837
Explanation: Attributes of column &3 in &1 in &2 not compatible.

SQL0191 SQLCODE -191 SQLSTATE 22504
Explanation: MIXED data not properly formed.

SQL0192 SQLCODE -192 SQLSTATE 42937
Explanation: Argument of TRANSLATE function not valid.

SQL0194 SQLCODE -194 SQLSTATE 42848
Explanation: KEEP LOCKS not allowed.

SQL0195 SQLCODE -195 SQLSTATE 42814
Explanation: Last column of &1 in &2 cannot be dropped.

SQL0196 SQLCODE -196 SQLSTATE 42817
Explanation: Column &3 in &1 in &2 cannot be dropped with RESTRICT.

SQL0197 SQLCODE -197 SQLSTATE 42877
Explanation: Column &1 cannot be qualified.

SQL0206 SQLCODE -206 SQLSTATE 42703
Explanation: Column &1 not in specified tables.

SQL0208 SQLCODE -208 SQLSTATE 42707
Explanation: ORDER BY column &1 not in results table.

SQL0212 SQLCODE -212 SQLSTATE 42712
Explanation: Duplicate table designator &1 not valid.

Explanation: Current row deleted or moved for cursor &1.

SQL0227 SQLCODE -227 SQLSTATE 24513
Explanation: FETCH not valid, cursor &1 in unknown position.

SQL0255 SQLCODE -255 SQLSTATE 42999
Explanation: DB2 Multisystem query error.

SQL0256 SQLCODE -256 SQLSTATE 42998
Explanation: Constraint &1 in &2 not allowed on distributed file.

SQL0270 SQLCODE -270 SQLSTATE 42997
Explanation: Unique index not allowed.

Explanation: Host variable &1 not compatible with SELECT item.

SQL0304 SQLCODE -304 SQLSTATE 22003, 22023, 22504
Explanation: Conversion error in assignment to host variable &2.

SQL0330 SQLCODE -330 SQLSTATE 22021
Explanation: Character conversion cannot be performed.

SQL0331 SQLCODE -331 SQLSTATE 22021
Explanation: Character conversion cannot be performed.

SQL0332 SQLCODE -332 SQLSTATE 57017
Explanation: Character conversion between CCSID &1 and CCSID &2 not valid.

SQL0334 SQLCODE -334 SQLSTATE 22524
Explanation: Character conversion has resulted in truncation.

SQL0338 SQLCODE -338 SQLSTATE 42972
Explanation: JOIN expression not valid.

| SQL0357 SQLCODE -357 SQLSTATE 57050
| Explanation: File server &1 used in DataLink not currently available.

| SQL0358 SQLCODE -358 SQLSTATE 428D1
| Explanation: Error &1 occurred using DataLink data type.

| SQL0392 SQLCODE -392 SQLSTATE 42855
| Explanation: Assignment of LOB to specified host variable not allowed.

| SQL0398 SQLCODE -398 SQLSTATE 428D2
| Explanation: AS LOCATOR cannot be specified for a non-LOB parameter.

SQL0401 SQLCODE -401 SQLSTATE 42818
Explanation: Comparison operator &1 operands not compatible.

SQL0404 SQLCODE -404 SQLSTATE 22001
Explanation: Value for column &1 too long.

SQL0405 SQLCODE -405 SQLSTATE 42820
Explanation: Numeric constant &1 out of range.

SQL0406 SQLCODE -406 SQLSTATE 22003, 22023, 22504
Explanation: Conversion error on assignment to column &2.

SQL0415 SQLCODE -415 SQLSTATE 42825
Explanation: UNION operands not compatible.

SQL0417 SQLCODE -417 SQLSTATE 42609
Explanation: Combination of parameter markers not valid.

SQL0418 SQLCODE -418 SQLSTATE 42610
Explanation: Use of parameter marker is not valid.

SQL0419 SQLCODE -419 SQLSTATE 42911
Explanation: Negative scale not valid.

SQL0421 SQLCODE -421 SQLSTATE 42826
Explanation: Number of UNION operands not equal.

| SQL0423 SQLCODE -423 SQLSTATE 0F001
| Explanation: LOB locator &1 not valid.

SQL0428 SQLCODE -428 SQLSTATE 25501
Explanation: SQL statement cannot be run.

| SQL0429 SQLCODE -429 SQLSTATE 54028

SQL0442 SQLCODE -442 SQLSTATE 54023
Explanation: Maximum # of parameters on CALL exceeded.

SQL0443 SQLCODE -443 SQLSTATE 2Fxxx, 38501
Explanation: Trigger program or external procedure detected an error.

SQL0444 SQLCODE -444 SQLSTATE 42724
Explanation: External program &4 in &1 not found.

SQL0448 SQLCODE -448 SQLSTATE 54023
Explanation: Maximum parameters on DECLARE PROCEDURE exceeded.

SQL0449 SQLCODE -449 SQLSTATE 42878
Explanation: External program name for procedure &1 in &2 not valid.

SQL0451 SQLCODE -451 SQLSTATE 42815
Explanation: Attributes of parameter &1 not valid for procedure.

| SQL0452 SQLCODE -452 SQLSTATE 428A1
| Explanation: Unable to access a file that is referred to by a file reference variable.

| SQL0453 SQLCODE -453 SQLSTATE 42880
| Explanation: Return type for function &1 in &2 not compatible with CAST TO type.

SQL0469 SQLCODE -469 SQLSTATE 42886
Explanation: IN, OUT, INOUT not valid for parameter &4 in procedure &1 in &2.

SQL0470 SQLCODE -470 SQLSTATE 39002
Explanation: NULL values not allowed for parameter &4 in procedure.

| SQL0473 SQLCODE -473 SQLSTATE 42918
| Explanation: User-defined type &1 cannot be created.

| SQL0475 SQLCODE -475 SQLSTATE 42866
| Explanation: RETURNS data type for function &3 in &4 not valid.

| SQL0476 SQLCODE -476 SQLSTATE 42725
| Explanation: Function &1 in &2 not unique.

| SQL0478 SQLCODE -478 SQLSTATE 42893

SQL0501 SQLCODE -501 SQLSTATE 24501
Explanation: Cursor &1 not open.

SQL0502 SQLCODE -502 SQLSTATE 24502
Explanation: Cursor &1 already open.

SQL0503 SQLCODE -503 SQLSTATE 42912
Explanation: Column &3 cannot be updated.

SQL0504 SQLCODE -504 SQLSTATE 34000
Explanation: Cursor &1 not declared.

SQL0507 SQLCODE -507 SQLSTATE 24501
Explanation: Cursor &1 not open.

SQL0508 SQLCODE -508 SQLSTATE 24504
Explanation: Cursor &1 not positioned on locked row.

SQL0509 SQLCODE -509 SQLSTATE 42827
Explanation: Table &2 in &3 not same as table in cursor &1.

SQL0510 SQLCODE -510 SQLSTATE 42828
Explanation: Cursor &1 for file &2 is read-only.

SQL0518 SQLCODE -518 SQLSTATE 07003
Explanation: Prepared statement &1 not found.

SQL0519 SQLCODE -519 SQLSTATE 24506
Explanation: Prepared statement &2 in use.

SQL0520 SQLCODE -520 SQLSTATE 42828
Explanation: Cannot UPDATE or DELETE on cursor &1.

SQL0525 SQLCODE -525 SQLSTATE 51015
Explanation: Statement not valid on application server.

SQL0527 SQLCODE -527 SQLSTATE 42874
Explanation: ALWCPYDTA(*NO) specified but temporary result required for &1.

SQL0530 SQLCODE -530 SQLSTATE 23503
Explanation: INSERT or UPDATE value not allowed by referential constraint.

SQL0531 SQLCODE -531 SQLSTATE 23001, 23504
Explanation: Update prevented by referential constraint.

SQL0539 SQLCODE -539 SQLSTATE 42888
Explanation: Table does not have primary key.

SQL0541 SQLCODE -541 SQLSTATE 42891
Explanation: Duplicate UNIQUE constraint already exists.

SQL0543 SQLCODE -543 SQLSTATE 23511
Explanation: Constraint &1 conflicts with SET NULL or SET DEFAULT rule.

SQL0544 SQLCODE -544 SQLSTATE 23512
Explanation: CHECK constraint &1 cannot be added.

SQL0545 SQLCODE -545 SQLSTATE 23513
Explanation: INSERT or UPDATE not allowed by CHECK constraint.

SQL0546 SQLCODE -546 SQLSTATE 42621
Explanation: CHECK condition of constraint &1 not valid.

SQL0551 SQLCODE -551 SQLSTATE 42501
Explanation: Not authorized to object &1 in &2 type *&3.

SQL0552 SQLCODE -552 SQLSTATE 42502
Explanation: Not authorized to &1.

SQL0557 SQLCODE -557 SQLSTATE 42852
Explanation: Privilege not valid for table or view &1 in &2.

SQL0573 SQLCODE -573 SQLSTATE 42890
Explanation: Table does not have matching parent key.

SQL0574 SQLCODE -574 SQLSTATE 42894
Explanation: Default value not valid.

SQL0579 SQLCODE -579 SQLSTATE 38004, 2F004
Explanation: Reading SQL data not permitted.

SQL0580 SQLCODE -580 SQLSTATE 42625
Explanation: At least one result in CASE expression must be not NULL.

SQL0581 SQLCODE -581 SQLSTATE 42804
Explanation: The results in a CASE expression are not compatible.

| SQL0583 SQLCODE -583 SQLSTATE 42845
| Explanation: Use of function &1 in &2 not valid.

| SQL0585 SQLCODE -585 SQLSTATE 42732
| Explanation: Library &1 is used incorrectly on the SET PATH statement.

SQL0590 SQLCODE -590 SQLSTATE 42734
Explanation: Name &1 specified in &2 not unique.

SQL0601 SQLCODE -601 SQLSTATE 42710
Explanation: Object &1 in &2 type *&3 already exists.

SQL0602 SQLCODE -602 SQLSTATE 54008
Explanation: More than 120 columns specified for CREATE INDEX.

SQL0603 SQLCODE -603 SQLSTATE 23515
Explanation: Unique index cannot be created because of duplicate keys.

SQL0604 SQLCODE -604 SQLSTATE 42611
Explanation: Attributes of column not valid.

SQL0607 SQLCODE -607 SQLSTATE 42832
Explanation: Operation not allowed on system table &1 in &2.

SQL0782 SQLCODE -782 SQLSTATE 428D7
Explanation: Condition value &1 specified in handler not valid.

SQL0784 SQLCODE -784 SQLSTATE 42860
Explanation: Check constraint &1 cannot be dropped.

SQL0785 SQLCODE -785 SQLSTATE 428D8
Explanation: Use of SQLCODE or SQLSTATE not valid.

SQL0842 SQLCODE -842 SQLSTATE 08002
Explanation: Connection already exists.

Explanation: Cannot disconnect relational database due to LU 6.2 protected conversation.

SQL0862 SQLCODE -862 SQLSTATE 55029
Explanation: Local program attempted to connect to a remote relational database.

SQL0901 SQLCODE -901 SQLSTATE 58004
Explanation: SQL system error.

SQL0904 SQLCODE -904 SQLSTATE 57011
Explanation: Resource limit exceeded.

SQL0906 SQLCODE -906 SQLSTATE 24514
Explanation: Operation not performed because of previous error.

SQL0907 SQLCODE -907 SQLSTATE 27000
Explanation: Attempt to change same row twice.

SQL0910 SQLCODE -910 SQLSTATE 57007
Explanation: Object &1 in &2 type *&3 has a pending change.

SQL0913 SQLCODE -913 SQLSTATE 57033
Explanation: Row or object &1 in &2 type *&3 in use.

SQL0917 SQLCODE -917 SQLSTATE 42969
Explanation: Package not created.

SQL0918 SQLCODE -918 SQLSTATE 51021
Explanation: Rollback required.

SQL0950 SQLCODE -950 SQLSTATE 42705
Explanation: Relational database &1 not in relational database directory.

SQL0951 SQLCODE -951 SQLSTATE 55007
Explanation: Object &1 in &2 not altered. It is in use.

SQL0952 SQLCODE -952 SQLSTATE 57014
Explanation: Processing of the SQL statement ended by ENDRDBRQS command.

SQL0969 SQLCODE -969 SQLSTATE 58033
Explanation: Unexpected client driver error.

SQL5001 SQLCODE -5001 SQLSTATE 42703
Explanation: Column qualifier &2 undefined.

SQL5002 SQLCODE -5002 SQLSTATE 42812
Explanation: Collection must be specified for table &1.

SQL5003 SQLCODE -5003 SQLSTATE 42922
Explanation: Cannot perform operation under commitment control.

SQL5005 SQLCODE -5005 SQLSTATE 42815
Explanation: Operator &4 not consistent with operands.

SQL5012 SQLCODE -5012 SQLSTATE 42618
Explanation: Host variable not a numeric with zero scale.

SQL5016 SQLCODE -5016 SQLSTATE 42833
Explanation: Object name &1 not valid for naming option.

SQL5021 SQLCODE -5021 SQLSTATE 42930
Explanation: FOR UPDATE OF column &1 also in ORDER BY.

SQL5023 SQLCODE -5023 SQLSTATE 26510
Explanation: Duplicate statement name in DECLARE CURSOR.

SQL5024 SQLCODE -5024 SQLSTATE 42618
Explanation: Host variable &1 not character.

SQL5047 SQLCODE -5047 SQLSTATE 42616
Explanation: Error processing SRTSEQ or LANGID parameter.

SQL5051 SQLCODE -5051 SQLSTATE 42875
Explanation: Incorrect qualifier.

SQL7002 SQLCODE -7002 SQLSTATE 42847
Explanation: Override parameter not valid.

SQL7003 SQLCODE -7003 SQLSTATE 42857
Explanation: File &1 in &2 has more than one format.

SQL7006 SQLCODE -7006 SQLSTATE 55018
Explanation: Cannot drop collection &1.

SQL7007 SQLCODE -7007 SQLSTATE 51009
Explanation: COMMIT or ROLLBACK not valid.

SQL7008 SQLCODE -7008 SQLSTATE 55019
Explanation: &1 in &2 not valid for operation.

SQL7010 SQLCODE -7010 SQLSTATE 42850
Explanation: Logical file &1 in &2 not valid for CREATE VIEW.

SQL7011 SQLCODE -7011 SQLSTATE 42851
Explanation: &1 in &2 not table, view, or physical file.

SQL7017 SQLCODE -7017 SQLSTATE 42971
Explanation: Commitment control is already active to a DDM target.

SQL7018 SQLCODE -7018 SQLSTATE 42970
Explanation: COMMIT HOLD or ROLLBACK HOLD not allowed.

SQL7021 SQLCODE -7021 SQLSTATE 57043
Explanation: Local program attempting to run on application server.

SQL7022 SQLCODE -7022 SQLSTATE 42977
Explanation: User &1 not the same as current user &2 for connect to local relational database.

SQL7024 SQLCODE -7024 SQLSTATE 42876
Explanation: Index cannot be created because of CCSID incompatibility.

SQL7027 SQLCODE -7027 SQLSTATE 42984
Explanation: Unable to grant to a view.

SQL7028 SQLCODE -7028 SQLSTATE 42944
Explanation: Unable to CHGOBJOWN for primary group.

SQL7029 SQLCODE -7029 SQLSTATE 428B8
Explanation: New name &3 is not valid.

SQL7031 SQLCODE -7031 SQLSTATE 54044
Explanation: Sort sequence table &1 too long.

SQL7032 SQLCODE -7032 SQLSTATE 42904
Explanation: SQL procedure &1 in &2 not created.

SQL7033 SQLCODE -7033 SQLSTATE 42923
Explanation: Alias name &1 in &2 not allowed.

| SQL7034 SQLCODE -7034 SQLSTATE 42926
| Explanation: LOB locators are not allowed with COMMIT(*NONE).

| SQL7037 SQLCODE -7037 SQLSTATE 42835
| Explanation: Data in a distributed file &1 in &2 cannot be redistributed.

SQL7941 SQLCODE -7941 SQLSTATE 42981
Explanation: Application process not at commit boundary.

SQL9012 SQLCODE -9012 SQLSTATE 42968
Explanation: DB2 UDB Query Manager and SQL Development Kit not available.

SQ30000 SQLCODE -30000 SQLSTATE 58008
Explanation: Distributed Relational Database Architecture (DRDA) protocol error.

SQ30001 SQLCODE -30001 SQLSTATE 57042
Explanation: Call to distributed SQL program not allowed.

SQ30021 SQLCODE -30021 SQLSTATE 58010
Explanation: Distributed relational database not supported by the remote system.

SQ30040 SQLCODE -30040 SQLSTATE 57012
Explanation: DDM resource &2 at relational database &1 not available.

SQ30041 SQLCODE -30041 SQLSTATE 57013
Explanation: DDM resources at relational database &1 not available.

SQ30050 SQLCODE -30050 SQLSTATE 58011
Explanation: DDM command &1 is not valid while bind process is in progress.

SQ30051 SQLCODE -30051 SQLSTATE 58012
Explanation: Bind process for specified package name and consistency token not active.

SQ30052 SQLCODE -30052 SQLSTATE 42932
Explanation: Program preparation assumptions not correct.

SQ30073 SQLCODE -30073 SQLSTATE 58017
Explanation: Distributed Data Management (DDM) parameter value &1 not supported.

SQ30074 SQLCODE -30074 SQLSTATE 58018
Explanation: Distributed Data Management (DDM) reply message &1 not supported.

SQ30080 SQLCODE -30080 SQLSTATE 08001
Explanation: Communication error occurred during distributed database processing.

SQ30089 SQLCODE -30089 SQLSTATE 08001
Explanation: Communication error occurred during DB2 Multisystem processing.

SQ30090 SQLCODE -30090 SQLSTATE 25000, 2D528, 2D529
Explanation: Change request not valid for read-only application server.
Each sample program produces the same report, which is shown at the end of this
appendix. The first part of the report shows, by project, all employees working on
the project who received a raise. The second part of the report shows the new
salary expense for each project.
Appendix C. Sample Programs Using DB2 UDB for AS/400 Statements 607
5769ST1 V4R4M0 990521 Create SQL ILE C Object CEX 04/01/98 15:52:26 Page 2
Record *...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 SEQNBR Last change
1 #include "string.h" 100
2 #include "stdlib.h" 200
3 #include "stdio.h" 300
4 400
5 main() 500
6 { 600
7 /* A sample program which updates the salaries for those employees */ 700
8 /* whose current commission total is greater than or equal to the */ 800
9 /* value of 'commission'. The salaries of those who qualify are */ 900
10 /* increased by the value of 'percentage' retroactive to 'raise_date'*/ 1000
11 /* A report is generated showing the projects which these employees */ 1100
12 /* have contributed to ordered by project number and employee ID. */ 1200
13 /* A second report shows each project having an end date occurring */ 1300
14 /* after 'raise_date' (is potentially affected by the retroactive */ 1400
15 /* raises) with its total salary expenses and a count of employees */ 1500
16 /* who contributed to the project. */ 1600
17 1700
18 short work_days = 253; /* work days in one year */ 1800
19 float commission = 2000.00; /* cutoff to qualify for raise */ 1900
20 float percentage = 1.04; /* raised salary as percentage */ 2000
21 char raise_date??(12??) = "1982-06-01"; /* effective raise date */ 2100
22 2200
23 /* File declaration for qprint */ 2300
24 FILE *qprint; 2400
25 2500
26 /* Structure for report 1 */ 2600
27 1
#pragma mapinc ("project","CORPDATA/PROJECT(PROJECT)","both","p z") 2700
28 #include "project" 2800
29 struct { 2900
30 CORPDATA_PROJECT_PROJECT_both_t Proj_struct; 3000
31 char empno??(7??); 3100
32 char name??(30??); 3200
33 float salary; 3300
34 } rpt1; 3400
35 3500
36 /* Structure for report 2 */ 3600
37 struct { 3700
38 char projno??(7??); 3800
39 char project_name??(37??); 3900
40 short employee_count; 4000
41 double total_proj_cost; 4100
42 } rpt2; 4200
43 4300
44 2
exec sql include SQLCA; 4400
45 4500
46 qprint=fopen("QPRINT","w"); 4600
47 4700
48 /* Update the selected projects by the new percentage. If an error */ 4800
49 /* occurs during the update, ROLLBACK the changes. */ 4900
50 3
EXEC SQL WHENEVER SQLERROR GO TO update_error; 5000
51 4
EXEC SQL 5100
52 UPDATE CORPDATA/EMPLOYEE 5200
53 SET SALARY = SALARY * :percentage 5300
54 WHERE COMM >= :commission ; 5400
55 5500
56 /* Commit changes */ 5600
57 5
EXEC SQL 5700
58 COMMIT; 5800
59 EXEC SQL WHENEVER SQLERROR GO TO report_error; 5900
60 6000
61 /* Report the updated statistics for each employee assigned to the */ 6100
62 /* selected projects. */ 6200
63 6300
64 /* Write out the header for Report 1 */ 6400
65 fprintf(qprint," REPORT OF PROJECTS AFFECTED \ 6500
5769ST1 V4R4M0 990521 Create SQL ILE C Object CEX 04/01/98 15:52:26 Page 4
Record *...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 SEQNBR Last change
131 fprintf(qprint,"\n%6s %-36s %6d %9.2f", 13100
132 rpt2.projno,rpt2.project_name,rpt2.employee_count, 13200
133 rpt2.total_proj_cost); 13300
134 } 13400
135 while (SQLCODE==0); 13500
136 13600
137 done2: 13700
138 EXEC SQL 13800
139 CLOSE C2; 13900
140 goto finished; 14000
141 14100
142 /* Error occurred while updating table. Inform user and rollback */ 14200
143 /* changes. */ 14300
144 update_error: 14400
145 13
EXEC SQL WHENEVER SQLERROR CONTINUE; 14500
146 fprintf(qprint,"*** ERROR Occurred while updating table. SQLCODE=" 14600
147 "%5d\n",SQLCODE); 14700
148 14
EXEC SQL 14800
149 ROLLBACK; 14900
150 goto finished; 15000
151 15100
152 /* Error occurred while generating reports. Inform user and exit. */ 15200
153 report_error: 15300
154 fprintf(qprint,"*** ERROR Occurred while generating reports. " 15400
155 "SQLCODE=%5d\n",SQLCODE); 15500
156 goto finished; 15600
157 15700
158 /* All done */ 15800
159 finished: 15900
160 fclose(qprint); 16000
161 exit(0); 16100
162 16200
163 } 16300
* * * * * E N D O F S O U R C E * * * * *
5769ST1 V4R4M0 990521 Create SQL ILE C Object CEX 04/01/98 15:52:26 Page 6
CROSS REFERENCE
EMPLOYEE **** TABLE IN CORPDATA
52 74 116
EMPLOYEE **** TABLE
75 118
EMPNO **** COLUMN IN EMP_ACT
72 75 76 118
EMPNO **** COLUMN IN EMPLOYEE
75 118
EMPNO 74 CHARACTER(6) COLUMN (NOT NULL) IN CORPDATA.EMP_ACT
EMPNO 74 CHARACTER(6) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
EMPTIME 74 DECIMAL(5,2) COLUMN IN CORPDATA.EMP_ACT
EMPTIME **** COLUMN
114
EMSTDATE 74 DATE(10) COLUMN IN CORPDATA.EMP_ACT
EMSTDATE **** COLUMN
114
FIRSTNME **** COLUMN
73
FIRSTNME 74 VARCHAR(12) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
HIREDATE 74 DATE(10) COLUMN IN CORPDATA.EMPLOYEE
JOB 74 CHARACTER(8) COLUMN IN CORPDATA.EMPLOYEE
LASTNAME **** COLUMN
73
LASTNAME 74 VARCHAR(15) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
MAJPROJ 27 VARCHAR(6) IN Proj_struct
MAJPROJ 116 CHARACTER(6) COLUMN IN CORPDATA.PROJECT
MIDINIT 74 CHARACTER(1) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
Proj_struct 30 STRUCTURE IN rpt1
PHONENO 74 CHARACTER(4) COLUMN IN CORPDATA.EMPLOYEE
PRENDATE 27 DATE(10) IN Proj_struct
PRENDATE **** COLUMN
119
PRENDATE 116 DATE(10) COLUMN IN CORPDATA.PROJECT
PROJECT **** TABLE IN CORPDATA
116
PROJECT **** TABLE
117
PROJNAME 27 VARCHAR(24) IN Proj_struct
PROJNAME **** COLUMN
113 120
PROJNAME 116 VARCHAR(24) COLUMN (NOT NULL) IN CORPDATA.PROJECT
PROJNO 27 VARCHAR(6) IN Proj_struct
85
PROJNO **** COLUMN
72 76
PROJNO 74 CHARACTER(6) COLUMN (NOT NULL) IN CORPDATA.EMP_ACT
PROJNO **** COLUMN IN EMP_ACT
113 117 120
PROJNO **** COLUMN IN PROJECT
117
PROJNO 116 CHARACTER(6) COLUMN (NOT NULL) IN CORPDATA.PROJECT
PRSTAFF 27 DECIMAL(5,2) IN Proj_struct
PRSTAFF 116 DECIMAL(5,2) COLUMN IN CORPDATA.PROJECT
PRSTDATE 27 DATE(10) IN Proj_struct
PRSTDATE 116 DATE(10) COLUMN IN CORPDATA.PROJECT
RESPEMP 27 VARCHAR(6) IN Proj_struct
RESPEMP 116 CHARACTER(6) COLUMN (NOT NULL) IN CORPDATA.PROJECT
SALARY **** COLUMN
53 53 73 115
SALARY 74 DECIMAL(9,2) COLUMN IN CORPDATA.EMPLOYEE
SEX 74 CHARACTER(1) COLUMN IN CORPDATA.EMPLOYEE
WORKDEPT 74 CHARACTER(3) COLUMN IN CORPDATA.EMPLOYEE
No errors found in source
163 Source records processed
* * * * * E N D O F L I S T I N G * * * * *
5769ST1 V4R4M0 990521 Create SQL COBOL Program CBLEX 04/01/98 11:09:13 Page 1
Source type...............COBOL
Program name..............CORPDATA/CBLEX
Source file...............CORPDATA/SRC
Member....................CBLEX
To source file............QTEMP/QSQLTEMP
Options...................*SRC *XREF
Target release............V4R4M0
INCLUDE file..............*LIBL/*SRCFILE
Commit....................*CHG
Allow copy of data........*YES
Close SQL cursor..........*ENDPGM
Allow blocking............*READ
Delay PREPARE.............*NO
Generation level..........10
Printer file..............*LIBL/QSYSPRT
Date format...............*JOB
Date separator............*JOB
Time format...............*HMS
Time separator ...........*JOB
Replace...................*YES
Relational database.......*LOCAL
User .....................*CURRENT
RDB connect method........*DUW
Default Collection........*NONE
Package name..............*PGMLIB/*PGM
Dynamic User Profile......*USER
User Profile..............*NAMING
Sort Sequence.............*JOB
Language ID...............*JOB
IBM SQL flagging..........*NOFLAG
ANS flagging..............*NONE
Text......................*SRCMBRTXT
Source file CCSID.........65535
Job CCSID.................65535
Source member changed on 07/01/96 09:44:58
Record *...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 SEQNBR Last change
1
2 ****************************************************************
3 * A sample program which updates the salaries for those *
4 * employees whose current commission total is greater than or *
5 * equal to the value of COMMISSION. The salaries of those who *
6 * qualify are increased by the value of PERCENTAGE retroactive *
7 * to RAISE-DATE. A report is generated showing the projects *
8 * which these employees have contributed to ordered by the *
9 * project number and employee ID. A second report shows each *
10 * project having an end date occurring after RAISE-DATE *
11 * (i.e. potentially affected by the retroactive raises ) with *
12 * its total salary expenses and a count of employees who *
13 * contributed to the project. *
14 ****************************************************************
15
16
17 IDENTIFICATION DIVISION.
18
19 PROGRAM-ID. CBLEX.
20 ENVIRONMENT DIVISION.
21 CONFIGURATION SECTION.
22 SOURCE-COMPUTER. IBM-AS400.
23 OBJECT-COMPUTER. IBM-AS400.
24 INPUT-OUTPUT SECTION.
25
26 FILE-CONTROL.
27 SELECT PRINTFILE ASSIGN TO PRINTER-QPRINT
28 ORGANIZATION IS SEQUENTIAL.
29
30 DATA DIVISION.
31
32 FILE SECTION.
33
34 FD PRINTFILE
35 BLOCK CONTAINS 1 RECORDS
36 LABEL RECORDS ARE OMITTED.
37 01 PRINT-RECORD PIC X(132).
38
39 WORKING-STORAGE SECTION.
40 77 WORK-DAYS PIC S9(4) BINARY VALUE 253.
41 77 RAISE-DATE PIC X(11) VALUE "1982-06-01".
42 77 PERCENTAGE PIC S999V99 PACKED-DECIMAL.
43 77 COMMISSION PIC S99999V99 PACKED-DECIMAL VALUE 2000.00.
44
45 ***************************************************************
46 * Structure for report 1. *
47 ***************************************************************
48
49 1 01 RPT1.
50 COPY DDS-PROJECT OF CORPDATA-PROJECT.
51 05 EMPNO PIC X(6).
52 05 NAME PIC X(30).
53 05 SALARY PIC S9(6)V99 PACKED-DECIMAL.
54
55
56 ***************************************************************
57 * Structure for report 2. *
58 ***************************************************************
59
60 01 RPT2.
61 15 PROJNO PIC X(6).
62 15 PROJECT-NAME PIC X(36).
63 15 EMPLOYEE-COUNT PIC S9(4) BINARY.
64 15 TOTAL-PROJ-COST PIC S9(10)V99 PACKED-DECIMAL.
65
131 WHENEVER SQLERROR GO TO E010-UPDATE-ERROR
132 END-EXEC.
133 4 EXEC SQL
134 UPDATE CORPDATA/EMPLOYEE
135 SET SALARY = SALARY * :PERCENTAGE
136 WHERE COMM >= :COMMISSION
137 END-EXEC.
138
139 ***************************************************************
140 * Commit changes. *
141 ***************************************************************
142
143 5 EXEC SQL
144 COMMIT
145 END-EXEC.
146
147 EXEC SQL
148 WHENEVER SQLERROR GO TO E020-REPORT-ERROR
149 END-EXEC.
150
151 ***************************************************************
152 * Report the updated statistics for each employee receiving *
153 * a raise and the projects that s/he participates in *
154 ***************************************************************
155
156 ***************************************************************
157 * Write out the header for Report 1. *
158 ***************************************************************
159
160 write print-record from rpt1-header1
161 before advancing 2 lines.
162 write print-record from rpt1-header2
163 before advancing 1 line.
164 6 exec sql
165 declare c1 cursor for
166 SELECT DISTINCT projno, emp_act.empno,
167 lastname||", "||firstnme ,salary
168 from corpdata/emp_act, corpdata/employee
169 where emp_act.empno =employee.empno and
170 comm >= :commission
171 order by projno, empno
172 end-exec.
173 7 EXEC SQL
174 OPEN C1
175 END-EXEC.
176
177 PERFORM B000-GENERATE-REPORT1 THRU B010-GENERATE-REPORT1-EXIT
178 UNTIL SQLCODE NOT EQUAL TO ZERO.
179
180 10 A100-DONE1.
181 EXEC SQL
182 CLOSE C1
183 END-EXEC.
184
185 *************************************************************
186 * For all projects ending at a date later than the RAISE- *
187 * DATE ( i.e. those projects potentially affected by the *
188 * salary raises ) generate a report containing the *
189 * project number, project name, the count of employees *
190 * participating in the project and the total salary cost *
191 * for the project *
192 *************************************************************
193
194
195 ***************************************************************
261 ***************************************************************
262 * Fetch and write the rows to PRINTFILE. *
263 ***************************************************************
264
265 C000-GENERATE-REPORT2.
266 EXEC SQL
267 WHENEVER NOT FOUND GO TO A200-DONE2
268 END-EXEC.
269 12 EXEC SQL
270 FETCH C2 INTO :RPT2
271 END-EXEC.
272 MOVE CORRESPONDING RPT2 TO RPT2-DATA.
273 WRITE PRINT-RECORD FROM RPT2-DATA
274 BEFORE ADVANCING 1 LINE.
275
276 C010-GENERATE-REPORT2-EXIT.
277 EXIT.
278
279 ***************************************************************
280 * Error occurred while updating table. Inform user and *
281 * rollback changes. *
282 ***************************************************************
283
284 E010-UPDATE-ERROR.
285 13 EXEC SQL
286 WHENEVER SQLERROR CONTINUE
287 END-EXEC.
288 MOVE SQLCODE TO CODE-EDIT.
289 STRING "*** ERROR Occurred while updating table. SQLCODE="
290 CODE-EDIT DELIMITED BY SIZE INTO PRINT-RECORD.
291 WRITE PRINT-RECORD.
292 14 EXEC SQL
293 ROLLBACK
294 END-EXEC.
295 STOP RUN.
296
297 ***************************************************************
298 * Error occurred while generating reports. Inform user and *
299 * exit. *
300 ***************************************************************
301
302 E020-REPORT-ERROR.
303 MOVE SQLCODE TO CODE-EDIT.
304 STRING "*** ERROR Occurred while generating reports. SQLCODE
305 - "=" CODE-EDIT DELIMITED BY SIZE INTO PRINT-RECORD.
306 WRITE PRINT-RECORD.
307 STOP RUN.
* * * * * E N D O F S O U R C E * * * * *
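The retroactive raise applied by both the C and COBOL examples is a single UPDATE: SET SALARY = SALARY * :PERCENTAGE WHERE COMM >= :COMMISSION. A minimal sketch of that logic outside SQL, assuming PERCENTAGE is a multiplier (for example 1.04 for a 4% raise); the employee rows and the multiplier below are invented for illustration, not values from the CORPDATA sample tables:

```python
def apply_raises(employees, percentage, commission):
    """Mirror of the sample programs' UPDATE statement:
    SET SALARY = SALARY * :PERCENTAGE WHERE COMM >= :COMMISSION."""
    for emp in employees:
        if emp["comm"] >= commission:
            emp["salary"] = round(emp["salary"] * percentage, 2)
    return employees

# Hypothetical rows; only the first employee meets the commission threshold
staff = [
    {"empno": "000010", "salary": 52750.00, "comm": 4220.00},
    {"empno": "000020", "salary": 41250.00, "comm": 800.00},
]
staff = apply_raises(staff, 1.04, 2000.00)
```

Only rows meeting the COMM threshold change, just as the WHERE clause restricts the UPDATE.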
CROSS REFERENCE
EMSTDATE **** COLUMN
211
E010-UPDATE-ERROR **** LABEL
131
E020-REPORT-ERROR **** LABEL
148
FIRSTNME 134 VARCHAR(12) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
FIRSTNME **** COLUMN
167
HIREDATE 134 DATE(10) COLUMN IN CORPDATA.EMPLOYEE
JOB 134 CHARACTER(8) COLUMN IN CORPDATA.EMPLOYEE
LASTNAME 134 VARCHAR(15) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
LASTNAME **** COLUMN
167
MAJPROJ 50 CHARACTER(6) IN PROJECT
MAJPROJ 213 CHARACTER(6) COLUMN IN CORPDATA.PROJECT
MIDINIT 134 CHARACTER(1) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
NAME 52 CHARACTER(30) IN RPT1
251
NAME 105 CHARACTER(30) IN RPT1-DATA
PERCENTAGE 42 DECIMAL(5,2)
135
PHONENO 134 CHARACTER(4) COLUMN IN CORPDATA.EMPLOYEE
PRENDATE 50 DATE(10) IN PROJECT
PRENDATE **** COLUMN
217
PRENDATE 213 DATE(10) COLUMN IN CORPDATA.PROJECT
PRINT-RECORD 37 CHARACTER(132)
PROJECT 50 STRUCTURE IN RPT1
PROJECT **** TABLE IN CORPDATA
213
PROJECT **** TABLE
215
PROJECT-NAME 62 CHARACTER(36) IN RPT2
PROJECT-NAME 112 CHARACTER(36) IN RPT2-DATA
PROJNAME 50 VARCHAR(24) IN PROJECT
PROJNAME **** COLUMN
210 218
PROJNAME 213 VARCHAR(24) COLUMN (NOT NULL) IN CORPDATA.PROJECT
PROJNO 50 CHARACTER(6) IN PROJECT
250
PROJNO 61 CHARACTER(6) IN RPT2
PROJNO 101 CHARACTER(6) IN RPT1-DATA
PROJNO 110 CHARACTER(6) IN RPT2-DATA
PROJNO **** COLUMN
5769ST1 V4R4M0 990521 Create SQL PL/I Program PLIEX 04/01/98 12:53:36 Page 1
Source type...............PLI
Program name..............CORPDATA/PLIEX
Source file...............CORPDATA/SRC
Member....................PLIEX
To source file............QTEMP/QSQLTEMP
Options...................*SRC *XREF
Target release............V4R4M0
INCLUDE file..............*LIBL/*SRCFILE
Commit....................*CHG
Allow copy of data........*YES
Close SQL cursor..........*ENDPGM
Allow blocking............*READ
Delay PREPARE.............*NO
Generation level..........10
Margins...................*SRCFILE
Printer file..............*LIBL/QSYSPRT
Date format...............*JOB
Date separator............*JOB
Time format...............*HMS
Time separator ...........*JOB
Replace...................*YES
Relational database.......*LOCAL
User .....................*CURRENT
RDB connect method........*DUW
Default Collection........*NONE
Package name..............*PGMLIB/*PGM
Dynamic User Profile......*USER
User Profile..............*NAMING
Sort Sequence.............*JOB
Language ID...............*JOB
IBM SQL flagging..........*NOFLAG
ANS flagging..............*NONE
Text......................*SRCMBRTXT
Source file CCSID.........65535
Job CCSID.................65535
Source member changed on 07/01/96 12:53:08
Record *...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 SEQNBR Last change
66 put file(sysprint) 6600
67 edit('PROJECT','EMPID','EMPLOYEE NAME','SALARY') 6700
68 (skip(2),col(1),a,col(10),a,col(20),a,col(55),a); 6800
69 6900
70 6 exec sql 7000
71 declare c1 cursor for 7100
72 select DISTINCT projno, EMP_ACT.empno, 7200
73 lastname||', '||firstnme, salary 7300
74 from CORPDATA/EMP_ACT, CORPDATA/EMPLOYEE 7400
75 where EMP_ACT.empno = EMPLOYEE.empno and 7500
76 comm >= :COMMISSION 7600
77 order by projno, empno; 7700
78 7 EXEC SQL 7800
79 OPEN C1; 7900
80 8000
81 /* Fetch and write the rows to SYSPRINT */ 8100
82 8 EXEC SQL WHENEVER NOT FOUND GO TO DONE1; 8200
83 8300
84 DO UNTIL (SQLCODE |= 0); 8400
85 9 EXEC SQL 8500
86 FETCH C1 INTO :RPT1.PROJNO, :rpt1.EMPNO, :RPT1.NAME, 8600
87 :RPT1.SALARY; 8700
88 PUT FILE(SYSPRINT) 8800
89 EDIT(RPT1.PROJNO,RPT1.EMPNO,RPT1.NAME,RPT1.SALARY) 8900
90 (SKIP,COL(1),A,COL(10),A,COL(20),A,COL(54),F(8,2)); 9000
91 END; 9100
92 9200
93 DONE1: 9300
94 10 EXEC SQL 9400
95 CLOSE C1; 9500
96 9600
97 /* For all projects ending at a date later than 'raise_date' */ 9700
98 /* (i.e. those projects potentially affected by the salary raises) */ 9800
99 /* generate a report containing the project number, project name */ 9900
100 /* the count of employees participating in the project and the */ 10000
101 /* total salary cost of the project. */ 10100
102 10200
103 /* Write out the header for Report 2 */ 10300
104 PUT FILE(SYSPRINT) EDIT('ACCUMULATED STATISTICS BY PROJECT') 10400
105 (SKIP(3),COL(22),A); 10500
106 PUT FILE(SYSPRINT) 10600
107 EDIT('PROJECT','NUMBER OF','TOTAL') 10700
108 (SKIP(2),COL(1),A,COL(48),A,COL(63),A); 10800
109 PUT FILE(SYSPRINT) 10900
110 EDIT('NUMBER','PROJECT NAME','EMPLOYEES','COST') 11000
111 (SKIP,COL(1),A,COL(10),A,COL(48),A,COL(63),A,SKIP); 11100
112 11200
113 11 EXEC SQL 11300
114 DECLARE C2 CURSOR FOR 11400
115 SELECT EMP_ACT.PROJNO, PROJNAME, COUNT(*), 11500
116 SUM( (DAYS(EMENDATE) - DAYS(EMSTDATE)) * EMPTIME * 11600
117 DECIMAL(( SALARY / :WORK_DAYS ),8,2) ) 11700
118 FROM CORPDATA/EMP_ACT, CORPDATA/PROJECT, CORPDATA/EMPLOYEE 11800
119 WHERE EMP_ACT.PROJNO=PROJECT.PROJNO AND 11900
120 EMP_ACT.EMPNO =EMPLOYEE.EMPNO AND 12000
121 PRENDATE > :RAISE_DATE 12100
122 GROUP BY EMP_ACT.PROJNO, PROJNAME 12200
123 ORDER BY 1; 12300
124 EXEC SQL 12400
125 OPEN C2; 12500
126 12600
127 /* Fetch and write the rows to SYSPRINT */ 12700
128 EXEC SQL WHENEVER NOT FOUND GO TO DONE2; 12800
129 12900
130 DO UNTIL (SQLCODE |= 0); 13000
CROSS REFERENCE
Data Names Define Reference
ACTNO 74 SMALL INTEGER PRECISION(4,0) COLUMN (NOT NULL) IN CORPDATA.EMP_ACT
BIRTHDATE 74 DATE(10) COLUMN IN CORPDATA.EMPLOYEE
BONUS 74 DECIMAL(9,2) COLUMN IN CORPDATA.EMPLOYEE
COMM **** COLUMN
52 76
COMM 74 DECIMAL(9,2) COLUMN IN CORPDATA.EMPLOYEE
COMMISSION 18 DECIMAL(8,2)
52 76
CORPDATA **** COLLECTION
50 74 74 118 118 118
C1 71 CURSOR
79 86 95
C2 114 CURSOR
125 132 141
DEPTNO 26 CHARACTER(3) IN RPT1
DEPTNO 118 CHARACTER(3) COLUMN (NOT NULL) IN CORPDATA.PROJECT
DONE1 **** LABEL
82
DONE2 **** LABEL
128
EDLEVEL 74 SMALL INTEGER PRECISION(4,0) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
EMENDATE 74 DATE(10) COLUMN IN CORPDATA.EMP_ACT
EMENDATE **** COLUMN
116
EMP_ACT **** TABLE
72 75 115 119 120 122
EMP_ACT **** TABLE IN CORPDATA
74 118
EMPLOYEE **** TABLE IN CORPDATA
50 74 118
EMPLOYEE **** TABLE
75 120
EMPLOYEE_COUNT 35 SMALL INTEGER PRECISION(4,0) IN RPT2
EMPNO 27 CHARACTER(6) IN RPT1
86
EMPNO **** COLUMN IN EMP_ACT
72 75 77 120
EMPNO **** COLUMN IN EMPLOYEE
75 120
EMPNO 74 CHARACTER(6) COLUMN (NOT NULL) IN CORPDATA.EMP_ACT
EMPNO 74 CHARACTER(6) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
EMPTIME 74 DECIMAL(5,2) COLUMN IN CORPDATA.EMP_ACT
EMPTIME **** COLUMN
116
EMSTDATE 74 DATE(10) COLUMN IN CORPDATA.EMP_ACT
EMSTDATE **** COLUMN
116
FIRSTNME **** COLUMN
73
FIRSTNME 74 VARCHAR(12) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
RESPEMP 26 CHARACTER(6) IN RPT1
RESPEMP 118 CHARACTER(6) COLUMN (NOT NULL) IN CORPDATA.PROJECT
RPT1 25 STRUCTURE
RPT2 32 STRUCTURE
132
SALARY 29 DECIMAL(8,2) IN RPT1
87
SALARY **** COLUMN
51 51 73 117
SALARY 74 DECIMAL(9,2) COLUMN IN CORPDATA.EMPLOYEE
SEX 74 CHARACTER(1) COLUMN IN CORPDATA.EMPLOYEE
SYSPRINT 22
TOTL_PROJ_COST 36 DECIMAL(10,2) IN RPT2
UPDATE_ERROR **** LABEL
48
WORK_DAYS 17 SMALL INTEGER PRECISION(4,0)
117
WORKDEPT 74 CHARACTER(3) COLUMN IN CORPDATA.EMPLOYEE
No errors found in source
165 Source records processed
* * * * * E N D O F L I S T I N G * * * * *
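The total-cost expression that cursor C2 uses in every listing — SUM((DAYS(EMENDATE) - DAYS(EMSTDATE)) * EMPTIME * DECIMAL((SALARY / :WORK_DAYS),8,2)) — reads as: days on the activity, times the fraction of time the employee spends on it (EMPTIME), times the employee's daily rate. A sketch of one row's contribution; the dates and salary are made-up values, and modeling DECIMAL(...,8,2) as truncation to two places is an assumption about its behavior:

```python
from datetime import date

WORK_DAYS = 253  # same constant the sample programs use

def activity_cost(emstdate, emendate, emptime, salary):
    """One EMP_ACT row's contribution to a project's salary cost."""
    days_on_activity = (emendate - emstdate).days
    # DECIMAL((SALARY / :WORK_DAYS), 8, 2): daily rate kept to 2 decimal places
    daily_rate = int(salary / WORK_DAYS * 100) / 100
    return days_on_activity * emptime * daily_rate

# Hypothetical activity: half time for the first half of 1982
cost = activity_cost(date(1982, 1, 1), date(1982, 7, 1), 0.50, 25280.00)
```

The GROUP BY in the query then sums this value over every activity row belonging to the same project.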
SQL Statements in RPG for AS/400 Programs
5769ST1 V4R4M0 990521 Create SQL RPG Program RPGEX 04/01/98 12:55:22 Page 1
Source type...............RPG
Program name..............CORPDATA/RPGEX
Source file...............CORPDATA/SRC
Member....................RPGEX
To source file............QTEMP/QSQLTEMP
Options...................*SRC *XREF
Target release............V4R4M0
INCLUDE file..............*LIBL/*SRCFILE
Commit....................*CHG
Allow copy of data........*YES
Close SQL cursor..........*ENDPGM
Allow blocking............*READ
Delay PREPARE.............*NO
Generation level..........10
Printer file..............*LIBL/QSYSPRT
Date format...............*JOB
Date separator............*JOB
Time format...............*HMS
Time separator ...........*JOB
Replace...................*YES
Relational database.......*LOCAL
User .....................*CURRENT
RDB connect method........*DUW
Default Collection........*NONE
Package name..............*PGMLIB/*PGM
Dynamic User Profile......*USER
User Profile...............*NAMING
Sort Sequence.............*JOB
Language ID...............*JOB
IBM SQL flagging..........*NOFLAG
ANS flagging..............*NONE
Text......................*SRCMBRTXT
Source file CCSID.........65535
Job CCSID.................65535
Source member changed on 07/01/96 17:06:17
Figure 42. Sample RPG for AS/400 Program Using SQL Statements (Part 1 of 8)
Figure 42. Sample RPG for AS/400 Program Using SQL Statements (Part 2 of 8)
Record *...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 SEQNBR Last change
66 C* 6500
67 C EXCPTRECA 6600
68 6 C/EXEC SQL DECLARE C1 CURSOR FOR 6700 02/03/93
69 C+ SELECT DISTINCT PROJNO, EMP_ACT.EMPNO, 6800 02/03/93
70 C+ LASTNAME||', '||FIRSTNME, SALARY 6900 02/03/93
71 C+ FROM CORPDATA/EMP_ACT, CORPDATA/EMPLOYEE 7000 02/03/93
72 C+ WHERE EMP_ACT.EMPNO = EMPLOYEE.EMPNO AND 7100 02/03/93
73 C+ COMM >= :COMMI 7200 02/03/93
74 C+ ORDER BY PROJNO, EMPNO 7300 02/03/93
75 C/END-EXEC 7400
76 C* 7500
77 7 C/EXEC SQL 7600
78 C+ OPEN C1 7700
79 C/END-EXEC 7800
80 C* 7900
81 C* Fetch and write the rows to QPRINT. 8000
82 C* 8100
83 8 C/EXEC SQL WHENEVER NOT FOUND GO TO DONE1 8200
84 C/END-EXEC 8300
85 C SQLCOD DOUNE0 8400
86 C/EXEC SQL 8500
87 9 C+ FETCH C1 INTO :PROJNO, :EMPNO, :NAME, :SALARY 8600
88 C/END-EXEC 8700
89 C EXCPTRECB 8800
90 C END 8900
91 C DONE1 TAG 9000
92 C/EXEC SQL 9100
93 10 C+ CLOSE C1 9200
94 C/END-EXEC 9300
95 C* 9400
96 C* For all projects ending at a date later than the raise date 9500
97 C* (i.e. those projects potentially affected by the salary raises) 9600
98 C* generate a report containing the project number, project name, 9700
99 C* the count of employees participating in the project and the 9800
100 C* total salary cost of the project. 9900
101 C* 10000
102 C* Write out the header for report 2. 10100
103 C* 10200
104 C EXCPTRECC 10300
105 11 C/EXEC SQL 10400
106 C+ DECLARE C2 CURSOR FOR 10500
107 C+ SELECT EMP_ACT.PROJNO, PROJNAME, COUNT(*), 10600
108 C+ SUM((DAYS(EMENDATE) - DAYS(EMSTDATE)) * EMPTIME * 10700
109 C+ DECIMAL((SALARY/:WRKDAY),8,2)) 10800
110 C+ FROM CORPDATA/EMP_ACT, CORPDATA/PROJECT, CORPDATA/EMPLOYEE 10900
111 C+ WHERE EMP_ACT.PROJNO = PROJECT.PROJNO AND 11000
112 C+ EMP_ACT.EMPNO = EMPLOYEE.EMPNO AND 11100
113 C+ PRENDATE > :RDATE 11200
114 C+ GROUP BY EMP_ACT.PROJNO, PROJNAME 11300
115 C+ ORDER BY 1 11400
116 C/END-EXEC 11500
117 C* 11600
118 C/EXEC SQL OPEN C2 11700
119 C/END-EXEC 11800
120 C* 11900
121 C* Fetch and write the rows to QPRINT. 12000
122 C* 12100
123 C/EXEC SQL WHENEVER NOT FOUND GO TO DONE2 12200
124 C/END-EXEC 12300
125 C SQLCOD DOUNE0 12400
126 C/EXEC SQL 12500
127 12 C+ FETCH C2 INTO :RPT2 12600
128 C/END-EXEC 12700
129 C EXCPTRECD 12800
130 C END 12900
Figure 42. Sample RPG for AS/400 Program Using SQL Statements (Part 3 of 8)
Figure 42. Sample RPG for AS/400 Program Using SQL Statements (Part 4 of 8)
196 O SQLCODL 67 19600
* * * * * E N D O F S O U R C E * * * * *
Figure 42. Sample RPG for AS/400 Program Using SQL Statements (Part 5 of 8)
CROSS REFERENCE
Data Names Define Reference
ACTNO 68 SMALL INTEGER PRECISION(4,0) COLUMN (NOT NULL) IN CORPDATA.EMP_ACT
BIRTHDATE 48 DATE(10) COLUMN IN CORPDATA.EMPLOYEE
BONUS 48 DECIMAL(9,2) COLUMN IN CORPDATA.EMPLOYEE
COMM **** COLUMN
48 68
COMM 48 DECIMAL(9,2) COLUMN IN CORPDATA.EMPLOYEE
COMMI 31 DECIMAL(7,2)
48 68
CORPDATA **** COLLECTION
48 68 68 105 105 105
C1 68 CURSOR
77 86 92
C2 105 CURSOR
118 126 132
DEPTNO 8 CHARACTER(3) IN RPT1
DEPTNO 105 CHARACTER(3) COLUMN (NOT NULL) IN CORPDATA.PROJECT
DONE1 91 LABEL
83
DONE2 131 LABEL
123
EDLEVEL 48 SMALL INTEGER PRECISION(4,0) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
EMENDATE 68 DATE(10) COLUMN IN CORPDATA.EMP_ACT
EMENDATE **** COLUMN
105
EMP_ACT **** TABLE
68 68 105 105 105 105
EMP_ACT **** TABLE IN CORPDATA
68 105
EMPCNT 26 SMALL INTEGER PRECISION(4,0) IN RPT2
EMPLOYEE **** TABLE IN CORPDATA
48 68 105
EMPLOYEE **** TABLE
68 105
EMPNO 17 CHARACTER(6)
86
EMPNO 48 CHARACTER(6) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
EMPNO **** COLUMN IN EMP_ACT
68 68 68 105
EMPNO **** COLUMN IN EMPLOYEE
68 105
EMPNO 68 CHARACTER(6) COLUMN (NOT NULL) IN CORPDATA.EMP_ACT
EMPTIME 68 DECIMAL(5,2) COLUMN IN CORPDATA.EMP_ACT
EMPTIME **** COLUMN
105
EMSTDATE 68 DATE(10) COLUMN IN CORPDATA.EMP_ACT
EMSTDATE **** COLUMN
105
FINISH 156 LABEL
FIRSTNME 48 VARCHAR(12) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
Figure 42. Sample RPG for AS/400 Program Using SQL Statements (Part 6 of 8)
Figure 42. Sample RPG for AS/400 Program Using SQL Statements (Part 7 of 8)
RESEM 8 CHARACTER(6) IN RPT1
RESPEMP 105 CHARACTER(6) COLUMN (NOT NULL) IN CORPDATA.PROJECT
RPTERR 151 LABEL
59
RPT1 8 STRUCTURE
RPT2 23 STRUCTURE
126
SALARY 19 DECIMAL(9,2)
86
SALARY **** COLUMN
48 48 68 105
SALARY 48 DECIMAL(9,2) COLUMN IN CORPDATA.EMPLOYEE
SEX 48 CHARACTER(1) COLUMN IN CORPDATA.EMPLOYEE
STAFF 8 DECIMAL(5,2) IN RPT1
UPDERR 139 LABEL
45
WORKDEPT 48 CHARACTER(3) COLUMN IN CORPDATA.EMPLOYEE
WRKDAY 30 SMALL INTEGER PRECISION(4,0)
105
No errors found in source
196 Source records processed
* * * * * E N D O F L I S T I N G * * * * *
Figure 42. Sample RPG for AS/400 Program Using SQL Statements (Part 8 of 8)
SQL Statements in ILE RPG for AS/400 Programs
5769ST1 V4R4M0 990521 Create SQL ILE RPG Object RPGLEEX 04/01/98 16:03:02 Page 1
Source type...............RPG
Object name...............CORPDATA/RPGLEEX
Source file...............CORPDATA/SRC
Member....................*OBJ
To source file............QTEMP/QSQLTEMP1
Options...................*XREF
Listing option............*PRINT
Target release............V4R4M0
INCLUDE file..............*LIBL/*SRCFILE
Commit....................*CHG
Allow copy of data........*YES
Close SQL cursor..........*ENDMOD
Allow blocking............*READ
Delay PREPARE.............*NO
Generation level..........10
Printer file..............*LIBL/QSYSPRT
Date format...............*JOB
Date separator............*JOB
Time format...............*HMS
Time separator ...........*JOB
Replace...................*YES
Relational database.......*LOCAL
User .....................*CURRENT
RDB connect method........*DUW
Default Collection........*NONE
Package name..............*OBJLIB/*OBJ
Created object type.......*PGM
Debugging view............*NONE
Dynamic User Profile......*USER
User Profile..............*NAMING
Sort Sequence.............*JOB
Language ID...............*JOB
IBM SQL flagging..........*NOFLAG
ANS flagging..............*NONE
Text......................*SRCMBRTXT
Source file CCSID.........65535
Job CCSID.................65535
Source member changed on 07/01/96 15:55:32
Figure 43. Sample ILE RPG for AS/400 Program Using SQL Statements (Part 1 of 7)
Figure 43. Sample ILE RPG for AS/400 Program Using SQL Statements (Part 2 of 7)
Record *...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 SEQNBR Last change Comments
66 C+ WHERE EMP_ACT.EMPNO = EMPLOYEE.EMPNO AND 6600
67 C+ COMM >= :COMMI 6700
68 C+ ORDER BY PROJNO, EMPNO 6800
69 C/END-EXEC 6900
70 C* 7000
71 7 C/EXEC SQL 7100
72 C+ OPEN C1 7200
73 C/END-EXEC 7300
74 C* 7400
75 C* Fetch and write the rows to QPRINT. 7500
76 C* 7600
77 8 C/EXEC SQL WHENEVER NOT FOUND GO TO DONE1 7700
78 C/END-EXEC 7800
79 C SQLCOD DOUNE 0 7900
80 C/EXEC SQL 8000
81 9 C+ FETCH C1 INTO :PROJNO, :EMPNO, :NAME, :SALARY 8100
82 C/END-EXEC 8200
83 C EXCEPT RECB 8300
84 C END 8400
85 C DONE1 TAG 8500
86 C/EXEC SQL 8600
87 10 C+ CLOSE C1 8700
88 C/END-EXEC 8800
89 C* 8900
90 C* For all projects ending at a date later than the raise date 9000
91 C* (i.e. those projects potentially affected by the salary raises) 9100
92 C* generate a report containing the project number, project name, 9200
93 C* the count of employees participating in the project and the 9300
94 C* total salary cost of the project. 9400
95 C* 9500
96 C* Write out the header for report 2. 9600
97 C* 9700
98 C EXCEPT RECC 9800
99 C/EXEC SQL 9900
100 11 C+ DECLARE C2 CURSOR FOR 10000
101 C+ SELECT EMP_ACT.PROJNO, PROJNAME, COUNT(*), 10100
102 C+ SUM((DAYS(EMENDATE) - DAYS(EMSTDATE)) * EMPTIME * 10200
103 C+ DECIMAL((SALARY/:WRKDAY),8,2)) 10300
104 C+ FROM CORPDATA/EMP_ACT, CORPDATA/PROJECT, CORPDATA/EMPLOYEE 10400
105 C+ WHERE EMP_ACT.PROJNO = PROJECT.PROJNO AND 10500
106 C+ EMP_ACT.EMPNO = EMPLOYEE.EMPNO AND 10600
107 C+ PRENDATE > :RDATE 10700
108 C+ GROUP BY EMP_ACT.PROJNO, PROJNAME 10800
109 C+ ORDER BY 1 10900
110 C/END-EXEC 11000
111 C* 11100
112 C/EXEC SQL OPEN C2 11200
113 C/END-EXEC 11300
114 C* 11400
115 C* Fetch and write the rows to QPRINT. 11500
116 C* 11600
117 C/EXEC SQL WHENEVER NOT FOUND GO TO DONE2 11700
118 C/END-EXEC 11800
119 C SQLCOD DOUNE 0 11900
120 C/EXEC SQL 12000
121 12 C+ FETCH C2 INTO :RPT2 12100
122 C/END-EXEC 12200
123 C EXCEPT RECD 12300
124 C END 12400
125 C DONE2 TAG 12500
126 C/EXEC SQL CLOSE C2 12600
127 C/END-EXEC 12700
128 C RETURN 12800
129 C* 12900
130 C* Error occurred while updating table. Inform user and rollback 13000
Figure 43. Sample ILE RPG for AS/400 Program Using SQL Statements (Part 3 of 7)
Figure 43. Sample ILE RPG for AS/400 Program Using SQL Statements (Part 4 of 7)
CROSS REFERENCE
Data Names Define Reference
ACTNO 62 SMALL INTEGER PRECISION(4,0) COLUMN (NOT NULL) IN CORPDATA.EMP_ACT
BIRTHDATE 42 DATE(10) COLUMN IN CORPDATA.EMPLOYEE
BONUS 42 DECIMAL(9,2) COLUMN IN CORPDATA.EMPLOYEE
COMM **** COLUMN
42 62
COMM 42 DECIMAL(9,2) COLUMN IN CORPDATA.EMPLOYEE
COMMI 25 DECIMAL(7,2)
42 62
CORPDATA **** COLLECTION
42 62 62 99 99 99
C1 62 CURSOR
71 80 86
C2 99 CURSOR
112 120 126
DEPTNO 8 CHARACTER(3) IN RPT1
DEPTNO 99 CHARACTER(3) COLUMN (NOT NULL) IN CORPDATA.PROJECT
DONE1 85
DONE1 **** LABEL
77
DONE2 125
DONE2 **** LABEL
117
EDLEVEL 42 SMALL INTEGER PRECISION(4,0) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
EMENDATE 62 DATE(10) COLUMN IN CORPDATA.EMP_ACT
EMENDATE **** COLUMN
99
EMP_ACT **** TABLE
62 62 99 99 99 99
EMP_ACT **** TABLE IN CORPDATA
62 99
EMPCNT 20 SMALL INTEGER PRECISION(4,0) IN RPT2
EMPLOYEE **** TABLE IN CORPDATA
42 62 99
EMPLOYEE **** TABLE
62 99
EMPNO 11 CHARACTER(6) DBCS-open
80
EMPNO 42 CHARACTER(6) COLUMN (NOT NULL) IN CORPDATA.EMPLOYEE
EMPNO **** COLUMN IN EMP_ACT
62 62 62 99
EMPNO **** COLUMN IN EMPLOYEE
62 99
EMPNO 62 CHARACTER(6) COLUMN (NOT NULL) IN CORPDATA.EMP_ACT
EMPTIME 62 DECIMAL(5,2) COLUMN IN CORPDATA.EMP_ACT
EMPTIME **** COLUMN
99
EMSTDATE 62 DATE(10) COLUMN IN CORPDATA.EMP_ACT
EMSTDATE **** COLUMN
99
FINISH 150
Figure 43. Sample ILE RPG for AS/400 Program Using SQL Statements (Part 5 of 7)
Figure 43. Sample ILE RPG for AS/400 Program Using SQL Statements (Part 6 of 7)
RDATE 26 CHARACTER(10) DBCS-open
99
RESPEMP 8 CHARACTER(6) IN RPT1
RESPEMP 99 CHARACTER(6) COLUMN (NOT NULL) IN CORPDATA.PROJECT
RPTERR 145
RPTERR **** LABEL
53
RPT1 8 STRUCTURE
RPT2 17 STRUCTURE
120
SALARY 13 DECIMAL(9,2)
80
SALARY **** COLUMN
42 42 62 99
SALARY 42 DECIMAL(9,2) COLUMN IN CORPDATA.EMPLOYEE
SEX 42 CHARACTER(1) COLUMN IN CORPDATA.EMPLOYEE
UPDERR 133
UPDERR **** LABEL
39
WORKDEPT 42 CHARACTER(3) COLUMN IN CORPDATA.EMPLOYEE
WRKDAY 24 SMALL INTEGER PRECISION(4,0)
99
No errors found in source
190 Source records processed
* * * * * E N D O F L I S T I N G * * * * *
Figure 43. Sample ILE RPG for AS/400 Program Using SQL Statements (Part 7 of 7)
SQL Statements in REXX Programs
Record *...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8
121
122 /* Go to the common error handler */
123 SIGNAL ON ERROR
124
125 SELECT_STMT = 'SELECT EMP_ACT.PROJNO, PROJNAME, COUNT(*), ',
126 ' SUM( (DAYS(EMENDATE) - DAYS(EMSTDATE)) * EMPTIME * ',
127 ' DECIMAL(( SALARY / ? ),8,2) ) ',
128 'FROM CORPDATA/EMP_ACT, CORPDATA/PROJECT, CORPDATA/EMPLOYEE',
129 'WHERE EMP_ACT.PROJNO = PROJECT.PROJNO AND ',
130 ' EMP_ACT.EMPNO = EMPLOYEE.EMPNO AND ',
131 ' PRENDATE > ? ',
132 'GROUP BY EMP_ACT.PROJNO, PROJNAME ',
133 'ORDER BY 1 '
134 EXECSQL,
135 'PREPARE S3 FROM :SELECT_STMT'
136 11 EXECSQL,
137 'DECLARE C2 CURSOR FOR S3'
138 EXECSQL,
139 'OPEN C2 USING :WORK_DAYS, :RAISE_DATE'
140
141 /* Handle the FETCH errors and warnings inline */
142 SIGNAL OFF ERROR
143
144 /* Fetch all of the rows */
145 DO UNTIL (SQLCODE <> 0)
146 12 EXECSQL,
147 'FETCH C2 INTO :RPT2.PROJNO, :RPT2.PROJNAME, ',
148 ' :RPT2.EMPCOUNT, :RPT2.TOTAL_COST '
149
150 /* Process any errors that may have occurred. Continue so that */
151 /* we close the cursor for any warnings. */
152 IF SQLCODE < 0 THEN
153 SIGNAL ERROR
154
155 /* Stop the loop when we hit the EOF. Don't try to print out the */
156 /* fetched values. */
157 IF SQLCODE = 100 THEN
158 LEAVE
159
160 /* Print out the fetched row */
161 SAY RPT2.PROJNO ' ' RPT2.PROJNAME ' ' ,
162 RPT2.EMPCOUNT ' ' RPT2.TOTAL_COST
163 END;
164
165 EXECSQL,
166 'CLOSE C2'
167
168 /* Delete the OVRDBF so that we will continue writing to the output */
169 /* display. */
170 ADDRESS '*COMMAND',
171 'DLTOVR FILE(STDOUT)'
172
173 /* Leave procedure with a successful or warning RC */
174 EXIT RC
175
176
177 /* Error occurred while updating the table or generating the */
178 /* reports. If the error occurred on the UPDATE, rollback all of */
179 /* the changes. If it occurred on the report generation, display the */
180 /* REXX RC variable and the SQLCODE and exit the procedure. */
181 ERROR:
182
183 13 SIGNAL OFF ERROR
184
185 /* Determine the error location */
186 SELECT
187 /* When the error occurred on the UPDATE statement */
188 WHEN ERRLOC = 'UPDATE_ERROR' THEN
190 DO
191 SAY '*** ERROR Occurred while updating table.',
192 'SQLCODE = ' SQLCODE
193 14 EXECSQL,
194 'ROLLBACK'
195 END
MA2113 W L PROD CONT PROGS 5 71509.11
OP1000 OPERATION SUPPORT 1 16348.86
OP1010 OPERATION 5 167828.76
OP2010 SYSTEMS SUPPORT 2 91612.62
OP2011 SCP SYSTEMS SUPPORT 2 31224.60
OP2012 APPLICATIONS SUPPORT 2 41294.88
OP2013 DB/DC SUPPORT 2 37311.12
PL2100 WELD LINE PLANNING 1 43576.92
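The PREPARE, OPEN ... USING, and FETCH-until-SQLCODE-100 pattern in the REXX listing above carries over to most dynamic SQL interfaces. The following is an illustrative sketch only, using Python's built-in sqlite3 module rather than DB2 UDB for AS/400; the table, column names, and values are hypothetical stand-ins for the sample's EMP_ACT data, and sqlite3's ? markers play the role of the parameter markers bound by OPEN ... USING.

```python
import sqlite3

# Hypothetical stand-in data for the CORPDATA/EMP_ACT sample table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp_act (projno TEXT, cost REAL)")
conn.executemany("INSERT INTO emp_act VALUES (?, ?)",
                 [("OP1000", 100.0), ("OP1000", 50.0), ("PL2100", 75.0)])

# Build the statement text at run time, as the REXX sample builds SELECT_STMT;
# the ? marker is bound at open time, like OPEN C2 USING :WORK_DAYS.
select_stmt = ("SELECT projno, COUNT(*), SUM(cost) FROM emp_act "
               "WHERE cost > ? GROUP BY projno ORDER BY 1")

cur = conn.execute(select_stmt, (40.0,))   # PREPARE + OPEN ... USING
rows = []
while True:
    row = cur.fetchone()                   # FETCH C2 INTO ...
    if row is None:                        # analogous to SQLCODE = 100 (end of data)
        break
    rows.append(row)
cur.close()                                # CLOSE C2
```

As in the REXX sample, end-of-data is detected after the fetch rather than before it, so the loop body must not process the row once the end condition is seen.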
Syntax summary (for each parameter, the first value listed is the default):

CRTSQLCBL PGM( *CURLIB/ | library-name/ program-name )
          SRCFILE( *LIBL/ | *CURLIB/ | library-name/ QLBLSRC | source-file-name ) (1)
          SRCMBR( *PGM | source-file-member-name )
          OPTION( OPTION Details )
          TGTRLS( *CURRENT | *PRV | VxRxMx )
          INCFILE( *LIBL/ | *CURLIB/ | library-name/ *SRCFILE | source-file-name )
          COMMIT( *CHG | *UR | *ALL | *RS | *CS | *NONE | *NC | *RR )
          CLOSQLCSR( *ENDPGM | *ENDSQL | *ENDJOB )
          ALWCPYDTA( *OPTIMIZE | *YES | *NO )
          ALWBLK( *ALLREAD | *NONE | *READ )
          DATFMT( *JOB | *USA | *ISO | *EUR | *JIS | *MDY | *DMY | *YMD | *JUL )
          DATSEP( *JOB | '/' | '.' | ',' | '-' | ' ' | *BLANK )
          TIMFMT( *HMS | *USA | *ISO | *EUR | *JIS )
          TIMSEP( *JOB | ':' | '.' | ',' | ' ' | *BLANK )
          REPLACE( *YES | *NO )
          RDB( *LOCAL | relational-database-name | *NONE )
          USER( *CURRENT | user-name )
          PASSWORD( *NONE | password )
          RDBCNNMTH( *DUW | *RUW )
          DFTRDBCOL( *NONE | collection-name )
          DYNDFTCOL( *NO | *YES )
          SQLPKG( *PGMLIB/ | library-name/ *PGM | package-name )
          SQLPATH( *NAMING | *LIBL | collection-name... )
          SAAFLAG( *NOFLAG | *FLAG )
          FLAGSTD( *NONE | *ANS )
          PRTFILE( *LIBL/ | *CURLIB/ | library-name/ QSYSPRT | printer-file-name )
          SRTSEQ( *JOB | *JOBRUN | *LANGIDUNQ | *LANGIDSHR | *HEX |
                  *LIBL/ | *CURLIB/ | library-name/ table-name )
          LANGID( *JOB | *JOBRUN | language-ID )
          USRPRF( *NAMING | *OWNER | *USER )
          DYNUSRPRF( *USER | *OWNER )
          TOSRCFILE( QTEMP/ | *LIBL/ | *CURLIB/ | library-name/ QSQLTEMP | source-file-name )
          TEXT( *SRCMBRTXT | *BLANK | 'description' )

OPTION Details:
          *NOSOURCE | *NOSRC | *SOURCE | *SRC
          *NOXREF | *XREF
          *GEN | *NOGEN
          *JOB | *PERIOD | *SYSVAL | *COMMA
          *QUOTESQL | *APOSTSQL

Notes:
1. All parameters preceding this point can be specified in positional form.
Purpose
The Create Structured Query Language COBOL (CRTSQLCBL) command calls the
Structured Query Language (SQL) precompiler, which precompiles COBOL source
containing SQL statements, produces a temporary source member, and then
optionally calls the COBOL compiler to compile the program.
Parameters
PGM
Specifies the qualified name of the compiled program.
The name of the compiled COBOL program can be qualified by one of the
following library values:
*CURLIB: The compiled COBOL program is created in the current library for
the job. If no library is specified as the current library for the job, the QGPL
library is used.
library-name: Specify the name of the library where the compiled COBOL
program is created.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
*PGM: Specifies that the COBOL source is in the member of the source file that
has the same name as that specified on the PGM parameter.
*GEN: The compiler creates a program that can run after the program is
compiled. An SQL package object is created if a relational database name is
specified on the RDB parameter.
*NOGEN: The precompiler does not call the COBOL compiler, and a program
and SQL package are not created.
*JOB: The value used as the decimal point for numeric constants in SQL is the
representation of decimal point specified for the job at precompile time.
*SYSVAL: The value used as the decimal point for numeric constants in SQL
statements is the QDECFMT system value.
Note: If QDECFMT specifies that the value used as the decimal point is a
comma, any numeric constants in lists (such as in the SELECT clause or
the VALUES clause) must be separated by a comma followed by a blank.
*PERIOD: The value used as the decimal point for numeric constants in SQL
statements is a period.
*COMMA: The value used as the decimal point for numeric constants in SQL
statements is a comma.
Note: Any numeric constants in lists (such as in the SELECT clause or the
VALUES clause) must be separated by a comma followed by a blank.
For example, VALUES(1,1, 2,23, 4,1) is equivalent to
VALUES(1.1,2.23,4.1) where the decimal point is a period.
*QUOTESQL: A double quote (") is the string delimiter in the SQL statements.
*QUOTE: A double quote (") is used for non-numeric literals and Boolean
literals in the COBOL statements.
*APOST: An apostrophe (') is used for non-numeric literals and Boolean literals
in the COBOL statements.
*SECLVL: Second-level text with replacement data is added for all messages
on the listing.
*LSTDBG: The SQL precompiler generates a listing view, and error and debug
information required for this view. You can use *LSTDBG only if you are using
the CODE/400 product to compile your program.
TGTRLS
Specifies the release of the operating system on which the user intends to use
the object being created.
release-level: Specify the release in the format VxRxMx. The object can be
used on a system with the specified release or with any subsequent release of
the operating system installed.
Valid values depend on the current version, release, and modification level, and
they change with each new release. If you specify a release-level which is
earlier than the earliest release level supported by this command, an error
message is sent indicating the earliest supported release.
INCFILE
Specifies the qualified name of the source file that contains members included
in the program with any SQL INCLUDE statement.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
source-file-name: Specify the name of the source file that contains the source
file member(s) specified on any SQL INCLUDE statement. The record length of
the source file specified here must be no less than the record length of the
source file specified for the SRCFILE parameter.
COMMIT
Specifies whether SQL statements in the compiled program are run under
commitment control. Files referred to in the host language source are not
affected by this option. Only SQL tables, SQL views, and SQL packages
referred to in SQL statements are affected.
Note: Files referenced in the COBOL source are not affected by this option.
*CS: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows updated, deleted, and inserted are locked until the end of the unit of
work (transaction). A row that is selected, but not updated, is locked until the
next row is selected. Uncommitted changes in other jobs cannot be seen.
*RR: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows selected, updated, deleted, and inserted are locked until the end of the
unit of work (transaction). Uncommitted changes in other jobs cannot be seen.
All tables referred to in SELECT, UPDATE, DELETE, and INSERT statements
are locked exclusively until the end of the unit of work (transaction).
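The unit-of-work behavior that the COMMIT parameter controls can be sketched with any interface that supports commitment control. As a minimal illustration only, using Python's built-in sqlite3 module rather than DB2 UDB for AS/400 (the table and values are hypothetical): an uncommitted change belongs to the current unit of work, and a ROLLBACK, like the one in the REXX sample's error handler, discards it.

```python
import sqlite3

# Hypothetical employee row; sqlite3 opens an implicit transaction on the
# first data-change statement, roughly like running under commitment control.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (empno TEXT, salary REAL)")
conn.execute("INSERT INTO employee VALUES ('000010', 52750.0)")
conn.commit()                                   # end the unit of work: change is permanent

conn.execute("UPDATE employee SET salary = salary * 1.10")  # uncommitted change
conn.rollback()                                 # like EXECSQL 'ROLLBACK' in the error handler

# The rolled-back update is gone; the committed value remains.
salary = conn.execute("SELECT salary FROM employee").fetchone()[0]
```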
CLOSQLCSR
Specifies when SQL cursors are implicitly closed, SQL prepared statements are
implicitly discarded, and LOCK TABLE locks are released. SQL cursors are
explicitly closed when you issue the CLOSE, COMMIT, or ROLLBACK (without
HOLD) SQL statements.
*ENDPGM: SQL cursors are closed and SQL prepared statements are
discarded when the program ends. LOCK TABLE locks are released when the
first SQL program on the call stack ends.
*ENDSQL: SQL cursors remain open between calls and can be fetched without
running another SQL OPEN. One of the programs higher on the call stack must
have run at least one SQL statement. SQL cursors are closed, SQL prepared
statements are discarded, and LOCK TABLE locks are released when the first
SQL program on the call stack ends. If *ENDSQL is specified for a program that
is the first SQL program called (the first SQL program on the call stack), the
program is treated as if *ENDPGM was specified.
*ENDJOB: SQL cursors remain open between calls and can be fetched without
running another SQL OPEN. The programs higher on the call stack do not need
to have run SQL statements. SQL cursors are left open, SQL prepared
statements are preserved, and LOCK TABLE locks are held when the first SQL
program on the call stack ends. SQL cursors are closed, SQL prepared
statements are discarded, and LOCK TABLE locks are released when the job
ends.
| *OPTIMIZE: The system determines whether to use the data retrieved directly
| from the database or to use a copy of the data. The decision is based on which
| method provides the best performance. If COMMIT is *CHG or *CS and
| ALWBLK is not *ALLREAD, or if COMMIT is *ALL or *RR, then a copy of the
| data is used only when it is necessary to run a query.
| *NO: A copy of the data is not allowed. If a temporary copy of the data is
| required to perform the query, an error message is returned.
| ALWBLK
| Specifies whether the database manager can use record blocking, and the
| extent to which blocking can be used for read-only cursors.
| Specifying *ALLREAD:
| v Allows record blocking under commitment control level *CHG in addition to
| the blocking allowed for *READ.
| v Can improve the performance of almost all read-only cursors in programs,
| but limits queries in the following ways:
| – The Rollback (ROLLBACK) command, a ROLLBACK statement in host
| languages, or the ROLLBACK HOLD SQL statement does not reposition a
| read-only cursor when *ALLREAD is specified.
| – Dynamic running of a positioned UPDATE or DELETE statement (for
| example, using EXECUTE IMMEDIATE), cannot be used to update a row
| in a cursor unless the DECLARE statement for the cursor includes the
| FOR UPDATE clause.
| *NONE: Rows are not blocked for retrieval of data for cursors.
| Specifying *NONE:
| v Guarantees that the data retrieved is current.
| v May reduce the amount of time required to retrieve the first row of data for a
| query.
| v Stops the database manager from retrieving a block of data rows that is not
| used by the program when only the first few rows of a query are retrieved
| before the query is closed.
| v Can degrade the overall performance of a query that retrieves a large
| number of rows.
| *READ: Records are blocked for read-only retrieval of data for cursors when:
| v *NONE is specified on the COMMIT parameter, which indicates that
| commitment control is not used.
| v The cursor is declared with a FOR READ ONLY clause or there are no
| dynamic statements that could run a positioned UPDATE or DELETE
| statement for the cursor.
Note: If you specify *YES, performance is not improved if the INTO clause is
used on the PREPARE statement or if a DESCRIBE statement uses the
dynamic statement before an OPEN is issued for the statement.
GENLVL
Specifies the severity level at which the create operation fails. If errors occur
that have a severity level greater than or equal to this value, the operation
ends.
Note: An input date string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
*JOB: The format specified for the job is used. Use the Display Job (DSPJOB)
command to determine the current date format for the job.
Note: This parameter applies only when *JOB, *MDY, *DMY, *YMD, or *JUL is
specified on the DATFMT parameter.
*JOB: The date separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
' ': A blank ( ) is used.
Note: An input time string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
*USA: The United States time format (hh:mm xx) is used, where xx is AM or
PM.
Note: This parameter applies only when *HMS is specified on the TIMFMT
parameter.
*JOB: The time separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
' ': A blank ( ) is used.
*YES: A new program or SQL package is created, and any existing program or
SQL package of the same name and type in the specified library is moved to
QRPLOBJ.
*NO: A new program or SQL package is not created if an object of the same
name and type already exists in the specified library.
RDB
Specifies the name of the relational database where the SQL package object is
created.
*NONE: An SQL package object is not created. The program object is not a
distributed program and the Create Structured Query Language Package
(CRTSQLPKG) command cannot be used.
USER
Specifies the user name sent to the remote system when starting the
conversation. This parameter is valid only when RDB is specified.
*CURRENT: The user profile under which the current job is running is used.
password: Specify the password of the user name specified on the USER
parameter.
RDBCNNMTH
Specifies the semantics used for CONNECT statements. Refer to the SQL
Reference book for more information.
*RUW: CONNECT (Type 1) semantics are used to support remote unit of work.
Consecutive CONNECT statements result in the previous connection being
disconnected before a new connection is established.
DFTRDBCOL
Specifies the collection name used for the unqualified names of tables, views,
indexes, and SQL packages. This parameter applies only to static SQL
statements.
collection-name: Specify the name of the collection identifier. This value is used
instead of the naming convention specified on the OPTION parameter.
| DYNDFTCOL
| Specifies whether the default collection name specified for the DFTRDBCOL
| parameter is also used for dynamic statements.
| *NO: Do not use the value specified on the DFTRDBCOL parameter for
| unqualified names of tables, views, indexes, and SQL packages for dynamic
| SQL statements. The naming convention specified on the OPTION parameter is
| used.
| *NAMING: The path used depends on the naming convention specified on the
| OPTION parameter.
| For *SYS naming, the path used is *LIBL, the current library list at runtime.
| For *SQL naming, the path used is "QSYS", "QSYS2", "userid", where "userid"
| is the value of the USER special register. If a collection-name is specified on
| the DFTRDBCOL parameter, the collection-name takes the place of userid.
*NOFLAG: The precompiler does not check to see whether SQL statements
conform to IBM SQL syntax.
*NONE: The precompiler does not check to see whether SQL statements
conform to ANSI standards.
The name of the printer file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
printer-file-name: Specify the name of the printer device file to which the
precompiler printout is directed.
SRTSEQ
Specifies the sort sequence table to be used for string comparisons in SQL
statements.
*JOB: The SRTSEQ value for the job is retrieved during the precompile.
*JOBRUN: The SRTSEQ value for the job is retrieved when the program is run.
For distributed applications, SRTSEQ(*JOBRUN) is valid only when
LANGID(*JOBRUN) is also specified.
*LANGIDUNQ: The unique-weight sort table for the language specified on the
LANGID parameter is used.
*LANGIDSHR: The shared-weight sort table for the language specified on the
LANGID parameter is used.
*HEX: A sort sequence table is not used. The hexadecimal values of the
characters are used to determine the sort sequence.
The name of the sort sequence table can be qualified by one of the following
library values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
*JOB: The LANGID value for the job is retrieved during the precompile.
*JOBRUN: The LANGID value for the job is retrieved when the program is run.
For distributed applications, LANGID(*JOBRUN) is valid only when
SRTSEQ(*JOBRUN) is also specified.
*USER: The profile of the user running the program object is used.
*OWNER: The user profiles of both the program owner and the program user
are used when the program is run.
DYNUSRPRF
Specifies the user profile used for dynamic SQL statements.
*USER: Local dynamic SQL statements are run under the user profile of the
job. Distributed dynamic SQL statements are run under the user profile of the
application server job.
*OWNER: Local dynamic SQL statements are run under the user profile of the
program’s owner. Distributed dynamic SQL statements are run under the user
profile of the SQL package’s owner.
| TOSRCFILE
| Specifies the qualified name of the source file that is to contain the output
| source member that has been processed by the SQL precompiler. If the
| specified source file is not found, it will be created. The output member will
| have the same name as the name that is specified for the SRCMBR parameter.
| source-file-name: Specify the name of the source file to contain the output
| source member.
TEXT
Specifies the text that briefly describes the program and its function. More
information on this parameter is in Appendix A, "Expanded Parameter
Descriptions" in the CL Reference (Abridged) book.
*SRCMBRTXT: The text is taken from the source file member being used to
create the COBOL program. Text for a database source member can be added
or changed by using the Start Source Entry Utility (STRSEU) command, or by
using either the Add Physical File Member (ADDPFM) or Change Physical File
Member (CHGPFM) command. If the source file is an inline file or a device file,
the text is blank.
Example
CRTSQLCBL PGM(ACCTS/STATS) SRCFILE(ACCTS/ACTIVE)
TEXT('Statistical analysis program for
active accounts')
This command runs the SQL precompiler which precompiles the source and stores
the changed source in the member STATS in file QSQLTEMP in library QTEMP.
The COBOL compiler is called to create program STATS in library ACCTS using the
source member created by the SQL precompiler.
Syntax summary (for each parameter, the first value listed is the default):

CRTSQLCBLI OBJ( *CURLIB/ | library-name/ object-name )
           SRCFILE( *LIBL/ | *CURLIB/ | library-name/ QCBLLESRC | source-file-name ) (1)
           SRCMBR( *OBJ | source-file-member-name )
           OPTION( OPTION Details )
           TGTRLS( *CURRENT | *PRV | VxRxMx )
           OBJTYPE( *PGM | *MODULE | *SRVPGM )
           INCFILE( *LIBL/ | *CURLIB/ | library-name/ *SRCFILE | source-file-name )
           ALWCPYDTA( *OPTIMIZE | *YES | *NO )
           ALWBLK( *ALLREAD | *NONE | *READ )
           DLYPRP( *NO | *YES )
           GENLVL( 10 | severity-level )
           DATFMT( *JOB | *USA | *ISO | *EUR | *JIS | *MDY | *DMY | *YMD | *JUL )
           DATSEP( *JOB | '/' | '.' | ',' | '-' | ' ' | *BLANK )
           TIMFMT( *HMS | *USA | *ISO | *EUR | *JIS )
           TIMSEP( *JOB | ':' | '.' | ',' | ' ' | *BLANK )
           REPLACE( *YES | *NO )
           RDB( *LOCAL | relational-database-name | *NONE )
           USER( *CURRENT | user-name )
           PASSWORD( *NONE | password )
           RDBCNNMTH( *DUW | *RUW )
           DFTRDBCOL( *NONE | collection-name )
           SQLPKG( *OBJLIB/ | library-name/ *OBJ | package-name )
           SQLPATH( *NAMING | *LIBL | collection-name... )
           SAAFLAG( *NOFLAG | *FLAG )
           FLAGSTD( *NONE | *ANS )
           DBGVIEW( *NONE | *SOURCE )
           USRPRF( *NAMING | *OWNER | *USER )
           DYNUSRPRF( *USER | *OWNER )
           SRTSEQ( *JOB | *JOBRUN | *LANGIDUNQ | *LANGIDSHR | *HEX |
                   *LIBL/ | *CURLIB/ | library-name/ table-name )
           LANGID( *JOB | *JOBRUN | language-identifier )
           OUTPUT( *NONE | *PRINT )
           PRTFILE( *LIBL/ | *CURLIB/ | library-name/ QSYSPRT | printer-file-name )
           TEXT( *SRCMBRTXT | *BLANK | 'description' )

OPTION Details:
Notes:
1. All parameters preceding this point can be specified in positional form.
Purpose
The Create Structured Query Language ILE COBOL Object (CRTSQLCBLI)
command calls the Structured Query Language (SQL) precompiler which
precompiles COBOL source containing SQL statements, produces a temporary
source member, and then optionally calls the ILE COBOL compiler to create a
module, a program, or a service program.
Parameters
OBJ
Specifies the qualified name of the object being created.
*CURLIB: The new object is created in the current library for the job. If no
library is specified as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library where the object is created.
The name of the source file can be qualified by one of the following library
values:
QCBLLESRC: If the source file name is not specified, the source file
QCBLLESRC contains the COBOL source.
source-file-name: Specify the name of the source file that contains the COBOL
source.
SRCMBR
Specifies the name of the source file member that contains the COBOL source.
This parameter is specified only if the source file name in the SRCFILE
parameter is a database file. If this parameter is not specified, the name
specified on the OBJ parameter is used.
*OBJ: Specifies that the COBOL source is in the member of the source file that
has the same name as that specified on the OBJ parameter.
*GEN: The precompiler creates the object that is specified by the OBJTYPE
parameter.
*NOGEN: The precompiler does not call the ILE COBOL compiler, and a
module, program, service program, or SQL package are not created.
*JOB: The value used as the decimal point for numeric constants in SQL is the
representation of decimal point specified for the job at precompile time.
*SYSVAL: The value used as the decimal point for numeric constants in SQL
statements is the QDECFMT system value.
*PERIOD: The value used as the decimal point for numeric constants in SQL
statements is a period (.).
*COMMA: The value used as the decimal point for numeric constants in SQL
statements is a comma (,).
Note: Any numeric constants in lists (such as in the SELECT clause or the
VALUES clause) must be separated by a comma (,) followed by a blank
( ). For example, VALUES(1,1, 2,23, 4,1) is equivalent to
VALUES(1.1,2.23,4.1) where the decimal point is a period (.).
*SECLVL: Second-level text with replacement data is added for all messages
on the listing.
*QUOTESQL: A double quote (") is the string delimiter in the SQL statements.
*QUOTE: A double quote (") is used for non-numeric literals and Boolean
literals in the COBOL statements.
*APOST: An apostrophe (') is used for non-numeric literals and Boolean
literals in the COBOL statements.
*NOEVENTF: The compiler will not produce an event file for use by
CoOperative Development Environment/400 (CODE/400).
If the first FETCH uses a LOB locator to access a LOB column, no subsequent
FETCH for that cursor can fetch that LOB column into a LOB host variable.
If the first FETCH places the LOB column into a LOB host variable, no
subsequent FETCH for that cursor can use a LOB locator for that column.
In the examples given for the *CURRENT and *PRV values, and when
specifying the release-level value, the format VxRxMx is used to specify the
release, where Vx is the version, Rx is the release, and Mx is the modification
level. For example, V2R3M0 is version 2, release 3, modification level 0.
release-level: Specify the release in the format VxRxMx. The object can be
used on a system with the specified release or with any subsequent release of
the operating system installed.
Valid values depend on the current version, release, and modification level, and
they change with each new release. If you specify a release-level which is
earlier than the earliest release level supported by this command, an error
message is sent indicating the earliest supported release.
OBJTYPE
Specifies the type of object being created.
*PGM: The SQL precompiler issues the CRTBNDCBL command to create the
bound program.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
source-file-name: Specify the name of the source file that contains the source
file members specified on any SQL INCLUDE statement. The record length of
the source file specified here must be no less than the record length of the
source file specified on the SRCFILE parameter.
COMMIT
Specifies whether SQL statements in the compiled unit are run under
commitment control. Files referred to in the host language source are not
affected by this option. Only SQL tables, SQL views, and SQL packages
referred to in SQL statements are affected.
*CS: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows updated, deleted, and inserted are locked until the end of the unit of
work (transaction). A row that is selected, but not updated, is locked until the
next row is selected. Uncommitted changes in other jobs cannot be seen.
*RR: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows selected, updated, deleted, and inserted are locked until the end of the
unit of work (transaction). Uncommitted changes in other jobs cannot be seen.
All tables referred to in SELECT, UPDATE, DELETE, and INSERT statements
are locked exclusively until the end of the unit of work (transaction).
CLOSQLCSR
Specifies when SQL cursors are implicitly closed, SQL prepared statements are
implicitly discarded, and LOCK TABLE locks are released. SQL cursors are
explicitly closed when you issue the CLOSE, COMMIT, or ROLLBACK (without
HOLD) SQL statements.
*ENDACTGRP: SQL cursors are closed, SQL prepared statements are implicitly
discarded, and LOCK TABLE locks are released when the activation group
ends.
*ENDMOD: SQL cursors are closed and SQL prepared statements are implicitly
discarded when the module is exited. LOCK TABLE locks are released when
the activation group ends.
| ALWCPYDTA
| Specifies whether a copy of the data can be used in a SELECT statement.
| *OPTIMIZE: The system determines whether to use the data retrieved directly
| from the database or to use a copy of the data. The decision is based on which
| method provides the best performance. If COMMIT is *CHG or *CS and
| ALWBLK is not *ALLREAD, or if COMMIT is *ALL or *RR, then a copy of the
| data is used only when it is necessary to run a query.
| *NO: A copy of the data is not allowed. If a temporary copy of the data is
| required to perform the query, an error message is returned.
| ALWBLK
| Specifies whether the database manager can use record blocking, and the
| extent to which blocking can be used for read-only cursors.
| Specifying *ALLREAD:
| v Allows record blocking under commitment control level *CHG in addition to
| the blocking allowed for *READ.
| *NONE: Rows are not blocked for retrieval of data for cursors.
| Specifying *NONE:
| v Guarantees that the data retrieved is current.
| v May reduce the amount of time required to retrieve the first row of data for a
| query.
| v Stops the database manager from retrieving a block of data rows that is not
| used by the program when only the first few rows of a query are retrieved
| before the query is closed.
| v Can degrade the overall performance of a query that retrieves a large
| number of rows.
| *READ: Records are blocked for read-only retrieval of data for cursors when:
| v *NONE is specified on the COMMIT parameter, which indicates that
| commitment control is not used.
| v The cursor is declared with a FOR READ ONLY clause or there are no
| dynamic statements that could run a positioned UPDATE or DELETE
| statement for the cursor.
| Specifying *READ can improve the overall performance of queries that meet the
| above conditions and retrieve a large number of records.
DLYPRP
Specifies whether the dynamic statement validation for a PREPARE statement
is delayed until an OPEN, EXECUTE, or DESCRIBE statement is run. Delaying
validation improves performance by eliminating redundant validation.
Note: An input date string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
*JOB: The format specified for the job is used. Use the Display Job (DSPJOB)
command to determine the current date format for the job.
Note: This parameter applies only when *JOB, *MDY, *DMY, *YMD, or *JUL is
specified on the DATFMT parameter.
*JOB: The date separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
' ': A blank ( ) is used.
Note: An input time string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
Note: This parameter applies only when *HMS is specified on the TIMFMT
parameter.
*JOB: The time separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
' ': A blank ( ) is used.
*NO: A new SQL module, program, service program, or package is not created
if an SQL object of the same name and type already exists in the specified
library.
RDB
Specifies the name of the relational database where the SQL package object is
created.
*NONE: An SQL package object is not created. The program object is not a
distributed program and the Create Structured Query Language Package
(CRTSQLPKG) command cannot be used.
USER
Specifies the user name sent to the remote system when starting the
conversation. This parameter is valid only when RDB is specified.
*CURRENT: The user profile under which the current job is running is used.
user-name: Specify the user name being used for the application server job.
PASSWORD
Specifies the password to be used on the remote system. This parameter is
valid only if RDB is specified.
password: Specify the password of the user name specified on the USER
parameter.
RDBCNNMTH
Specifies the semantics used for CONNECT statements. Refer to the SQL
Reference book for more information.
collection-name: Specify the name of the collection identifier. This value is used
instead of the naming convention specified on the OPTION parameter.
| DYNDFTCOL
| Specifies whether the default collection name specified for the DFTRDBCOL
| parameter is also used for dynamic statements.
| *NO: Do not use the value specified on the DFTRDBCOL parameter for
| unqualified names of tables, views, indexes, and SQL packages for dynamic
| SQL statements. The naming convention specified on the OPTION parameter is
| used.
*OBJ: The name of the SQL package is the same as the object name specified
on the OBJ parameter.
package-name: Specify the name of the SQL package. If the remote system is
not an AS/400 system, no more than 8 characters can be specified.
| SQLPATH
| Specifies the path to be used to find procedures, functions, and user defined
| types in static SQL statements.
| *NAMING: The path used depends on the naming convention specified on the
| OPTION parameter.
| For *SYS naming, the path used is *LIBL, the current library list at runtime.
| For *SQL naming, the path used is "QSYS", "QSYS2", "userid", where "userid"
| is the value of the USER special register. If a collection-name is specified on
| the DFTRDBCOL parameter, the collection-name takes the place of userid.
SAAFLAG
Specifies whether the precompiler checks SQL statements for conformance to
IBM SQL syntax.
*NOFLAG: The precompiler does not check to see whether SQL statements
conform to IBM SQL syntax.
FLAGSTD
Specifies whether the precompiler checks SQL statements for conformance to
ANSI standards.
*NONE: The precompiler does not check to see whether SQL statements
conform to ANSI standards.
DBGVIEW
Specifies the debug views to be provided by the SQL precompiler.
*SOURCE: The SQL precompiler provides the source views for the root and, if
necessary, SQL INCLUDE statements. A view is provided that contains the
statements generated by the precompiler.
USRPRF
Specifies the user profile that is used when the compiled program object is run,
including the authority that the program object has for each object in static SQL
statements. The profile of either the program owner or the program user is used
to control which objects can be used by the program object.
*NAMING: The user profile is determined by the naming convention. If the
naming convention is *SYS, USRPRF(*USER) is used. If the naming convention
is *SQL, USRPRF(*OWNER) is used.
*USER: The profile of the user running the program object is used.
*OWNER: The user profiles of both the program owner and the program user
are used when the program is run.
DYNUSRPRF
Specifies the user profile to be used for dynamic SQL statements.
*USER: For local programs, dynamic SQL statements run under the profile of
the program’s user. For distributed programs, dynamic SQL statements run
under the profile of the SQL package’s user.
*OWNER: For local programs, dynamic SQL statements run under the profile of
the program’s owner. For distributed programs, dynamic SQL statements run
under the profile of the SQL package’s owner.
SRTSEQ
Specifies the sort sequence table to be used for string comparisons in SQL
statements.
*JOB: The SRTSEQ value for the job is retrieved during the precompile.
*JOBRUN: The SRTSEQ value for the job is retrieved when the program is run.
For distributed applications, SRTSEQ(*JOBRUN) is valid only when
LANGID(*JOBRUN) is also specified.
*LANGIDUNQ: The unique-weight sort table for the language specified on the
LANGID parameter is used.
The name of the table can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
*LANGIDSHR: The sort sequence table uses the same weight for multiple
characters, and is the shared-weight sort sequence table associated with the
language specified on the LANGID parameter.
*HEX: A sort sequence is not used. The hexadecimal values of the characters
are used to determine the sort sequence.
LANGID
Specifies the language identifier to be used when SRTSEQ(*LANGIDUNQ) or
SRTSEQ(*LANGIDSHR) is specified.
*JOB: The LANGID value for the job is retrieved during the precompile.
*JOBRUN: The LANGID value for the job is retrieved when the program is run.
For distributed applications, LANGID(*JOBRUN) is valid only when
SRTSEQ(*JOBRUN) is also specified.
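For example, to defer both the sort sequence and the language identifier until the program runs, which a distributed application requires in combination, a command might specify (the object name is hypothetical):
CRTSQLCBLI PAYROLL OBJTYPE(*PGM) SRTSEQ(*JOBRUN) LANGID(*JOBRUN)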
PRTFILE
Specifies the qualified name of the printer device file to which the precompiler
printout is directed.
The name of the printer file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
printer-file-name: Specify the name of the printer device file to which the
precompiler printout is directed.
| TOSRCFILE
| Specifies the qualified name of the source file that is to contain the output
| source member that has been processed by the SQL precompiler. If the
| specified source file is not found, it will be created. The output member will
| have the same name as the name that is specified for the SRCMBR parameter.
| source-file-name: Specify the name of the source file to contain the output
| source member.
TEXT
Specifies the text that briefly describes the program and its function. More
information on this parameter is in Appendix A, ″Expanded Parameter
Descriptions″ in the CL Reference (Abridged) book.
*SRCMBRTXT: The text is taken from the source file member being used to
create the COBOL program. Text can be added or changed for a database
source member by using the Start Source Entry Utility (STRSEU) command, or
by using either the Add Physical File Member (ADDPFM) or Change Physical
File Member (CHGPFM) command. If the source file is an inline file or a device
file, the text is blank.
Example
CRTSQLCBLI PAYROLL OBJTYPE(*MODULE) TEXT('Payroll Program')
This command runs the SQL precompiler which precompiles the source and stores
the changed source in member PAYROLL in file QSQLTEMP in library QTEMP. The
ILE COBOL compiler is called to create module PAYROLL in the current library by
using the source member created by the SQL precompiler.
CRTSQLCI (Create Structured Query Language ILE C Object) Command

(The default value is shown first for each parameter.)

CRTSQLCI OBJ([*CURLIB/ | library-name/] object-name)
    SRCFILE([*LIBL/ | *CURLIB/ | library-name/] {QCSRC | source-file-name})
    SRCMBR({*OBJ | source-file-member-name}) (1)
    OPTION(OPTION Details)
    TGTRLS({*CURRENT | *PRV | VxRxMx})
    OBJTYPE({*MODULE | *PGM | *SRVPGM})
    INCFILE([*LIBL/ | *CURLIB/ | library-name/] {*SRCFILE | source-file-name})
    ALWCPYDTA({*OPTIMIZE | *YES | *NO})
    ALWBLK({*ALLREAD | *NONE | *READ})
    DLYPRP({*NO | *YES})
    GENLVL({10 | severity-level})
    MARGINS({*SRCFILE | left-right})
    DATFMT({*JOB | *USA | *ISO | *EUR | *JIS | *MDY | *DMY | *YMD | *JUL})
    DATSEP({*JOB | '/' | '.' | ',' | '-' | ' ' | *BLANK})
    TIMFMT({*HMS | *USA | *ISO | *EUR | *JIS})
    TIMSEP({*JOB | ':' | '.' | ',' | ' ' | *BLANK})
    REPLACE({*YES | *NO})
    RDB({*LOCAL | *NONE | relational-database-name})
    USER({*CURRENT | user-name})
    DFTRDBCOL({*NONE | collection-name})
    DYNDFTCOL({*NO | *YES})
    SQLPKG({*OBJ | package-name})
    SQLPATH({*NAMING | *LIBL | collection-name})
    SAAFLAG({*NOFLAG | *FLAG})
    FLAGSTD({*NONE | *ANS})
    DBGVIEW({*NONE | *SOURCE})
    USRPRF({*NAMING | *OWNER | *USER})
    DYNUSRPRF({*USER | *OWNER})
    SRTSEQ({*JOB | *JOBRUN | *LANGIDUNQ | *LANGIDSHR | *HEX |
            [*LIBL/ | *CURLIB/ | library-name/] table-name})
    LANGID({*JOB | *JOBRUN | language-identifier})
    OUTPUT({*NONE | *PRINT})
    TOSRCFILE([QTEMP/ | *LIBL/ | *CURLIB/ | library-name/] {QSQLTEMP | source-file-name})
    TEXT({*SRCMBRTXT | *BLANK | 'description'})

OPTION Details:
Notes:
1. All parameters preceding this point can be specified in positional form.
Purpose
The Create Structured Query Language ILE C Object (CRTSQLCI) command calls
the Structured Query Language (SQL) precompiler that precompiles C source
containing SQL statements, produces a temporary source member, and then
optionally calls the ILE C compiler to create a module, create a program, or create
a service program.
Parameters
OBJ
Specifies the qualified name of the object being created.
The name of the object can be qualified by one of the following library values:
*CURLIB: The object is created in the current library for the job. If no library
is specified as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library where the object is created.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
QCSRC: If the source file name is not specified, the IBM-supplied source file
QCSRC contains the C source.
source-file-name: Specify the name of the source file that contains the C
source.
SRCMBR
Specifies the name of the source file member that contains the C source. This
parameter is only specified if the source file name in the SRCFILE parameter is
a database file. If this parameter is not specified, the OBJ name specified on
the OBJ parameter is used.
*OBJ: Specifies that the C source is in the member of the source file that has
the same name as that specified on the OBJ parameter.
OPTION
Specifies the options to use when the source member is precompiled.
*GEN: The precompiler creates the object that is specified by the OBJTYPE
parameter.
*NOGEN: The precompiler does not call the C compiler, and a module,
program, service program, or SQL package is not created.
*PERIOD: The value used as the decimal point for numeric constants in SQL
statements is a period.
*SYSVAL: The value used as the decimal point for numeric constants in SQL
statements is the QDECFMT system value.
Note: If QDECFMT specifies that the value used as the decimal point is a
comma, any numeric constants in lists (such as in the SELECT clause or
the VALUES clause) must be separated by a comma followed by a
blank. For example, VALUES(1,1, 2,23, 4,1) is equivalent to
VALUES(1.1,2.23,4.1) in which the decimal point is a period.
*COMMA: The value used as the decimal point for numeric constants in SQL
statements is a comma.
Note: Any numeric constants in lists (such as in the SELECT clause or the
VALUES clause) must be separated by a comma followed by a blank.
For example, VALUES(1,1, 2,23, 4,1) is equivalent to
VALUES(1.1,2.23,4.1) where the decimal point is a period.
*SECLVL: Second-level text with replacement data is added for all messages
on the listing.
*CNULRQD: Output character and graphic host variables always contain the
NUL-terminator. If there is not enough space for the NUL-terminator, the data is
truncated and the NUL-terminator is added. Input character and graphic host
variables require a NUL-terminator.
*NOEVENTF: The compiler will not produce an event file for use by
CoOperative Development Environment/400 (CODE/400).
*OPTLOB: The first FETCH for a cursor determines how the cursor will be
used for LOBs (Large Objects) on all subsequent FETCHes. This option
remains in effect until the cursor is closed.
If the first FETCH uses a LOB locator to access a LOB column, no subsequent
FETCH for that cursor can fetch that LOB column into a LOB host variable.
If the first FETCH places the LOB column into a LOB host variable, no
subsequent FETCH for that cursor can use a LOB locator for that column.
TGTRLS
Specifies the release of the operating system on which the user intends to use
the object being created.
In the examples given for the *CURRENT and *PRV values, and when
specifying the release-level value, the format VxRxMx is used to specify the
release, where Vx is the version, Rx is the release, and Mx is the modification
level. For example, V2R3M0 is version 2, release 3, modification level 0.
release-level: Specify the release in the format VxRxMx. The object can be
used on a system with the specified release or with any subsequent release of
the operating system installed.
Valid values depend on the current version, release, and modification level, and
they change with each new release. If you specify a release-level which is
earlier than the earliest release level supported by this command, an error
message is sent indicating the earliest supported release.
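For example, the following command (the release level shown is only illustrative) creates a module that can be used on systems running the specified release or any later release:
CRTSQLCI OBJ(PAYROLL) OBJTYPE(*MODULE) TGTRLS(V4R2M0)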
OBJTYPE
Specifies the type of object being created.
*MODULE: The SQL precompiler issues the CRTCMOD command to create the
module.
*SRVPGM: The SQL precompiler issues the CRTCMOD and CRTSRVPGM
commands to create the service program.
The user must create a source member in QSRVSRC that has the same name
as the name specified on the OBJ parameter. The source member must contain
the export information for the module. More information on the export file is in
the Integrated Language Environment C/400 Programmer’s Guide.
Notes:
1. When OBJTYPE(*PGM) or OBJTYPE(*SRVPGM) is specified and the RDB
parameter is also specified, the CRTSQLPKG command is issued by the
SQL precompiler after the program has been created. When
OBJTYPE(*MODULE) is specified, an SQL package is not created and the
user must issue the CRTSQLPKG command after the CRTPGM or
CRTSRVPGM command has created the program.
2. If *NOGEN is specified, only the SQL temporary source member is
generated and a module, program, service program, or SQL package is not
created.
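For example, the following command (the library name MYLIB and the relational database name SYSTEMA are hypothetical) creates a program object, and because RDB is also specified, the precompiler then issues the CRTSQLPKG command to create the SQL package on the remote system:
CRTSQLCI OBJ(MYLIB/PAYROLL) OBJTYPE(*PGM) RDB(SYSTEMA)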
INCFILE
Specifies the qualified name of the source file that contains members included
in the program with any SQL INCLUDE statement.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
source-file-name: Specify the name of the source file that contains the source
file members specified on any SQL INCLUDE statement. The record length of
the source file specified here must be no less than the record length of the
source file specified on the SRCFILE parameter.
COMMIT
Specifies whether SQL statements in the compiled unit are run under
commitment control. Files referred to in the host language source are not
affected by this option. Only SQL tables, SQL views, and SQL packages
referred to in SQL statements are affected.
*CS: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows updated, deleted, and inserted are locked until the end of the unit of
work (transaction). A row that is selected, but not updated, is locked until the
next row is selected. Uncommitted changes in other jobs cannot be seen.
*RR: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows selected, updated, deleted, and inserted are locked until the end of the
unit of work (transaction). Uncommitted changes in other jobs cannot be seen.
All tables referred to in SELECT, UPDATE, DELETE, and INSERT statements
are locked exclusively until the end of the unit of work (transaction).
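For example, the following command (the object name is hypothetical) precompiles the source so that its SQL statements run under the *CS (cursor stability) isolation level:
CRTSQLCI OBJ(PAYROLL) OBJTYPE(*MODULE) COMMIT(*CS)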
CLOSQLCSR
Specifies when SQL cursors are implicitly closed, SQL prepared statements are
implicitly discarded, and LOCK TABLE locks are released. SQL cursors are
explicitly closed when you issue the CLOSE, COMMIT, or ROLLBACK (without
HOLD) SQL statements.
*ENDACTGRP: SQL cursors are closed, SQL prepared statements are implicitly
discarded, and LOCK TABLE locks are released when the activation group
ends.
*ENDMOD: SQL cursors are closed and SQL prepared statements are implicitly
discarded when the module is exited. LOCK TABLE locks are released when
the first SQL program on the call stack ends.
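For example, the following command (the object name is hypothetical) keeps SQL cursors open and prepared statements available across module exits until the activation group ends:
CRTSQLCI OBJ(PAYROLL) OBJTYPE(*MODULE) CLOSQLCSR(*ENDACTGRP)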
| ALWCPYDTA
| Specifies whether a copy of the data can be used in a SELECT statement.
| *OPTIMIZE: The system determines whether to use the data retrieved directly
| from the database or to use a copy of the data. The decision is based on which
| method provides the best performance. If COMMIT is *CHG or *CS and
| ALWBLK is not *ALLREAD, or if COMMIT is *ALL or *RR, then a copy of the
| data is used only when it is necessary to run a query.
| *NO: A copy of the data is not allowed. If a temporary copy of the data is
| required to perform the query, an error message is returned.
| ALWBLK
| Specifies whether the database manager can use record blocking, and the
| extent to which blocking can be used for read-only cursors.
| Specifying *ALLREAD:
| v Allows record blocking under commitment control level *CHG in addition to
| the blocking allowed for *READ.
| v Can improve the performance of almost all read-only cursors in programs,
| but limits queries in the following ways:
| – The Rollback (ROLLBACK) command, a ROLLBACK statement in host
| languages, or the ROLLBACK HOLD SQL statement does not reposition a
| read-only cursor when *ALLREAD is specified.
| – Dynamic running of a positioned UPDATE or DELETE statement (for
| example, using EXECUTE IMMEDIATE), cannot be used to update a row
| in a cursor unless the DECLARE statement for the cursor includes the
| FOR UPDATE clause.
| *NONE: Rows are not blocked for retrieval of data for cursors.
| Specifying *NONE:
| v Guarantees that the data retrieved is current.
| v May reduce the amount of time required to retrieve the first row of data for a
| query.
| v Stops the database manager from retrieving a block of data rows that is not
| used by the program when only the first few rows of a query are retrieved
| before the query is closed.
| v Can degrade the overall performance of a query that retrieves a large
| number of rows.
| *READ: Records are blocked for read-only retrieval of data for cursors when:
| v *NONE is specified on the COMMIT parameter, which indicates that
| commitment control is not used.
| v The cursor is declared with a FOR READ ONLY clause or there are no
| dynamic statements that could run a positioned UPDATE or DELETE
| statement for the cursor.
| Specifying *READ can improve the overall performance of queries that meet the
| above conditions and retrieve a large number of records.
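For example, the following command (the object name is hypothetical) allows record blocking for read-only cursors under commitment control level *CHG, the combination described above:
CRTSQLCI OBJ(PAYROLL) OBJTYPE(*MODULE) COMMIT(*CHG) ALWBLK(*ALLREAD)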
DLYPRP
Specifies whether the dynamic statement validation for a PREPARE statement
is delayed until an OPEN, EXECUTE, or DESCRIBE statement is run. Delaying
validation improves performance by eliminating redundant validation.
Note: If you specify *YES, performance is not improved if the INTO clause is
used on the PREPARE statement or if a DESCRIBE statement uses the
dynamic statement before an OPEN is issued for the statement.
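For example, the following command (the object name is hypothetical) delays PREPARE validation until the prepared statement is first used by an OPEN, EXECUTE, or DESCRIBE:
CRTSQLCI OBJ(PAYROLL) OBJTYPE(*MODULE) DLYPRP(*YES)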
GENLVL
Specifies the severity level at which the create operation fails. If errors occur
that have a severity level greater than this value, the operation ends.
MARGINS
Specifies the part of the precompiler input record that contains source text.
*SRCFILE: The precompiler uses the file member margin values that are
specified by the user on the SRCMBR parameter. The margin values default to
1 and 80.
left: Specify the beginning position for the statements. Valid values range from 1
through 80.
right: Specify the ending position for the statements. Valid values range from 1
through 80.
DATFMT
Specifies the format used when accessing date result columns. All output date
fields are returned in the specified format. For input date strings, the specified
value is used to determine whether the date is specified in a valid format.
Note: An input date string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
*JOB: The format specified for the job is used. Use the Display Job (DSPJOB)
command to determine the current date format for the job.
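For example, the following command (the object name is hypothetical) causes date result columns to be returned in ISO format (yyyy-mm-dd):
CRTSQLCI OBJ(PAYROLL) OBJTYPE(*MODULE) DATFMT(*ISO)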
DATSEP
Specifies the separator used when accessing date result columns.
Note: This parameter applies only when *JOB, *MDY, *DMY, *YMD, or *JUL is
specified on the DATFMT parameter.
*JOB: The date separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
’ ’: A blank ( ) is used.
TIMFMT
Specifies the format used when accessing time result columns.
Note: An input time string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
TIMSEP
Specifies the separator used when accessing time result columns.
Note: This parameter applies only when *HMS is specified on the TIMFMT
parameter.
’ ’: A blank ( ) is used.
REPLACE
Specifies whether a new object replaces an existing object of the same name
and type in the specified library.
*NO: A new SQL module, program, service program, or package is not created
if an object of the same name and type already exists in the specified library.
RDB
Specifies the name of the relational database where the SQL package object is
created.
*NONE: An SQL package object is not created. The program object is not a
distributed program and the Create Structured Query Language Package
(CRTSQLPKG) command cannot be used.
USER
Specifies the user name sent to the remote system when starting the
conversation. This parameter is valid only when RDB is specified.
*CURRENT: The user profile under which the current job is running is used.
user-name: Specify the user name being used for the application server job.
PASSWORD
Specifies the password to be used on the remote system. This parameter is
valid only if RDB is specified.
password: Specify the password of the user name specified on the USER
parameter.
RDBCNNMTH
Specifies the semantics used for CONNECT statements. Refer to the SQL
Reference book, SC41-3612, for more information.
*RUW: CONNECT (Type 1) semantics are used to support remote unit of work.
Consecutive CONNECT statements result in the previous connection being
disconnected before a new connection is established.
DFTRDBCOL
Specifies the collection name used for the unqualified names of tables, views,
indexes, and SQL packages. This parameter applies only to static SQL
statements.
collection-name: Specify the name of the collection identifier. This value is used
instead of the naming convention specified on the OPTION parameter.
| DYNDFTCOL
| Specifies whether the default collection name specified for the DFTRDBCOL
| parameter is also used for dynamic statements.
| *NO: Do not use the value specified on the DFTRDBCOL parameter for
| unqualified names of tables, views, indexes, and SQL packages for dynamic
| SQL statements. The naming convention specified on the OPTION parameter is
| used.
SQLPKG
Specifies the name of the SQL package created on the relational database
specified on the RDB parameter.
*OBJ: The name of the SQL package is the same as the object name specified
on the OBJ parameter.
package-name: Specify the name of the SQL package. If the remote system is
not an AS/400 system, no more than 8 characters can be specified.
| *NAMING: The path used depends on the naming convention specified on the
| OPTION parameter.
| For *SYS naming, the path used is *LIBL, the current library list at runtime.
| For *SQL naming, the path used is ″QSYS″, ″QSYS2″, ″userid″, where ″userid″
| is the value of the USER special register. If a collection-name is specified on
| the DFTRDBCOL parameter, the collection-name takes the place of userid.
SAAFLAG
Specifies whether the precompiler checks SQL statements for conformance to
IBM SQL syntax.
*NOFLAG: The precompiler does not check to see whether SQL statements
conform to IBM SQL syntax.
FLAGSTD
Specifies whether the precompiler checks SQL statements for conformance to
ANSI standards.
*NONE: The precompiler does not check to see whether SQL statements
conform to ANSI standards.
DBGVIEW
Specifies the debug views to be provided by the SQL precompiler.
*SOURCE: The SQL precompiler provides the source views for the root and, if
necessary, SQL INCLUDE statements. A view is provided that contains the
statements generated by the precompiler.
USRPRF
Specifies the user profile that is used when the compiled program object is run,
including the authority that the program object has for each object in static SQL
statements. The profile of either the program owner or the program user is used
to control which objects can be used by the program object.
*NAMING: The user profile is determined by the naming convention. If the
naming convention is *SYS, USRPRF(*USER) is used. If the naming convention
is *SQL, USRPRF(*OWNER) is used.
*USER: The profile of the user running the program object is used.
*OWNER: The user profiles of both the program owner and the program user
are used when the program is run.
DYNUSRPRF
Specifies the user profile to be used for dynamic SQL statements.
*USER: Local dynamic SQL statements are run under the profile of the
program’s user. Distributed dynamic SQL statements are run under the profile
of the SQL package’s user.
*OWNER: Local dynamic SQL statements are run under the profile of the
program’s owner. Distributed dynamic SQL statements are run under the profile
of the SQL package’s owner.
SRTSEQ
Specifies the sort sequence table to be used for string comparisons in SQL
statements.
*JOB: The SRTSEQ value for the job is retrieved during the precompile.
*JOBRUN: The SRTSEQ value for the job is retrieved when the program is run.
For distributed applications, SRTSEQ(*JOBRUN) is valid only when
LANGID(*JOBRUN) is also specified.
*HEX: A sort sequence table is not used. The hexadecimal values of the
characters are used to determine the sort sequence.
*LANGIDSHR: The sort sequence table uses the same weight for multiple
characters, and is the shared-weight sort sequence table associated with the
language specified on the LANGID parameter.
*LANGIDUNQ: The unique-weight sort table for the language specified on the
LANGID parameter is used.
The name of the table can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
LANGID
Specifies the language identifier to be used when SRTSEQ(*LANGIDUNQ) or
SRTSEQ(*LANGIDSHR) is specified.
*JOB: The LANGID value for the job is retrieved during the precompile.
*JOBRUN: The LANGID value for the job is retrieved when the program is run.
For distributed applications, LANGID(*JOBRUN) is valid only when
SRTSEQ(*JOBRUN) is also specified.
PRTFILE
Specifies the qualified name of the printer device file to which the precompiler
printout is directed.
The name of the printer file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
printer-file-name: Specify the name of the printer device file to which the
precompiler printout is directed.
| TOSRCFILE
| Specifies the qualified name of the source file that is to contain the output
| source member that the SQL precompiler has processed. If the precompiler
| cannot find the specified source file, it creates the file. The output member will
| have the same name as the name that is specified for the SRCMBR parameter.
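For example, the following command (the library and file names are hypothetical) saves the precompiler output source in MYLIB/MYSQLSRC instead of the default QTEMP/QSQLTEMP:
CRTSQLCI OBJ(PAYROLL) OBJTYPE(*MODULE) TOSRCFILE(MYLIB/MYSQLSRC)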
TEXT
Specifies the text that briefly describes the program and its function.
*SRCMBRTXT: The text is taken from the source file member being used to
create the C program. Text can be added or changed for a database source
member by using the Start Source Entry Utility (STRSEU) command, or by
using either the Add Physical File Member (ADDPFM) command or the Change
Physical File Member (CHGPFM) command. If the source file is an inline file or
a device file, the text is blank.
Example
CRTSQLCI PAYROLL OBJTYPE(*MODULE)
TEXT('Payroll Program')
This command runs the SQL precompiler which precompiles the source and stores
the changed source in member PAYROLL in file QSQLTEMP in library QTEMP. The
ILE C for AS/400 compiler is called to create module PAYROLL in the current library
| by using the source member created by the SQL precompiler.
|
| CRTSQLCPPI (Create Structured Query Language C++ Object)
| Command
| Job: B,I Pgm: B,I REXX: B,I Exec
(The default value is shown first for each parameter.)

CRTSQLCPPI OBJ([*CURLIB/ | library-name/] object-name)
    SRCFILE([*LIBL/ | *CURLIB/ | library-name/] {QCSRC | source-file-name})
    SRCMBR({*OBJ | source-file-member-name}) (1)
    INCFILE([*LIBL/ | *CURLIB/ | library-name/] {*SRCFILE | source-file-name})
    COMMIT({*UR | *CHG | *ALL | *RS | *CS | *NONE | *NC | *RR})
    CLOSQLCSR({*ENDACTGRP | *ENDMOD})
    ALWCPYDTA({*OPTIMIZE | *YES | *NO})
    ALWBLK({*ALLREAD | *NONE | *READ})
    DLYPRP({*NO | *YES})
    GENLVL({10 | severity-level})
    MARGINS({*SRCFILE | left-right})
    DATFMT({*JOB | *USA | *ISO | *EUR | *JIS | *MDY | *DMY | *YMD | *JUL})
    DATSEP({*JOB | '/' | '.' | ',' | '-' | ' ' | *BLANK})
    TIMFMT({*HMS | *USA | *ISO | *EUR | *JIS})
    REPLACE({*YES | *NO})
    RDB({*LOCAL | *NONE | relational-database-name})
    USER({*CURRENT | user-name})
    PASSWORD({*NONE | password})
    RDBCNNMTH({*DUW | *RUW})
    DFTRDBCOL({*NONE | collection-name})
    DYNDFTCOL({*NO | *YES})
    SQLPKG([*OBJLIB/ | library-name/] {*OBJ | package-name})
    SQLPATH({*NAMING | *LIBL | collection-name})
    SAAFLAG({*NOFLAG | *FLAG})
    FLAGSTD({*NONE | *ANS})
    DBGVIEW({*NONE | *SOURCE})
    USRPRF({*NAMING | *OWNER | *USER})
    SRTSEQ({*JOB | *JOBRUN | *LANGIDUNQ | *LANGIDSHR | *HEX |
            [*LIBL/ | *CURLIB/ | library-name/] table-name})
    LANGID({*JOB | *JOBRUN | language-identifier})
    OUTPUT({*NONE | *PRINT})
    PRTFILE([*LIBL/ | *CURLIB/ | library-name/] {QSYSPRT | printer-file-name})
    TOSRCFILE([QTEMP/ | *LIBL/ | *CURLIB/ | library-name/] {QSQLTEMP | source-file-name})
    TEXT({*SRCMBRTXT | *BLANK | 'description'})

Notes:
1. All parameters preceding this point can be specified in positional form.

OPTION Details:
| Purpose
| The Create Structured Query Language C++ Object (CRTSQLCPPI) command calls
| the Structured Query Language (SQL) precompiler. The SQL precompiler
| precompiles C++ source containing SQL statements, produces a temporary source
| member, and then optionally calls the C++ compiler to create a module.
| To precompile for the VisualAge C++ for OS/400 compiler, use the CVTSQLCPP
| command.
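For example, the following command (the library and member names shown are hypothetical) precompiles a C++ source member and creates a module:
CRTSQLCPPI OBJ(MYLIB/PAYROLL) SRCFILE(MYLIB/QCSRC) SRCMBR(PAYROLL)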
| Parameters
| OBJ
| Specifies the qualified name of the object that the precompiler creates.
| One of the following library values can qualify the name of the object:
*CURLIB: The object is created in the current library for the job. If you do not
specify a library as the current library for the job, the precompiler uses the
QGPL library.
| library-name: Specify the name of the library where the object is created.
| object-name: Specify the name of the object that the precompiler creates.
| SRCFILE
| Specifies the qualified name of the source file that contains the C++ source with
| SQL statements.
| One of the following library values can qualify the name of the source file:
| *LIBL: The precompiler searches all libraries in the job’s library list until it
| finds the first match.
| *CURLIB: The precompiler searches the current library for the job. If you do
| not specify a library as the current library for the job, it uses the QGPL
| library.
| library-name: Specify the name of the library that the precompiler searches.
| QCSRC: If you do not specify the source file name, the IBM-supplied source file
| QCSRC contains the C++ source.
| source-file-name: Specify the name of the source file that contains the C++
| source.
| SRCMBR
| Specifies the name of the source file member that contains the C++ source.
| Specify this parameter only if the source file name in the SRCFILE parameter is
| a database file. If you do not specify this parameter, the precompiler uses the
| OBJ name that is specified on the OBJ parameter.
| *OBJ: Specifies that the C++ source is in the member of the source file that
| has the same name as the file specified on the OBJ parameter.
OPTION
Specifies the options to use when the source member is precompiled.
*NOGEN: The precompiler does not call the C++ compiler, and does not create
a module.
| *JOB: The value used as the decimal point for numeric constants in SQL is the
| representation of decimal point that is specified for the job at precompile time.
| Note: If the job specifies that the value used as the decimal point is a comma,
| any numeric constants in lists (such as in the SELECT clause or the
| VALUES clause) must be separated by a comma followed by a blank.
| For example, VALUES(1,1, 2,23, 4,1) is equivalent to
| VALUES(1.1,2.23,4.1) in which the decimal point is a period.
*PERIOD: The value used as the decimal point for numeric constants in SQL
statements is a period.
| *COMMA: The value used as the decimal point for numeric constants in SQL
| statements is a comma.
| Note: Any numeric constants in lists (such as in the SELECT clause or the
| VALUES clause) must be separated by a comma followed by a blank.
| For example, VALUES(1,1, 2,23, 4,1) is equivalent to
| VALUES(1.1,2.23,4.1) where the decimal point is a period.
| *SECLVL: Second-level text with replacement data is added for all messages
| on the listing.
| *CNULRQD: Output character and graphic host variables always contain the
| NUL-terminator. If there is not enough space for the NUL-terminator, the data is
| truncated, and the NUL-terminator is added. Input character and graphic host
| variables require a NUL-terminator.
| *NOEVENTF: The compiler will not produce an event file for use by
| CoOperative Development Environment/400 (CODE/400).
*OPTLOB: The first FETCH for a cursor determines how the cursor will be
used for LOBs (Large Objects) on all subsequent FETCHes. This option
remains in effect until the cursor is closed.
| If the first FETCH uses a LOB locator to access a LOB column, no subsequent
| FETCH for that cursor can fetch that LOB column into a LOB host variable.
| If the first FETCH places the LOB column into a LOB host variable, no
| subsequent FETCH for that cursor can use a LOB locator for that column.
TGTRLS
Specifies the release of the operating system on which the user intends to use
the object being created.
The examples given for the *CURRENT value, as well as the release-level
value, use the format VxRxMx to specify the release. In this format, Vx is the
version, Rx is the release, and Mx is the modification level. For example,
V2R3M0 is version 2, release 3, modification level 0.
| release-level: Specify the release in the format VxRxMx. The object can be
| used on a system with the specified release or with any subsequent release of
| the operating system installed.
| Valid values depend on the current version, release, and modification level, and
| they change with each new release. If you specify a release-level which is
| earlier than the earliest release level that is supported by this command, an
| error message is sent indicating the earliest supported release.
| INCFILE
| Specifies the qualified name of the source file that contains members that are
| included in the program with any SQL INCLUDE statement.
| One of the following library values can qualify the name of the source file:
| *LIBL: All libraries in the job’s library list are searched until the first match is
| found.
| *CURLIB: The current library for the job is searched. If no library is specified
| as the current library for the job, the QGPL library is used.
| library-name: Specify the name of the library to be searched.
| source-file-name: Specify the name of the source file that contains the source
| file members that are specified on any SQL INCLUDE statement. The record
| length of the source file that is specified here must be no less than the record
| length of the source file specified on the SRCFILE parameter.
| COMMIT
| Specifies whether SQL statements in the compiled unit are run under
| commitment control. Files referred to in the host language source are not
| affected by this option. Only SQL tables, SQL views, and SQL packages
| referred to in SQL statements are affected.
| *CS: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
| CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
| the rows updated, deleted, and inserted are locked until the end of the unit of
| work (transaction). A row that is selected, but not updated, is locked until the
| next row is selected. Uncommitted changes in other jobs cannot be seen.
| *RR: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
| CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
| the rows selected, updated, deleted, and inserted are locked until the end of the
| unit of work (transaction). Uncommitted changes in other jobs cannot be seen.
| All tables referred to in SELECT, UPDATE, DELETE, and INSERT statements
| are locked exclusively until the end of the unit of work (transaction).
| CLOSQLCSR
| Specifies when SQL cursors are implicitly closed, SQL prepared statements are
| implicitly discarded, and LOCK TABLE locks are released. SQL cursors are
| explicitly closed when you issue the CLOSE, COMMIT, or ROLLBACK (without
| HOLD) SQL statements.
| *ENDACTGRP: SQL cursors are closed, SQL prepared statements are implicitly
| discarded, and LOCK TABLE locks are released when the activation group
| ends.
| *ENDMOD: SQL cursors are closed, and SQL prepared statements are
| implicitly discarded when the module is exited. LOCK TABLE locks are released
| when the first SQL program on the call stack ends.
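| For example, a hypothetical precompile (the module name is illustrative,
| not taken from this book) might request that cursors be closed when the
| module is exited:
| CRTSQLCPPI PAYROLL OBJTYPE(*MODULE) CLOSQLCSR(*ENDMOD)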
| ALWCPYDTA
| Specifies whether a copy of the data can be used in a SELECT statement.
| *OPTIMIZE: The system determines whether to use the data retrieved directly
| from the database or to use a copy of the data. The decision is based on which
| method provides the best performance. If COMMIT is *CHG or *CS and
| ALWBLK is not *ALLREAD, or if COMMIT is *ALL or *RR, then a copy of the
| data is used only when it is necessary to run a query.
| *NO: A copy of the data is not allowed. If a temporary copy of the data is
| required to perform the query, an error message is returned.
| ALWBLK
| Specifies whether the database manager can use record blocking, and the
| extent to which blocking can be used for read-only cursors.
| Specifying *ALLREAD:
| v Allows record blocking under commitment control level *CHG in addition to
| the blocking allowed for *READ.
| v Can improve the performance of almost all read-only cursors in programs,
| but limits queries in the following ways:
| – The Rollback (ROLLBACK) command, a ROLLBACK statement in host
| languages, or the ROLLBACK HOLD SQL statement does not reposition a
| read-only cursor when *ALLREAD is specified.
| – Dynamic running of a positioned UPDATE or DELETE statement (for
| example, using EXECUTE IMMEDIATE), cannot be used to update a row
| in a cursor unless the DECLARE statement for the cursor includes the
| FOR UPDATE clause.
| *NONE: Rows are not blocked for retrieval of data for cursors.
| Specifying *NONE:
| v Guarantees that the data retrieved is current.
| v May reduce the amount of time required to retrieve the first row of data for a
| query.
| v Stops the database manager from retrieving a block of data rows that is not
| used by the program when only the first few rows of a query are retrieved
| before the query is closed.
| v Can degrade the overall performance of a query that retrieves a large
| number of rows.
| *READ: Records are blocked for read-only retrieval of data for cursors when:
| v *NONE is specified on the COMMIT parameter, which indicates that
| commitment control is not used.
| v The cursor is declared with a FOR READ ONLY clause or there are no
| dynamic statements that could run a positioned UPDATE or DELETE
| statement for the cursor.
| Specifying *READ can improve the overall performance of queries that meet the
| above conditions and retrieve a large number of records.
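| For example, a read-only report program might be precompiled without
| commitment control so that *READ blocking applies (the module name is
| illustrative):
| CRTSQLCPPI RPTPGM OBJTYPE(*MODULE) COMMIT(*NONE) ALWBLK(*READ)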
| DLYPRP
| Specifies whether the dynamic statement validation for a PREPARE statement
| is delayed until an OPEN, EXECUTE, or DESCRIBE statement is run. Delaying
| validation improves performance by eliminating redundant validation.
| Note: If you specify *YES, performance is not improved if the INTO clause is
| used on the PREPARE statement or if a DESCRIBE statement uses the
| dynamic statement before an OPEN is issued for the statement.
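| For example, to delay PREPARE validation until the statement is first used
| (the module name is illustrative):
| CRTSQLCPPI ORDENT OBJTYPE(*MODULE) DLYPRP(*YES)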
| *SRCFILE: The file member margin values specified by the user on the
| SRCMBR parameter are used. If the member is of SQLCLE, SQLC, C, or CLE
| source type, the margin values are the values that are specified on the SEU
| services display. If the member is a different source type, the margin values are
| the default values of 1 and 80.
| left: Specify the beginning position for the statements. Valid values range from 1
| through 80.
| right: Specify the ending position for the statements. Valid values range from 1
| through 80.
| DATFMT
| Specifies the format used when accessing date result columns. All output date
| fields are returned in the specified format. For input date strings, the specified
| value is used to determine whether the date is specified in a valid format.
| Note: An input date string that uses the format *USA, *ISO, *EUR, or *JIS is
| always valid.
| *JOB: The format specified for the job is used. Use the Display Job (DSPJOB)
| command to determine the current date format for the job.
| Note: This parameter applies only when *JOB, *MDY, *DMY, *YMD, or *JUL is
| specified on the DATFMT parameter.
| *JOB: The date separator specified for the job at precompile time is used. Use
| the Display Job (DSPJOB) command to determine the current value for the job.
| ’ ’: A blank ( ) is used.
| Note: An input time string that uses the format *USA, *ISO, *EUR, or *JIS is
| always valid.
| Note: This parameter applies only when *HMS is specified on the TIMFMT
| parameter.
| *JOB: The time separator specified for the job at precompile time is used. Use
| the Display Job (DSPJOB) command to determine the current value for the job.
| ’ ’: A blank ( ) is used.
| *YES: A new SQL module is created, and any existing object of the same name
| in the specified library is moved to QRPLOBJ.
| *NO: A new SQL module is not created if an object of the same name already
| exists in the specified library.
| RDB
| Specifies the name of the relational database where the SQL package object is
| created.
| *NONE: An SQL package object is not created. The program object is not a
| distributed program and the Create Structured Query Language Package
| (CRTSQLPKG) command cannot be used.
| USER
| Specifies the user name sent to the remote system when starting the
| conversation. This parameter is valid only when RDB is specified.
| *CURRENT: The user profile under which the current job is running is used.
| user-name: Specify the user name being used for the application server job.
| PASSWORD
| Specifies the password to be used on the remote system. This parameter is
| valid only if RDB is specified.
| password: Specify the password of the user name that is specified on the
| USER parameter.
| *RUW: CONNECT (Type 1) semantics are used to support remote unit of work.
| Consecutive CONNECT statements result in the previous connection being
| disconnected before a new connection is established.
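| For example, under RDBCNNMTH(*RUW) the second CONNECT below implicitly
| disconnects the first connection (the relational database names are
| illustrative):
| EXEC SQL CONNECT TO RDBA;
| EXEC SQL CONNECT TO RDBB;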
| DFTRDBCOL
| Specifies the collection name used for the unqualified names of tables, views,
| indexes, and SQL packages. This parameter applies only to static SQL
| statements.
| collection-name: Specify the name of the collection identifier. This value is used
| instead of the naming convention that is specified on the OPTION parameter.
| DYNDFTCOL
| Specifies whether the default collection name specified for the DFTRDBCOL
| parameter is also used for dynamic statements.
| *NO: Do not use the value specified on the DFTRDBCOL parameter for
| unqualified names of tables, views, indexes, and SQL packages for dynamic
| SQL statements. The naming convention specified on the OPTION parameter is
| used.
| *OBJ: The name of the SQL package is the same as the object name specified
| on the OBJ parameter.
| package-name: Specify the name of the SQL package. If the remote system is
| not an AS/400 system, no more than 8 characters can be specified.
| SQLPATH
| Specifies the path to be used to find procedures, functions, and user defined
| types in static SQL statements.
| *NAMING: The path used depends on the naming convention specified on the
| OPTION parameter.
| For *SQL naming, the path used is ″QSYS″, ″QSYS2″, ″userid″, where ″userid″
| is the value of the USER special register. If a collection-name is specified on
| the DFTRDBCOL parameter, the collection-name takes the place of userid.
| *NOFLAG: The precompiler does not check to see whether SQL statements
| conform to IBM SQL syntax.
| *NONE: The precompiler does not check to see whether SQL statements
| conform to ANSI standards.
| *SOURCE: The SQL precompiler provides the source views for the root and if
| necessary, SQL INCLUDE statements. A view is provided that contains the
| statements generated by the precompiler.
| USRPRF
| Specifies the user profile that is used when the compiled program object is run,
| including the authority that the program object has for each object in static SQL
| statements. The profile of either the program owner or the program user is used
| to control which objects can be used by the program object.
| *USER: The profile of the user running the program object is used.
| *USER: Local dynamic SQL statements are run under the profile of the
| program’s user. Distributed dynamic SQL statements are run under the profile
| of the SQL package’s user.
| *OWNER: Local dynamic SQL statements are run under the profile of the
| program’s owner. Distributed dynamic SQL statements are run under the profile
| of the SQL package’s owner.
| SRTSEQ
| Specifies the sort sequence table to be used for string comparisons in SQL
| statements.
| *JOB: The SRTSEQ value for the job is retrieved during the precompile.
| *JOBRUN: The SRTSEQ value for the job is retrieved when the program is run.
| For distributed applications, SRTSEQ(*JOBRUN) is valid only when
| LANGID(*JOBRUN) is also specified.
| *HEX: A sort sequence table is not used. The hexadecimal values of the
| characters are used to determine the sort sequence.
| *LANGIDSHR: The sort sequence table uses the same weight for multiple
| characters, and is the shared-weight sort sequence table associated with the
| language specified on the LANGID parameter.
| *LANGIDUNQ: The unique-weight sort table for the language that is specified
| on the LANGID parameter is used.
| The name of the sort sequence table can be qualified by one of the following
| library values:
| *LIBL: All libraries in the job’s library list are searched until the first match is
| found.
| *CURLIB: The current library for the job is searched. If no library is specified
| as the current library for the job, the QGPL library is used.
| library-name: Specify the name of the library to be searched.
| *JOB: The LANGID value for the job is retrieved during the precompile.
| *JOBRUN: The LANGID value for the job is retrieved when the program is run.
| For distributed applications, LANGID(*JOBRUN) is valid only when
| SRTSEQ(*JOBRUN) is also specified.
| The name of the printer file can be qualified by one of the following library
| values:
| *LIBL: All libraries in the job’s library list are searched until the first match is
| found.
| *CURLIB: The current library for the job is searched. If no library is specified
| as the current library for the job, the QGPL library is used.
| library-name: Specify the name of the library to be searched.
| printer-file-name: Specify the name of the printer device file to which the
| precompiler printout is directed.
| TOSRCFILE
| Specifies the qualified name of the source file that is to contain the output
| source member that has been processed by the SQL precompiler. If the
| specified source file is not found, it will be created. The output member will
| have the same name as the name that is specified for the SRCMBR parameter.
| source-file-name: Specify the name of the source file to contain the output
| source member.
| TEXT
| Specifies the text that briefly describes the program and its function. More
| information on this parameter is in Appendix A, ″Expanded Parameter
| Descriptions″ in the CL Reference book.
| *SRCMBRTXT: The text is taken from the source file member being used to
| create the C++ program. You can add or change text for a database source
| member by using the Start Source Entry Utility (STRSEU) command, or by
| using either the Add Physical File Member (ADDPFM) or Change Physical
| File Member (CHGPFM) command. If the source file is an inline file or a
| device file, the text is blank.
| Example
| CRTSQLCPPI PAYROLL OBJTYPE(*MODULE)
| TEXT('Payroll Program')
| This command runs the SQL precompiler which precompiles the source and stores
| the changed source in member PAYROLL in file QSQLTEMP in library QTEMP. The
| command calls the ILE C++ compiler to create module PAYROLL in the current
| library by using the source member that is created by the SQL precompiler.
*CURLIB/
ÊÊ CRTSQLPLI PGM( program-name ) Ê
library-name/
Ê Ê
*LIBL/ QPLISRC
SRCFILE( source-file-name )
*CURLIB/
library-name/
(1)
Ê Ê
*PGM
SRCMBR( source-file-member-name )
Ê Ê
OPTION( Option Details *CURRENT
TGTRLS( *PRV )
VxRxMx
Ê Ê
*LIBL/ *SRCFILE
INCFILE( source-file-name )
*CURLIB/
library-name/
Ê Ê
*OPTIMIZE *ALLREAD
ALWCPYDTA( *YES ) ALWBLK( *NONE )
*NO *READ
Ê Ê
*NO 10
DLYPRP( *YES ) GENLVL( severity-level )
Ê Ê
*SRCFILE *JOB
MARGINS( left-right ) DATFMT( *USA )
*ISO
*EUR
*JIS
*MDY
*DMY
*YMD
*JUL
Ê Ê
*JOB *HMS
DATSEP( '/' ) TIMFMT( *USA )
'.' *ISO
',' *EUR
'-' *JIS
' '
*BLANK
Ê Ê
*JOB
TIMSEP( ':' )
'.'
','
' '
*BLANK
Ê Ê
*YES *LOCAL
REPLACE( *NO ) RDB( relational-database-name )
*NONE
Ê Ê
*DUW *NONE
RDBCNNMTH( *RUW ) DFTRDBCOL( collection-name )
Ê Ê
*NO
DYNDFTCOL( *YES )
Ê Ê
*PGMLIB/ *PGM
SQLPKG( package-name )
library-name/
Ê Ê
*NAMING
SQLPATH( *LIBL )
· collection-name
Ê Ê
*NOFLAG *NONE
SAAFLAG( *FLAG ) FLAGSTD( *ANS )
Ê Ê
*LIBL/ QSYSPRT
PRTFILE( printer-file-name )
*CURLIB/
library-name/
Ê Ê
*JOB
SRTSEQ( *JOBRUN )
*LANGIDUNQ
*LANGIDSHR
*HEX
*LIBL/
table-name
*CURLIB/
library-name/
Ê Ê
*JOB *NAMING
LANGID( *JOBRUN ) USRPRF( *OWNER )
language-ID *USER
Ê Ê
QTEMP/ QSQLTEMP
TOSRCFILE( source-file-name )
*LIBL/
*CURLIB/
library-name/
Ê ÊÍ
*SRCMBRTXT
TEXT( *BLANK )
'description'
Option Details:
*NOSRC
*NOSOURCE *NOXREF *GEN *JOB *SYS
Ê
*SRC *XREF *NOGEN *PERIOD *SQL
*SOURCE *SYSVAL
*COMMA
*NOSECLVL *OPTLOB
Ê )
*SECLVL *NOOPTLOB
Notes:
1. All parameters preceding this point can be specified in positional form.
Purpose
The Create Structured Query Language PL/I (CRTSQLPLI) command calls a
Structured Query Language (SQL) precompiler, which precompiles PL/I source
containing SQL statements, produces a temporary source member, and optionally
calls the PL/I compiler to compile the program.
Parameters
PGM
Specifies the qualified name of the compiled program.
The name of the compiled PL/I program can be qualified by one of the following
library values:
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library where the compiled PL/I
program is created.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
QPLISRC: If the source file name is not specified, the IBM-supplied source file
QPLISRC contains the PL/I source.
source-file-name: Specify the name of the source file that contains the PL/I
source.
SRCMBR
Specifies the name of the source file member that contains the PL/I source.
This parameter is specified only if the source file name in the SRCFILE
parameter is a database file. If this parameter is not specified, the PGM name
specified on the PGM parameter is used.
*PGM: Specifies that the PL/I source is in the member of the source file that
has the same name as that specified on the PGM parameter.
*GEN: The compiler creates a program that can run after the program is
compiled. An SQL package object is created if a relational database name is
specified on the RDB parameter.
*JOB: The value used as the decimal point for numeric constants in SQL is the
representation of decimal point specified for the job at precompile time.
*PERIOD: The value used as the decimal point for numeric constants used in
SQL statements is a period.
*SYSVAL: The value used as the decimal point for numeric constants in SQL
statements is the QDECFMT system value.
Note: If QDECFMT specifies that the value used as the decimal point is a
comma, any numeric constants in lists (such as in the SELECT clause or
the VALUES clause) must be separated by a comma followed by a
blank. For example, VALUES(1,1, 2,23, 4,1) is equivalent to
VALUES(1.1,2.23,4.1) in which the decimal point is a period.
*COMMA: The value used as the decimal point for numeric constants in SQL
statements is a comma.
Note: Any numeric constants in lists (such as in the SELECT clause or the
VALUES clause) must be separated by a comma followed by a blank.
For example, VALUES(1,1, 2,23, 4,1) is equivalent to
VALUES(1.1,2.23,4.1) where the decimal point is a period.
*SECLVL: Second-level text with replacement data is added to the printout for
all messages on the listing.
*OPTLOB: The first FETCH for a cursor determines how the cursor will be used
for LOBs (Large Objects) on all subsequent FETCHes. This option remains in
effect until the cursor is closed.
If the first FETCH uses a LOB locator to access a LOB column, no subsequent
FETCH for that cursor can fetch that LOB column into a LOB host variable.
If the first FETCH places the LOB column into a LOB host variable, no
subsequent FETCH for that cursor can use a LOB locator for that column.
In the examples given for the *CURRENT and *PRV values, and when
specifying the release-level value, the format VxRxMx is used to specify the
release, where Vx is the version, Rx is the release, and Mx is the modification
level. For example, V2R3M0 is version 2, release 3, modification level 0.
release-level: Specify the release in the format VxRxMx. The object can be
used on a system with the specified release or with any subsequent release of
the operating system installed.
Valid values depend on the current version, release, and modification level, and
they change with each new release. If you specify a release-level which is
earlier than the earliest release level supported by this command, an error
message is sent indicating the earliest supported release.
INCFILE
Specifies the qualified name of the source file that contains members included
in the program with any SQL INCLUDE statement.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
source-file-name: Specify the name of the source file that contains the source
file members specified on any SQL INCLUDE statement. The record length of
the source file specified must be no less than the record length of the source
file specified for the SRCFILE parameter.
*CS: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows updated, deleted, and inserted are locked until the end of the unit of
work (transaction). A row that is selected, but not updated, is locked until the
next row is selected. Uncommitted changes in other jobs cannot be seen.
*RR: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows selected, updated, deleted, and inserted are locked until the end of the
unit of work (transaction). Uncommitted changes in other jobs cannot be seen.
All tables referred to in SELECT, UPDATE, DELETE, and INSERT statements
are locked exclusively until the end of the unit of work (transaction).
CLOSQLCSR
Specifies when SQL cursors are implicitly closed, SQL prepared statements are
implicitly discarded, and LOCK TABLE locks are released. SQL cursors are
explicitly closed when you issue the CLOSE, COMMIT, or ROLLBACK (without
HOLD) SQL statements.
*ENDPGM: SQL cursors are closed and SQL prepared statements are
discarded when the program ends. LOCK TABLE locks are released when the
first SQL program on the call stack ends.
*ENDSQL: SQL cursors remain open between calls and can be fetched without
running another SQL OPEN. One of the programs higher on the call stack must
have run at least one SQL statement. SQL cursors are closed, SQL prepared
statements are discarded, and LOCK TABLE locks are released when the first
SQL program on the call stack ends. If *ENDSQL is specified for a program that
is the first SQL program called (the first SQL program on the call stack), the
program is treated as if *ENDPGM was specified.
| *OPTIMIZE: The system determines whether to use the data retrieved directly
| from the database or to use a copy of the data. The decision is based on which
| method provides the best performance. If COMMIT is *CHG or *CS and
| ALWBLK is not *ALLREAD, or if COMMIT is *ALL or *RR, then a copy of the
| data is used only when it is necessary to run a query.
| *NO: A copy of the data is not allowed. If a temporary copy of the data is
| required to perform the query, an error message is returned.
| ALWBLK
| Specifies whether the database manager can use record blocking, and the
| extent to which blocking can be used for read-only cursors.
| Specifying *ALLREAD:
| v Allows record blocking under commitment control level *CHG in addition to
| the blocking allowed for *READ.
| v Can improve the performance of almost all read-only cursors in programs,
| but limits queries in the following ways:
| – The Rollback (ROLLBACK) command, a ROLLBACK statement in host
| languages, or the ROLLBACK HOLD SQL statement does not reposition a
| read-only cursor when *ALLREAD is specified.
| – Dynamic running of a positioned UPDATE or DELETE statement (for
| example, using EXECUTE IMMEDIATE), cannot be used to update a row
| in a cursor unless the DECLARE statement for the cursor includes the
| FOR UPDATE clause.
| *NONE: Rows are not blocked for retrieval of data for cursors.
| Specifying *NONE:
| v Guarantees that the data retrieved is current.
| v May reduce the amount of time required to retrieve the first row of data for a
| query.
| v Stops the database manager from retrieving a block of data rows that is not
| used by the program when only the first few rows of a query are retrieved
| before the query is closed.
| v Can degrade the overall performance of a query that retrieves a large
| number of rows.
| *READ: Records are blocked for read-only retrieval of data for cursors when:
| v *NONE is specified on the COMMIT parameter, which indicates that
| commitment control is not used.
| v The cursor is declared with a FOR READ ONLY clause or there are no
| dynamic statements that could run a positioned UPDATE or DELETE
| statement for the cursor.
| Specifying *READ can improve the overall performance of queries that meet the
| above conditions and retrieve a large number of records.
DLYPRP
Specifies whether the dynamic statement validation for a PREPARE statement
is delayed until an OPEN, EXECUTE, or DESCRIBE statement is run. Delaying
validation improves performance by eliminating redundant validation.
Note: If you specify *YES, performance is not improved if the INTO clause is
used on the PREPARE statement or if a DESCRIBE statement uses the
dynamic statement before an OPEN is issued for the statement.
GENLVL
Specifies the severity level at which the create operation fails. If errors occur
that have a severity level greater than or equal to this value, the operation
ends.
*SRCFILE: The file member margin values specified by the user on the
SRCMBR parameter are used. If the member is a SQLPLI source type, the
margin values are the values specified on the SEU services display. If the
member is a different source type, the margin values are the default values of 2
and 72.
left: Specify the beginning position for the statements. Valid values range from 1
through 80.
right: Specify the ending position for the statements. Valid values range from 1
through 80.
DATFMT
Specifies the format used when accessing date result columns. All output date
fields are returned in the specified format. For input date strings, the specified
value is used to determine whether the date is specified in a valid format.
Note: An input date string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
*JOB: The format specified for the job is used. Use the Display Job (DSPJOB)
command to determine the current date format for the job.
Note: This parameter applies only when *JOB, *MDY, *DMY, *YMD, or *JUL is
specified on the DATFMT parameter.
*JOB: The date separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
’ ’: A blank ( ) is used.
Note: An input time string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
*USA: The United States time format (hh:mm xx) is used, where xx is AM or
PM.
Note: This parameter applies only when *HMS is specified on the TIMFMT
parameter.
*JOB: The time separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
’ ’: A blank ( ) is used.
*YES: A new program or SQL package is created, and any existing program or
SQL package of the same name and type in the specified library is moved to
QRPLOBJ.
*NO: A new program or SQL package is not created if an object of the same
name and type already exists in the specified library.
*NONE: An SQL package object is not created. The program object is not a
distributed program and the Create Structured Query Language Package
(CRTSQLPKG) command cannot be used.
USER
Specifies the user name sent to the remote system when starting the
conversation. This parameter is valid only when RDB is specified.
*CURRENT: The user profile under which the current job is running is used.
user-name: Specify the user name being used for the application server job.
PASSWORD
Specifies the password to be used on the remote system. This parameter is
valid only if RDB is specified.
password: Specify the password of the user name specified on the USER
parameter.
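For example, a hypothetical distributed precompile (the system, profile, and
password values are illustrative, not taken from this book) might be:
CRTSQLPLI PAYROLL RDB(CHICAGO) USER(PAYUSER) PASSWORD(PAYPW)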
RDBCNNMTH
Specifies the semantics used for CONNECT statements. Refer to the SQL
Reference book for more information.
*RUW: CONNECT (Type 1) semantics are used to support remote unit of work.
Consecutive CONNECT statements result in the previous connection being
disconnected before a new connection is established.
DFTRDBCOL
Specifies the collection name used for the unqualified names of tables, views,
indexes, and SQL packages. This parameter applies only to static SQL
statements.
collection-name: Specify the name of the collection identifier. This value is used
instead of the naming convention specified on the OPTION parameter.
| *NO: Do not use the value specified on the DFTRDBCOL parameter for
| unqualified names of tables, views, indexes, and SQL packages for dynamic
| SQL statements. The naming convention specified on the OPTION parameter is
| used.
| *NAMING: The path used depends on the naming convention specified on the
| OPTION parameter.
| For *SYS naming, the path used is *LIBL, the current library list at runtime.
| For *SQL naming, the path used is ″QSYS″, ″QSYS2″, ″userid″, where ″userid″
| is the value of the USER special register. If a collection-name is specified on
| the DFTRDBCOL parameter, the collection-name takes the place of userid.
*NOFLAG: The precompiler does not check to see whether SQL statements
conform to IBM SQL syntax.
*NONE: The precompiler does not check to see whether SQL statements
conform to ANSI standards.
The name of the printer file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
printer-file-name: Specify the name of the printer device file to which the
precompiler printout is directed.
SRTSEQ
Specifies the sort sequence table to be used for string comparisons in SQL
statements.
*JOB: The SRTSEQ value for the job is retrieved during the precompile.
*JOBRUN: The SRTSEQ value for the job is retrieved when the program is run.
For distributed applications, SRTSEQ(*JOBRUN) is valid only when
LANGID(*JOBRUN) is also specified.
*LANGIDUNQ: The unique-weight sort table for the language specified on the
LANGID parameter is used.
*HEX: A sort sequence table is not used. The hexadecimal values of the
characters are used to determine the sort sequence.
The name of the sort sequence table can be qualified by one of the following
library values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
*JOB: The LANGID value for the job is retrieved during the precompile.
*JOBRUN: The LANGID value for the job is retrieved when the program is run.
For distributed applications, LANGID(*JOBRUN) is valid only when
SRTSEQ(*JOBRUN) is also specified.
*USER: The profile of the user running the program object is used.
*OWNER: The user profiles of both the program owner and the program user
are used when the program is run.
DYNUSRPRF
Specifies the user profile used for dynamic SQL statements.
*USER: Local dynamic SQL statements are run under the user profile of the
job. Distributed dynamic SQL statements are run under the user profile of the
application server job.
*OWNER: Local dynamic SQL statements are run under the user profile of the
program’s owner. Distributed dynamic SQL statements are run under the user
profile of the SQL package’s owner.
| TOSRCFILE
| Specifies the qualified name of the source file that is to contain the output
| source member that has been processed by the SQL precompiler. If the
| source-file-name: Specify the name of the source file to contain the output
| source member.
TEXT
Specifies the text that briefly describes the program and its function. More
information on this parameter is in Appendix A, ″Expanded Parameter
Descriptions″ in the CL Reference (Abridged) book.
*SRCMBRTXT: The text is taken from the source file member being used to
create the PL/I program. The user can add or change text for a database
source member by using the Start Source Entry Utility (STRSEU) command, or
by using either the Add Physical File Member (ADDPFM) or Change Physical
File Member (CHGPFM) command. If the source file is an inline file or a device
file, the text is blank.
Example
CRTSQLPLI PAYROLL TEXT('Payroll Program')
This command runs the SQL precompiler, which precompiles the source and stores
the changed source in member PAYROLL in file QSQLTEMP in library QTEMP. The
PL/I compiler is called to create program PAYROLL in the current library using the
source member created by the SQL precompiler.
The CRTSQLRPG syntax diagram, summarized in linear form (the first value
shown for each parameter is the default):

CRTSQLRPG PGM([*CURLIB/ | library-name/] program-name)
  SRCMBR(*PGM | source-file-member-name)
  OPTION(option-details)
  TGTRLS(*CURRENT | *PRV | VxRxMx)
  INCFILE([*LIBL/ | *CURLIB/ | library-name/] *SRCFILE | source-file-name)
  COMMIT(*UR | *CHG | *ALL | *RS | *CS | *NONE | *NC | *RR)
  CLOSQLCSR(*ENDPGM | *ENDSQL | *ENDJOB)
  ALWCPYDTA(*OPTIMIZE | *YES | *NO)
  ALWBLK(*ALLREAD | *NONE | *READ)
  DLYPRP(*NO | *YES)
  GENLVL(10 | severity-level)
  DATFMT(*JOB | *USA | *ISO | *EUR | *JIS | *MDY | *DMY | *YMD | *JUL)
  DATSEP(*JOB | '/' | '.' | ',' | '-' | ' ' | *BLANK)
  REPLACE(*YES | *NO)
  RDB(*LOCAL | relational-database-name | *NONE)
  USER(*CURRENT | user-name)
  PASSWORD(*NONE | password)
  RDBCNNMTH(*DUW | *RUW)
  DFTRDBCOL(*NONE | collection-name)
  DYNDFTCOL(*NO | *YES)
  SQLPKG([*PGMLIB/ | library-name/] *PGM | package-name)
  SQLPATH(*NAMING | *LIBL | collection-name ...)
  SAAFLAG(*NOFLAG | *FLAG)
  FLAGSTD(*NONE | *ANS)
  PRTFILE([*LIBL/ | *CURLIB/ | library-name/] QSYSPRT | printer-file-name)
  LANGID(*JOB | *JOBRUN | language-ID)
  USRPRF(*NAMING | *OWNER | *USER)
  DYNUSRPRF(*USER | *OWNER)
  TOSRCFILE([QTEMP/ | *LIBL/ | *CURLIB/ | library-name/] QSQLTEMP | source-file-name)
  TEXT(*SRCMBRTXT | *BLANK | 'description')

OPTION Details:
  *NOSRC or *NOSOURCE | *SOURCE or *SRC
  *NOXREF | *XREF
  *GEN | *NOGEN
  *JOB | *SYSVAL | *PERIOD | *COMMA
  *SYS | *SQL
Notes:
1. All parameters preceding this point can be specified in positional form.
Purpose
The Create Structured Query Language RPG (CRTSQLRPG) command calls the
Structured Query Language (SQL) precompiler, which precompiles RPG source
containing SQL statements, produces a temporary source member, and then calls
the RPG compiler to create a program.
Parameters
PGM
Specifies the qualified name of the compiled program.
The name of the compiled RPG can be qualified by one of the following library
values:
*CURLIB: The compiled RPG program is created in the current library for
the job. If no library is specified as the current library for the job, the QGPL
library is used.
library-name: Specify the name of the library where the compiled RPG
program is created.
SRCFILE
Specifies the qualified name of the source file that contains the RPG source.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
QRPGSRC: If the source file name is not specified, the IBM-supplied source file
QRPGSRC contains the RPG source.
source-file-name: Specify the name of the source file that contains the RPG
source.
SRCMBR
Specifies the name of the source file member that contains the RPG source.
This parameter is specified only if the source file name in the SRCFILE
parameter is a database file. If this parameter is not specified, the PGM name
specified on the PGM parameter is used.
*PGM: Specifies that the RPG source is in the member of the source file that
has the same name as that specified on the PGM parameter.
*GEN: The compiler creates a program that can run after the program is
compiled. An SQL package object is created if a relational database name is
specified on the RDB parameter.
*NOGEN: The precompiler does not call the RPG compiler, and a program and
SQL package are not created.
*JOB: The value used as the decimal point for numeric constants in SQL is the
representation of decimal point specified for the job at precompile time.
*SYSVAL: The value used as the decimal point for numeric constants in SQL
statements is the QDECFMT system value.
Note: If QDECFMT specifies that the value used as the decimal point is a
comma, any numeric constants in lists (such as in the SELECT clause,
VALUES clause, and so on.) must be separated by a comma followed by
a blank. For example, VALUES(1,1, 2,23, 4,1) is equivalent to
VALUES(1.1,2.23,4.1) where the decimal point is a period.
*PERIOD: The value used as the decimal point for numeric constants used in
SQL statements is a period.
*COMMA: The value used as the decimal point for numeric constants in SQL
statements is a comma.
Note: Any numeric constants in lists (such as in the SELECT clause, VALUES
clause, and so on.) must be separated by a comma followed by a blank.
For example, VALUES(1,1, 2,23, 4,1) is equivalent to
VALUES(1.1,2.23,4.1) where the decimal point is a period.
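As an illustration (the library, table, and column names here are hypothetical),
the decimal point option affects how a VALUES list must be written:

  With OPTION(*COMMA):  INSERT INTO MYLIB/PRICES VALUES(1,1, 2,23, 4,1)
  With OPTION(*PERIOD): INSERT INTO MYLIB/PRICES VALUES(1.1, 2.23, 4.1)

Both statements insert the same row of values.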
*SECLVL: Second-level text with replacement data is added for all messages
on the listing.
*NOSEQSRC: Source sequence numbers from the input source files are used
when creating the new source member in QSQLTEMP.
*LSTDBG: The SQL precompiler generates a listing view and error and debug
information required for this view. You can use *LSTDBG only if you are using
the CODE/400 product to compile your program.
TGTRLS
Specifies the release of the operating system on which the user intends to use
the object being created.
In the examples given for the *CURRENT and *PRV values, and when
specifying the release-level value, the format VxRxMx is used to specify the
release, where Vx is the version, Rx is the release, and Mx is the modification
level. For example, V2R3M0 is version 2, release 3, modification level 0.
release-level: Specify the release in the format VxRxMx. The object can be
used on a system with the specified release or with any subsequent release of
the operating system installed.
Valid values depend on the current version, release, and modification level, and
they change with each new release. If you specify a release-level which is
earlier than the earliest release level supported by this command, an error
message is sent indicating the earliest supported release.
INCFILE
Specifies the qualified name of the source file that contains members included
in the program with any SQL INCLUDE statement.
source-file-name: Specify the name of the source file that contains the source
file members specified on any SQL INCLUDE statement. The record length of
the source file specified here must be no less than the record length of the
source file specified for the SRCFILE parameter.
COMMIT
Specifies whether SQL statements in the compiled program are run under
commitment control. Files referred to in the host language source are not
affected by this option. Only SQL tables, SQL views, and SQL packages
referred to in SQL statements are affected.
Note: Files referenced in the RPG source are not affected by this option.
*CS: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows updated, deleted, and inserted are locked until the end of the unit of
work (transaction). A row that is selected, but not updated, is locked until the
next row is selected. Uncommitted changes in other jobs cannot be seen.
*RR: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows selected, updated, deleted, and inserted are locked until the end of the
unit of work (transaction). Uncommitted changes in other jobs cannot be seen.
All tables referred to in SELECT, UPDATE, DELETE, and INSERT statements
are locked exclusively until the end of the unit of work (transaction).
*ENDPGM: SQL cursors are closed and SQL prepared statements are
discarded when the program ends. LOCK TABLE locks are released when the
first SQL program on the call stack ends.
*ENDSQL: SQL cursors remain open between calls and can be fetched without
running another SQL OPEN. One of the programs higher on the call stack must
have run at least one SQL statement. SQL cursors are closed, SQL prepared
statements are discarded, and LOCK TABLE locks are released when the first
SQL program on the call stack ends. If *ENDSQL is specified for a program that
is the first SQL program called (the first SQL program on the call stack), the
program is treated as if *ENDPGM was specified.
*ENDJOB: SQL cursors remain open between calls and can be fetched without
running another SQL OPEN. The programs higher on the call stack do not need
to have run SQL statements. SQL cursors are left open, SQL prepared
statements are preserved, and LOCK TABLE locks are held when the first SQL
program on the call stack ends. SQL cursors are closed, SQL prepared
statements are discarded, and LOCK TABLE locks are released when the job
ends.
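For example, a program that is called repeatedly and should keep its cursors
open across calls might be created with a command like the following (the
program and library names are illustrative):

  CRTSQLRPG PGM(MYLIB/ORDRPT) CLOSQLCSR(*ENDJOB)

With *ENDJOB, a cursor opened on one call can be fetched on later calls
without running another SQL OPEN.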
| ALWCPYDTA
| Specifies whether a copy of the data can be used in a SELECT statement.
| *OPTIMIZE: The system determines whether to use the data retrieved directly
| from the database or to use a copy of the data. The decision is based on which
| method provides the best performance. If COMMIT is *CHG or *CS and
| ALWBLK is not *ALLREAD, or if COMMIT is *ALL or *RR, then a copy of the
| data is used only when it is necessary to run a query.
| *NO: A copy of the data is not allowed. If a temporary copy of the data is
| required to perform the query, an error message is returned.
| ALWBLK
| Specifies whether the database manager can use record blocking, and the
| extent to which blocking can be used for read-only cursors.
| Specifying *ALLREAD:
| v Allows record blocking under commitment control level *CHG in addition to
| the blocking allowed for *READ.
| v Can improve the performance of almost all read-only cursors in programs,
| but limits queries in the following ways:
| – The Rollback (ROLLBACK) command, a ROLLBACK statement in host
| languages, or the ROLLBACK HOLD SQL statement does not reposition a
| read-only cursor when *ALLREAD is specified.
| – Dynamic running of a positioned UPDATE or DELETE statement (for
| example, using EXECUTE IMMEDIATE), cannot be used to update a row
| in a cursor unless the DECLARE statement for the cursor includes the
| FOR UPDATE clause.
| *NONE: Rows are not blocked for retrieval of data for cursors.
| Specifying *NONE:
| v Guarantees that the data retrieved is current.
| v May reduce the amount of time required to retrieve the first row of data for a
| query.
| v Stops the database manager from retrieving a block of data rows that is not
| used by the program when only the first few rows of a query are retrieved
| before the query is closed.
| v Can degrade the overall performance of a query that retrieves a large
| number of rows.
| *READ: Records are blocked for read-only retrieval of data for cursors when:
| v *NONE is specified on the COMMIT parameter, which indicates that
| commitment control is not used.
| v The cursor is declared with a FOR READ ONLY clause or there are no
| dynamic statements that could run a positioned UPDATE or DELETE
| statement for the cursor.
| Specifying *READ can improve the overall performance of queries that meet the
| above conditions and retrieve a large number of records.
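For example, a read-only reporting program run under commitment control
level *CHG might allow maximum blocking (the names are illustrative):

  CRTSQLRPG PGM(MYLIB/SALESRPT) COMMIT(*CHG) ALWBLK(*ALLREAD)
            ALWCPYDTA(*OPTIMIZE)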
DLYPRP
Specifies whether the dynamic statement validation for a PREPARE statement
is delayed until an OPEN, EXECUTE, or DESCRIBE statement is run. Delaying
validation improves performance by eliminating redundant validation.
Note: If you specify *YES, performance is not improved if the INTO clause is
used on the PREPARE statement or if a DESCRIBE statement uses the
dynamic statement before an OPEN is issued for the statement.
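For example, an application that prepares many statements but runs only some
of them might be precompiled as follows (the names are illustrative):

  CRTSQLRPG PGM(MYLIB/ADHOCQRY) DLYPRP(*YES)

Statements that are prepared but never opened or executed are then never
fully validated, which avoids the redundant validation work.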
DATFMT
Note: An input date string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
*JOB: The format specified for the job is used. Use the Display Job (DSPJOB)
command to determine the current date format for the job.
DATSEP
Note: This parameter applies only when *JOB, *MDY, *DMY, *YMD, or *JUL is
specified on the DATFMT parameter.
*JOB: The date separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
TIMFMT
Note: An input time string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
*USA: The United States time format (hh:mm xx) is used, where xx is AM or
PM.
TIMSEP
Note: This parameter applies only when *HMS is specified on the TIMFMT
parameter.
*JOB: The time separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
’ ’: A blank ( ) is used.
REPLACE
*NO: A new program or SQL package is not created if an object of the same
name and type already exists in the specified library.
RDB
Specifies the name of the relational database where the SQL package object is
created.
*NONE: An SQL package object is not created. The program object is not a
distributed program and the Create Structured Query Language Package
(CRTSQLPKG) command cannot be used.
USER
Specifies the user name sent to the remote system when starting the
conversation. This parameter is valid only when RDB is specified.
*CURRENT: The user profile under which the current job is running is used.
user-name: Specify the user name being used for the application requester job.
PASSWORD
Specifies the password to be used on the remote system. This parameter is
valid only if RDB is specified.
password: Specify the password of the user name specified on the USER
parameter.
RDBCNNMTH
Specifies the semantics used for CONNECT statements. Refer to the SQL
Reference, SC41-3612 book for more information.
*RUW: CONNECT (Type 1) semantics are used to support remote unit of work.
Consecutive CONNECT statements result in the previous connection being
disconnected before a new connection is established.
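For example, to create a distributed program and its SQL package on a remote
relational database (the database, user, program, and password names are
illustrative):

  CRTSQLRPG PGM(MYLIB/INVUPD) RDB(CHICAGO) USER(APPUSER) PASSWORD(MYPWD)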
collection-name: Specify the name of the collection identifier. This value is used
instead of the naming convention specified on the OPTION parameter.
| DYNDFTCOL
| Specifies whether the default collection name specified for the DFTRDBCOL
| parameter is also used for dynamic statements.
| *NO: Do not use the value specified on the DFTRDBCOL parameter for
| unqualified names of tables, views, indexes, and SQL packages for dynamic
| SQL statements. The naming convention specified on the OPTION parameter is
| used.
| *NAMING: The path used depends on the naming convention specified on the
| OPTION parameter.
| For *SYS naming, the path used is *LIBL, the current library list at runtime.
| For *SQL naming, the path used is ″QSYS″, ″QSYS2″, ″userid″, where ″userid″
| is the value of the USER special register. If a collection-name is specified on
| the DFTRDBCOL parameter, the collection-name takes the place of userid.
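For example, to qualify unqualified names in both static and dynamic SQL
statements with a single collection (the collection and program names are
illustrative):

  CRTSQLRPG PGM(MYLIB/PAYROLL) DFTRDBCOL(PAYLIB) DYNDFTCOL(*YES)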
SAAFLAG
*NOFLAG: The precompiler does not check to see whether SQL statements
conform to IBM SQL syntax.
FLAGSTD
*NONE: The precompiler does not check to see whether SQL statements
conform to ANSI standards.
PRTFILE
The name of the printer file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
printer-file-name: Specify the name of the printer device file to which the
compiler printout is directed.
SRTSEQ
Specifies the sort sequence table to be used for string comparisons in SQL
statements.
*JOB: The SRTSEQ value for the job is retrieved during the precompile.
*JOBRUN: The SRTSEQ value for the job is retrieved when the program is run.
For distributed applications, SRTSEQ(*JOBRUN) is valid only when
LANGID(*JOBRUN) is also specified.
*LANGIDSHR: The shared-weight sort table for the language specified on the
LANGID parameter is used.
*HEX: A sort sequence table is not used. The hexadecimal values of the
characters are used to determine the sort sequence.
The name of the sort sequence table can be qualified by one of the following
library values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
LANGID
*JOB: The LANGID value for the job is retrieved during the precompile.
*JOBRUN: The LANGID value for the job is retrieved when the program is run.
For distributed applications, LANGID(*JOBRUN) is valid only when
SRTSEQ(*JOBRUN) is also specified.
USRPRF
*USER: The profile of the user running the program object is used.
*OWNER: The user profiles of both the program owner and the program user
are used when the program is run.
DYNUSRPRF
Specifies the user profile used for dynamic SQL statements.
*USER: Local dynamic SQL statements are run under the user profile of the
job. Distributed dynamic SQL statements are run under the user profile of the
application server job.
*OWNER: Local dynamic SQL statements are run under the user profile of the
program’s owner. Distributed dynamic SQL statements are run under the user
profile of the SQL package’s owner.
| TOSRCFILE
| Specifies the qualified name of the source file that is to contain the output
| source member that has been processed by the SQL precompiler.
| source-file-name: Specify the name of the source file to contain the output
| source member.
TEXT
Specifies text that briefly describes the program and its function. More
information on this parameter is in Appendix A, ″Expanded Parameter
Descriptions″ in the CL Reference book.
*SRCMBRTXT: The text is taken from the source file member being used to
create the RPG program. Text for a database source member can be added or
changed by using the Start Source Entry Utility (STRSEU) command, or by
using either the Add Physical File Member (ADDPFM) command or the Change
Physical File Member (CHGPFM) command. If the source file is an inline file or
a device file, the text is blank.
Example
CRTSQLRPG PGM(JONES/ARBR5)
TEXT('Accounts Receivable Branch 5')
This command runs the SQL precompiler which precompiles the source and stores
the changed source in member ARBR5 in file QSQLTEMP in library QTEMP. The
RPG compiler is called to create program ARBR5 in library JONES by using the
source member created by the SQL precompiler.
The CRTSQLRPGI syntax diagram, summarized in linear form (the first value
shown for each parameter is the default):

CRTSQLRPGI OBJ([*CURLIB/ | library-name/] object-name)
  SRCMBR(*OBJ | source-file-member-name)
  OPTION(option-details)
  TGTRLS(*CURRENT | *PRV | VxRxMx)
  OBJTYPE(*PGM | *MODULE | *SRVPGM)
  INCFILE([*LIBL/ | *CURLIB/ | library-name/] *SRCFILE | source-file-name)
  COMMIT(*UR | *CHG | *ALL | *RS | *CS | *NONE | *NC | *RR)
  CLOSQLCSR(*ENDACTGRP | *ENDMOD)
  ALWCPYDTA(*OPTIMIZE | *YES | *NO)
  ALWBLK(*ALLREAD | *NONE | *READ)
  DLYPRP(*NO | *YES)
  GENLVL(10 | severity-level)
  TIMFMT(*HMS | *USA | *ISO | *EUR | *JIS)
  TIMSEP(*JOB | ':' | '.' | ',' | ' ' | *BLANK)
  REPLACE(*YES | *NO)
  RDB(*LOCAL | relational-database-name | *NONE)
  USER(*CURRENT | user-name)
  PASSWORD(*NONE | password)
  RDBCNNMTH(*DUW | *RUW)
  DFTRDBCOL(*NONE | collection-name)
  DYNDFTCOL(*NO | *YES)
  SQLPKG([*OBJLIB/ | library-name/] *OBJ | package-name)
  SQLPATH(*NAMING | *LIBL | collection-name ...)
  DBGVIEW(*NONE | *SOURCE)
  USRPRF(*NAMING | *OWNER | *USER)
  DYNUSRPRF(*USER | *OWNER)
  SRTSEQ(*JOB | *JOBRUN | *LANGIDUNQ | *LANGIDSHR | *HEX |
         [*LIBL/ | *CURLIB/ | library-name/] table-name)
  LANGID(*JOB | *JOBRUN | language-identifier)
  OUTPUT(*NONE | *PRINT)
  PRTFILE([*LIBL/ | *CURLIB/ | library-name/] QSYSPRT | printer-file-name)
  TOSRCFILE([QTEMP/ | *LIBL/ | *CURLIB/ | library-name/] QSQLTEMP1 | source-file-name)
  TEXT(*SRCMBRTXT | *BLANK | 'description')

OPTION Details: see the OPTION parameter description.
Notes:
1. All parameters preceding this point can be specified in positional form.
Purpose
The Create Structured Query Language ILE RPG Object (CRTSQLRPGI) command
calls the Structured Query Language (SQL) precompiler which precompiles RPG
source containing SQL statements, produces a temporary source member, and then
optionally calls the ILE RPG compiler to create a module, create a program, or
create a service program.
Parameters
OBJ
Specifies the qualified name of the object being created.
*CURLIB: The new object is created in the current library for the job. If no
library is specified as the current library for the job, the QGPL library is
used.
library-name: Specify the name of the library where the object is created.
SRCFILE
Specifies the qualified name of the source file that contains the RPG source.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
QRPGLESRC: If the source file name is not specified, the IBM-supplied source
file QRPGLESRC contains the RPG source.
source-file-name: Specify the name of the source file that contains the RPG
source.
SRCMBR
Specifies the name of the source file member that contains the RPG source.
This parameter is specified only if the source file name in the SRCFILE
parameter is a database file. If this parameter is not specified, the OBJ name
specified on the OBJ parameter is used.
*OBJ: Specifies that the RPG source is in the member of the source file that
has the same name as that specified on the OBJ parameter.
*GEN: The precompiler creates the object that is specified by the OBJTYPE
parameter.
*NOGEN: The precompiler does not call the RPG compiler, and a module,
program, service program, or SQL package is not created.
*JOB: The value used as the decimal point for numeric constants in SQL is the
representation of decimal point specified for the job at precompile time.
*SYSVAL: The value used as the decimal point for numeric constants in SQL
statements is the QDECFMT system value.
Note: If QDECFMT specifies that the value used as the decimal point is a
comma (,), any numeric constants in lists (such as in the SELECT clause
or the VALUES clause) must be separated by a comma (,) followed by a
blank ( ). For example, VALUES(1,1, 2,23, 4,1) is equivalent to
VALUES(1.1,2.23,4.1) in which the decimal point is a period (.).
*PERIOD: The value used as the decimal point for numeric constants in SQL
statements is a period (.).
*COMMA: The value used as the decimal point for numeric constants in SQL
statements is a comma (,).
Note: Any numeric constants in lists (such as in the SELECT clause or the
VALUES clause) must be separated by a comma (,) followed by a blank
( ). For example, VALUES(1,1, 2,23, 4,1) is equivalent to
VALUES(1.1,2.23,4.1) where the decimal point is a period (.).
*SECLVL: Second-level text with replacement data is added for all messages
on the listing.
*NOSEQSRC: The source file member created into QSQLTEMP1 has the same
sequence numbers as the original source read by the precompiler.
*NOEVENTF: The compiler will not produce an event file for use by the
Cooperative Development Environment/400 (CODE/400) product.
*NOCVTDT: Date, time and timestamp data types which are retrieved from
externally-described files are to be processed using the native RPG language.
*CVTDT: Date, time and timestamp data types which are retrieved from
externally-described files are to be processed as fixed-length character.
*OPTLOB: The first FETCH for a cursor determines how the cursor will be used
for LOBs (Large Objects) on all subsequent FETCHes. This option remains in
effect until the cursor is closed.
If the first FETCH uses a LOB locator to access a LOB column, no subsequent
FETCH for that cursor can fetch that LOB column into a LOB host variable.
If the first FETCH places the LOB column into a LOB host variable, no
subsequent FETCH for that cursor can use a LOB locator for that column.
In the examples given for the *CURRENT and *PRV values, and when
specifying the release-level value, the format VxRxMx is used to specify the
release, where Vx is the version, Rx is the release, and Mx is the modification
level. For example, V2R3M0 is version 2, release 3, modification level 0.
release-level: Specify the release in the format VxRxMx. The object can be
used on a system with the specified release or with any subsequent release of
the operating system installed.
Valid values depend on the current version, release, and modification level, and
they change with each new release. If you specify a release-level which is
earlier than the earliest release level supported by this command, an error
message is sent indicating the earliest supported release.
OBJTYPE
Specifies the type of object being created.
*PGM: The SQL precompiler issues the CRTBNDRPG command to create the
bound program.
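For example, ILE source can be compiled into a module and bound into a
program in a later step (the object and library names are illustrative):

  CRTSQLRPGI OBJ(MYLIB/ORDMOD) SRCFILE(MYLIB/QRPGLESRC) OBJTYPE(*MODULE)
  CRTPGM PGM(MYLIB/ORDENT) MODULE(MYLIB/ORDMOD)

With OBJTYPE(*PGM), the precompiler issues CRTBNDRPG instead, so the
compile and bind steps are done by the one command.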
INCFILE
Specifies the qualified name of the source file that contains members included
in the program with any SQL INCLUDE statement.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
source-file-name: Specify the name of the source file that contains the source
file members specified on any SQL INCLUDE statement. The record length of
the source file specified here must be no less than the record length of the
source file specified on the SRCFILE parameter.
COMMIT
Specifies whether SQL statements in the compiled unit are run under
commitment control. Files referred to in the host language source are not
affected by this option. Only SQL tables, SQL views, and SQL packages
referred to in SQL statements are affected.
*CS: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows updated, deleted, and inserted are locked until the end of the unit of
work (transaction). A row that is selected, but not updated, is locked until the
next row is selected. Uncommitted changes in other jobs cannot be seen.
*RR: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows selected, updated, deleted, and inserted are locked until the end of the
unit of work (transaction). Uncommitted changes in other jobs cannot be seen.
All tables referred to in SELECT, UPDATE, DELETE, and INSERT statements
are locked exclusively until the end of the unit of work (transaction).
*ENDACTGRP: SQL cursors are closed, SQL prepared statements are implicitly
discarded, and LOCK TABLE locks are released when the activation group
ends.
*ENDMOD: SQL cursors are closed and SQL prepared statements are implicitly
discarded when the module is exited. LOCK TABLE locks are released when
the first SQL program on the call stack ends.
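For example, a service program whose cursors should remain open for the life
of the activation group might be compiled as follows (the names are
illustrative):

  CRTSQLRPGI OBJ(MYLIB/CUSTSRV) OBJTYPE(*SRVPGM) CLOSQLCSR(*ENDACTGRP)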
| ALWCPYDTA
| Specifies whether a copy of the data can be used in a SELECT statement.
| *OPTIMIZE: The system determines whether to use the data retrieved directly
| from the database or to use a copy of the data. The decision is based on which
| method provides the best performance. If COMMIT is *CHG or *CS and
| ALWBLK is not *ALLREAD, or if COMMIT is *ALL or *RR, then a copy of the
| data is used only when it is necessary to run a query.
| *NO: A copy of the data is not allowed. If a temporary copy of the data is
| required to perform the query, an error message is returned.
| ALWBLK
| Specifies whether the database manager can use record blocking, and the
| extent to which blocking can be used for read-only cursors.
| Specifying *ALLREAD:
| v Allows record blocking under commitment control level *CHG in addition to
| the blocking allowed for *READ.
| v Can improve the performance of almost all read-only cursors in programs,
| but limits queries in the following ways:
| – The Rollback (ROLLBACK) command, a ROLLBACK statement in host
| languages, or the ROLLBACK HOLD SQL statement does not reposition a
| read-only cursor when *ALLREAD is specified.
| – Dynamic running of a positioned UPDATE or DELETE statement (for
| example, using EXECUTE IMMEDIATE), cannot be used to update a row
| in a cursor unless the DECLARE statement for the cursor includes the
| FOR UPDATE clause.
| *NONE: Rows are not blocked for retrieval of data for cursors.
| Specifying *NONE:
| v Guarantees that the data retrieved is current.
| v May reduce the amount of time required to retrieve the first row of data for a
| query.
| v Stops the database manager from retrieving a block of data rows that is not
| used by the program when only the first few rows of a query are retrieved
| before the query is closed.
| v Can degrade the overall performance of a query that retrieves a large
| number of rows.
| *READ: Records are blocked for read-only retrieval of data for cursors when:
| v *NONE is specified on the COMMIT parameter, which indicates that
| commitment control is not used.
| v The cursor is declared with a FOR READ ONLY clause or there are no
| dynamic statements that could run a positioned UPDATE or DELETE
| statement for the cursor.
| Specifying *READ can improve the overall performance of queries that meet the
| above conditions and retrieve a large number of records.
DLYPRP
Specifies whether the dynamic statement validation for a PREPARE statement
is delayed until an OPEN, EXECUTE, or DESCRIBE statement is run. Delaying
validation improves performance by eliminating redundant validation.
Note: If you specify *YES, performance is not improved if the INTO clause is
used on the PREPARE statement or if a DESCRIBE statement uses the
dynamic statement before an OPEN is issued for the statement.
GENLVL
Specifies the severity level at which the create operation fails. If errors occur
that have a severity level greater than this value, the operation ends.
DATFMT
*JOB: The format specified for the job is used. Use the Display Job (DSPJOB)
command to determine the current date format for the job.
DATSEP
Note: This parameter applies only when *JOB, *MDY, *DMY, *YMD, or *JUL is
specified on the DATFMT parameter.
*JOB: The date separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
’ ’: A blank ( ) is used.
TIMFMT
Note: An input time string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
TIMSEP
Note: This parameter applies only when *HMS is specified on the TIMFMT
parameter.
*JOB: The time separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
’ ’: A blank ( ) is used.
REPLACE
*NO: A new SQL module, program, service program, or package is not created
if an SQL object of the same name and type already exists in the specified
library.
RDB
Specifies the name of the relational database where the SQL package object is
created.
*NONE: An SQL package object is not created. The program object is not a
distributed program and the Create Structured Query Language Package
(CRTSQLPKG) command cannot be used.
USER
Specifies the user name sent to the remote system when starting the
conversation. This parameter is valid only when RDB is specified.
*CURRENT: The user profile under which the current job is running is used.
user-name: Specify the user name being used for the application server job.
PASSWORD
Specifies the password to be used on the remote system. This parameter is
valid only if RDB is specified.
password: Specify the password of the user name specified on the USER
parameter.
RDBCNNMTH
Specifies the semantics used for CONNECT statements. Refer to the SQL
Reference, SC41-3612 book for more information.
*RUW: CONNECT (Type 1) semantics are used to support remote unit of work.
Consecutive CONNECT statements result in the previous connection being
disconnected before a new connection is established.
DFTRDBCOL
Specifies the collection name used for the unqualified names of tables, views,
indexes, and SQL packages. This parameter applies only to static SQL
statements.
collection-name: Specify the name of the collection identifier. This value is used
instead of the naming convention specified on the OPTION parameter.
| DYNDFTCOL
| Specifies whether the default collection name specified for the DFTRDBCOL
| parameter is also used for dynamic statements.
*OBJ: The name of the SQL package is the same as the object name specified
on the OBJ parameter.
package-name: Specify the name of the SQL package. If the remote system is
not an AS/400 system, no more than 8 characters can be specified.
| SQLPATH
| Specifies the path to be used to find procedures, functions, and user defined
| types in static SQL statements.
| *NAMING: The path used depends on the naming convention specified on the
| OPTION parameter.
| For *SYS naming, the path used is *LIBL, the current library list at runtime.
| For *SQL naming, the path used is ″QSYS″, ″QSYS2″, ″userid″, where ″userid″
| is the value of the USER special register. If a collection-name is specified on
| the DFTRDBCOL parameter, the collection-name takes the place of userid.
SAAFLAG
*NOFLAG: The precompiler does not check to see whether SQL statements
conform to IBM SQL syntax.
FLAGSTD
*NONE: The precompiler does not check to see whether SQL statements
conform to ANSI standards.
DBGVIEW
*SOURCE: The SQL precompiler provides source views for the root source
and, if necessary, for SQL INCLUDE statements. A view is also provided that
contains the statements generated by the precompiler.
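For example, to compile with a source debug view and then debug the resulting
program (the object and library names are illustrative):

  CRTSQLRPGI OBJ(MYLIB/ORDENT) OBJTYPE(*PGM) DBGVIEW(*SOURCE)
  STRDBG PGM(MYLIB/ORDENT)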
USRPRF
Specifies the user profile that is used when the compiled program object is run,
including the authority that the program object has for each object in static SQL
statements. The profile of either the program owner or the program user is used
to control which objects can be used by the program object.
*USER: The profile of the user running the program object is used.
*OWNER: The user profiles of both the program owner and the program user
are used when the program is run.
DYNUSRPRF
Specifies the user profile to be used for dynamic SQL statements.
*USER: Local dynamic SQL statements are run under the user profile of the
program’s user. Distributed dynamic SQL statements are run under the user
profile of the SQL package’s user.
*OWNER: Local dynamic SQL statements are run under the user profile of the
program’s owner. Distributed dynamic SQL statements are run under the user
profile of the SQL package’s owner.
SRTSEQ
Specifies the sort sequence table to be used for string comparisons in SQL
statements.
*JOBRUN: The SRTSEQ value for the job is retrieved when the program is run.
For distributed applications, SRTSEQ(*JOBRUN) is valid only when
LANGID(*JOBRUN) is also specified.
*LANGIDUNQ: The unique-weight sort table for the language specified on the
LANGID parameter is used.
*LANGIDSHR: The sort sequence table uses the same weight for multiple
characters, and is the shared-weight sort sequence table associated with the
language specified on the LANGID parameter.
*HEX: A sort sequence table is not used. The hexadecimal values of the
characters are used to determine the sort sequence.
The name of the sort sequence table can be qualified by one of the following
library values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
table-name: Specify the name of the sort sequence table to be used.
LANGID
Specifies the language identifier to be used when SRTSEQ(*LANGIDUNQ) or
SRTSEQ(*LANGIDSHR) is specified.
*JOB: The LANGID value for the job is retrieved during the precompile.
*JOBRUN: The LANGID value for the job is retrieved when the program is run.
For distributed applications, LANGID(*JOBRUN) is valid only when
SRTSEQ(*JOBRUN) is also specified.
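For example, the following command (the object name is illustrative) compiles a
program that compares strings using the shared-weight sort sequence table for
language identifier ENU:
CRTSQLRPGI OBJ(MYLIB/PAYROLL) OBJTYPE(*PGM) SRTSEQ(*LANGIDSHR) LANGID(ENU)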
PRTFILE
Specifies the qualified name of the printer device file to which the precompiler
printout is directed.
The name of the printer file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
printer-file-name: Specify the name of the printer device file to which the
precompiler printout is directed.
| TOSRCFILE
| Specifies the qualified name of the source file that is to contain the output
| source member that has been processed by the SQL precompiler. If the
| specified source file is not found, it will be created. The output member will
| have the same name as the name that is specified for the SRCMBR parameter.
| source-file-name: Specify the name of the source file to contain the output
| source member.
TEXT
Specifies the text that briefly describes the program and its function. More
information on this parameter is located in Appendix A, "Expanded Parameter
Descriptions" in the CL Reference book.
*SRCMBRTXT: The text is taken from the source file member being used to
create the RPG program. Text can be added or changed for a database source
member by using the Start Source Entry Utility (STRSEU) command, or by
using either the Add Physical File Member (ADDPFM) or Change Physical File
Member (CHGPFM) command. If the source file is an inline file or a device file,
the text is blank.
Example
CRTSQLRPGI PAYROLL OBJTYPE(*PGM) TEXT('Payroll Program')
This command runs the SQL precompiler which precompiles the source and stores
the changed source in member PAYROLL in file QSQLTEMP1 in library QTEMP.
The ILE RPG compiler is called to create program PAYROLL in the current library
by using the source member created by the SQL precompiler.
CRTSQLPKG (Create Structured Query Language Package) Command
*LIBL/
ÊÊ CRTSQLPKG PGM( program-name ) Ê
*CURLIB/
library-name/
(1)
Ê Ê
*PGM
RDB( relational-database-name )
Ê Ê
*CURRENT *NONE
USER( user-name ) PASSWORD( password )
Ê Ê
10 *YES
GENLVL( severity-level ) REPLACE( *NO )
Ê Ê
*PGM
DFTRDBCOL( *NONE )
collection-name
Ê Ê
*LIBL/ QSYSPRT
PRTFILE( printer-file-name )
*CURLIB/
library-name/
Ê Ê
*PGM
OBJTYPE( *SRVPGM )
Ê Ê
*ALL
(2)
MODULE( · module-name )
Ê ÊÍ
*PGMTXT
TEXT( *BLANK )
'description'
Purpose
The Create Structured Query Language Package (CRTSQLPKG) command is used
to create (or re-create) an SQL package on a relational database from an existing
distributed SQL program. A distributed SQL program is a program created by
specifying the RDB parameter on a CRTSQLxxx (where xxx = C, CI, CBL, CBLI,
FTN, PLI, or RPG or RPGI) command.
Parameters
PGM
Specifies the qualified name of the program for which the SQL package is being
created. The program must be a distributed SQL program.
The name of the program can be qualified by one of the following library values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
program-name: Specify the name of the program for which the package is being
created.
RDB
Specifies the name of the relational database where the SQL package is being
created.
*PGM: The relational database name specified for the SQL program is used.
The relational database name is specified on the RDB parameter of the
distributed SQL program.
USER
Specifies the user name sent to the remote system when starting the
conversation.
*CURRENT: The user name associated with the current job is used.
user-name: Specify the user name being used for the application server job.
PASSWORD
Specifies the password to be used on the remote system.
GENLVL
Specifies the severity level at which the create operation fails. If errors occur
that have a severity level greater than this value, the operation ends.
severity-level: Specify the maximum severity level. Valid values range from 0
through 40.
REPLACE
Specifies whether an existing package is being replaced with the new package.
More information on this parameter is in Appendix A, "Expanded Parameter
Descriptions" in the CL Reference book.
*YES: An existing SQL package of the same name is replaced by the new SQL
package.
*NO: An existing SQL package of the same name is not replaced; a new SQL
package is not created if the package already exists in the specified library.
DFTRDBCOL
Specifies the collection name to be used for unqualified names of tables, views,
indexes, and SQL packages. This parameter applies only to static SQL
statements in the package.
*PGM: The collection name specified for the SQL program is used. The default
relational database collection name is specified on the DFTRDBCOL parameter
of the distributed SQL program.
*NONE: Unqualified names for tables, views, indexes, and SQL packages use
the search conventions specified on the OPTION parameter of the CRTSQLxxx
command used to create the program.
collection-name: Specify the collection name that is used for unqualified tables,
views, indexes, and SQL packages.
PRTFILE
Specifies the qualified name of the printer device file to which the create SQL
package error listing is directed. If no errors are detected during the creation of
the SQL package, no listing is produced.
The name of the printer file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
QSYSPRT: If a file name is not specified, the create SQL package error listing
is directed to the IBM-supplied printer file QSYSPRT.
*PGM: Create an SQL package from the program specified on the PGM
parameter.
*SRVPGM: Create an SQL package from the service program specified on the
PGM parameter.
MODULE
Specifies a list of modules in a bound program.
*ALL: An SQL package is created for each module in the program. An error
message is sent if none of the modules in the program contain SQL statements
or none of the modules is a distributed module.
Note: CRTSQLPKG can process programs that contain no more than 1024
modules.
Duplicate module names in the same program are allowed. This command
looks at each module in the program and if *ALL or the module name is
specified on the MODULE parameter, processing continues to determine
whether an SQL package should be created. If the module is created using
SQL and the RDB parameter is specified on the precompile command, an SQL
package is created for the module. The SQL package is associated with the
module of the bound program.
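For example, the following command (the program, module, and relational
database names are illustrative) creates SQL packages only for the named
modules of the bound program:
CRTSQLPKG PGM(MYLIB/PAYROLL) RDB(SYSTEMA) MODULE(PAYMOD1 PAYMOD2)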
TEXT
Specifies text that briefly describes the SQL package and its function.
*PGMTXT: The text from the program for which the SQL package is being
created is used.
Example
CRTSQLPKG PAYROLL RDB(SYSTEMA)
TEXT('Payroll Program')
This command creates an SQL package from the distributed SQL program
| PAYROLL on relational database SYSTEMA.
| CVTSQLCPP (Convert Structured Query Language C++ Source) Command
| *LIBL/
| ÊÊ CVTSQLCPP SRCFILE( source-file-name ) Ê
*CURLIB/
library-name/
| *OBJ (1)
| Ê SRCMBR( source-file-member-name ) Ê
| Ê Ê
| *LIBL/ QSQLTEMP
TOSRCFILE( source-file-name )
*CURLIB/
library-name/
| Ê Ê
| OPTION( OPTION Details ) *CURRENT
TGTRLS( VxRxMx )
| Ê Ê
| *LIBL/ *SRCFILE
INCFILE( source-file-name )
*CURLIB/
library-name/
| Ê Ê
| *UR *ENDACTGRP
*CHG CLOSQLCSR( *ENDMOD )
COMMIT( *ALL )
*RS
*CS
*NONE
*NC
*RR
| Ê Ê
| *OPTIMIZE *ALLREAD
ALWCPYDTA( *YES ) ALWBLK( *NONE )
*NO *READ
| Ê Ê
| *NO 10
DLYPRP( *YES ) GENLVL( severity-level )
| Ê Ê
| *JOB *HMS
DATSEP( '/' ) TIMFMT( *USA )
'.' *ISO
',' *EUR
'-' *JIS
' '
*BLANK
| Ê Ê
| *JOB
TIMSEP( ':' )
'.'
','
' '
*BLANK
|
| Ê Ê
| *LOCAL *CURRENT
RDB( relational-database-name ) USER( user-name )
*NONE
| Ê Ê
| *NONE *DUW
PASSWORD( password ) RDBCNNMTH( *RUW )
| Ê Ê
| *NONE *NO
DFTRDBCOL( collection-name ) DYNDFTCOL( *YES )
| Ê Ê
| *OBJLIB/ *OBJ
SQLPKG( package-name )
library-name/
· collection-name
| Ê Ê
| *NOFLAG *NONE
SAAFLAG( *FLAG ) FLAGSTD( *ANS )
| Ê Ê
| *NONE *NAMING
DBGVIEW( *SOURCE ) USRPRF( *OWNER )
*USER
| Ê Ê
| *USER
DYNUSRPRF( *OWNER )
| Ê Ê
| *JOB
SRTSEQ( *JOBRUN )
*LANGIDUNQ
*LANGIDSHR
*HEX
*LIBL/
table-name
*CURLIB/
library-name/
| Ê Ê
| *JOB *NONE
LANGID( *JOBRUN ) OUTPUT( *PRINT )
language-identifier
| Ê Ê
| *LIBL/ QSYSPRT
PRTFILE( printer-file-name )
*CURLIB/
library-name/
| Ê ÊÍ
| *SRCMBRTXT
TEXT( *BLANK )
'description'
| OPTION Details:
| *NOEVENTF *OPTLOB
| Ê
*EVENTF *NOOPTLOB
|
| Notes:
| 1. All parameters preceding this point can be specified in positional form.
| Purpose
| The Convert Structured Query Language C++ Source (CVTSQLCPP) command
| calls the Structured Query Language (SQL) precompiler. The precompiler
| precompiles C++ source that contains SQL statements, and produces a temporary
| source member. This source member can then be provided as input to the
| VisualAge C++ for OS/400 compiler.
| Parameters
| SRCFILE
| Specifies the qualified name of the source file that contains the C++ source with
| SQL statements.
| One of the following library values can qualify the name of the source file:
| *LIBL: All libraries in the job’s library list are searched until the first match is
| found.
| *CURLIB: The current library for the job is searched. If no library is specified
| as the current library for the job, the QGPL library is used.
| library-name: Specify the name of the library to be searched.
| source-file-name: Specify the name of the source file that contains the C++
| source with SQL statements.
| SRCMBR
| Specifies the name of the source file member that contains the C++ source.
| TOSRCFILE
| Specifies the qualified name of the source file that is to contain the output C++
| source member that has been processed by the SQL C++ precompiler. If the
| specified source file is not found, it will be created. The output member will
| have the same name as the name specified for the SRCMBR parameter.
| The name of the source file can be qualified by one of the following library
| values:
| *LIBL: The job’s library list is searched for the specified file. If the file is not
| found in any library in the library list, the file will be created in the current
| library.
| *CURLIB: The current library for the job will be used. If no library is
| specified as the current library for the job, the QGPL library is used.
| *JOB: The value used as the decimal point for numeric constants in SQL is the
| representation of decimal point specified for the job at precompile time.
| Note: If the job decimal point value specifies that the value used as the
| decimal point is a comma, any numeric constants in lists (such as in the
| SELECT clause or the VALUES clause) must be separated by a comma
| followed by a blank. For example, VALUES(1,1, 2,23, 4,1) is equivalent
| to VALUES(1.1,2.23,4.1) in which the decimal point is a period.
| *PERIOD: The value used as the decimal point for numeric constants in SQL
| statements is a period.
| *COMMA: The value used as the decimal point for numeric constants in SQL
| statements is a comma.
| Note: Any numeric constants in lists (such as in the SELECT clause or the
| VALUES clause) must be separated by a comma followed by a blank.
| For example, VALUES(1,1, 2,23, 4,1) is equivalent to
| VALUES(1.1,2.23,4.1) where the decimal point is a period.
| *SECLVL: Second-level text with replacement data is added for all messages
| on the listing.
| *CNULRQD: Output character and graphic host variables always contain the
| NUL-terminator. If there is not enough space for the NUL-terminator, the data is
| truncated and the NUL-terminator is added. Input character and graphic host
| variables require a NUL-terminator.
| *NOEVENTF: The compiler will not produce an event file for use by
| CoOperative Development Environment/400 (CODE/400).
| *OPTLOB: The first FETCH for a cursor determines how the cursor will be used
| for LOBs (Large Objects) on all subsequent FETCHes. This option remains in
| effect until the cursor is closed.
| If the first FETCH uses a LOB locator to access a LOB column, no subsequent
| FETCH for that cursor can fetch that LOB column into a LOB host variable.
| If the first FETCH places the LOB column into a LOB host variable, no
| subsequent FETCH for that cursor can use a LOB locator for that column.
| In the examples given for the *CURRENT and *PRV values, and when
| specifying the release-level value, the format VxRxMx is used to specify the
| release, where Vx is the version, Rx is the release, and Mx is the modification
| level. For example, V2R3M0 is version 2, release 3, modification level 0.
| Valid values depend on the current version, release, and modification level, and
| they change with each new release. If you specify a release-level which is
| earlier than the earliest release level supported by this command, an error
| message is sent indicating the earliest supported release.
| INCFILE
| Specifies the qualified name of the source file that contains members included
| in the program with any SQL INCLUDE statement.
| The name of the source file can be qualified by one of the following library
| values:
| *LIBL: All libraries in the job’s library list are searched until the first match is
| found.
| *CURLIB: The current library for the job is searched. If no library is specified
| as the current library for the job, the QGPL library is used.
| library-name: Specify the name of the library to be searched.
| source-file-name: Specify the name of the source file that contains the source
| file members specified on any SQL INCLUDE statement. The record length of
| the source file specified here must be no less than the record length of the
| source file specified on the SRCFILE parameter.
| COMMIT
| Specifies whether SQL statements in the compiled unit are run under
| commitment control. Files referred to in the host language source are not
| affected by this option. Only SQL tables, SQL views, and SQL packages
| referred to in SQL statements are affected.
| *CS: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
| CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
| the rows updated, deleted, and inserted are locked until the end of the unit of
| work (transaction). A row that is selected, but not updated, is locked until the
| next row is selected. Uncommitted changes in other jobs cannot be seen.
| *RR: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
| CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
| the rows selected, updated, deleted, and inserted are locked until the end of the
| unit of work (transaction). Uncommitted changes in other jobs cannot be seen.
| All tables referred to in SELECT, UPDATE, DELETE, and INSERT statements
| are locked exclusively until the end of the unit of work (transaction).
| CLOSQLCSR
| Specifies when SQL cursors are implicitly closed, SQL prepared statements are
| implicitly discarded, and LOCK TABLE locks are released. SQL cursors are
| explicitly closed when you issue the CLOSE, COMMIT, or ROLLBACK (without
| HOLD) SQL statements.
| *ENDACTGRP: SQL cursors are closed, SQL prepared statements are implicitly
| discarded, and LOCK TABLE locks are released when the activation group
| ends.
| *ENDMOD: SQL cursors are closed and SQL prepared statements are implicitly
| discarded when the module is exited. LOCK TABLE locks are released when
| the first SQL program on the call stack ends.
| ALWCPYDTA
| Specifies whether a copy of the data can be used in a SELECT statement.
| *OPTIMIZE: The system determines whether to use the data retrieved directly
| from the database or to use a copy of the data. The decision is based on which
| method provides the best performance. If COMMIT is *CHG or *CS and
| ALWBLK is not *ALLREAD, or if COMMIT is *ALL or *RR, then a copy of the
| data is used only when it is necessary to run a query.
| *NO: A copy of the data is not allowed. If a temporary copy of the data is
| required to perform the query, an error message is returned.
| ALWBLK
| Specifies whether the database manager can use record blocking, and the
| extent to which blocking can be used for read-only cursors.
| Specifying *ALLREAD:
| v Allows record blocking under commitment control level *CHG in addition to
| the blocking allowed for *READ.
| v Can improve the performance of almost all read-only cursors in programs,
| but limits queries in the following ways:
| – The Rollback (ROLLBACK) command, a ROLLBACK statement in host
| languages, or the ROLLBACK HOLD SQL statement does not reposition a
| read-only cursor when *ALLREAD is specified.
| *NONE: Rows are not blocked for retrieval of data for cursors.
| Specifying *NONE:
| v Guarantees that the data retrieved is current.
| v May reduce the amount of time required to retrieve the first row of data for a
| query.
| v Stops the database manager from retrieving a block of data rows that is not
| used by the program when only the first few rows of a query are retrieved
| before the query is closed.
| v Can degrade the overall performance of a query that retrieves a large
| number of rows.
| *READ: Records are blocked for read-only retrieval of data for cursors when:
| v *NONE is specified on the COMMIT parameter, which indicates that
| commitment control is not used.
| v The cursor is declared with a FOR READ ONLY clause or there are no
| dynamic statements that could run a positioned UPDATE or DELETE
| statement for the cursor.
| Specifying *READ can improve the overall performance of queries that meet the
| above conditions and retrieve a large number of records.
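| For example, a cursor declared with an explicit FOR READ ONLY clause meets
| the second condition (the cursor, table, and column names are illustrative):
| DECLARE C1 CURSOR FOR
| SELECT EMPNO, LASTNAME
| FROM CORPDATA.EMPLOYEE
| FOR READ ONLY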
| DLYPRP
| Specifies whether the dynamic statement validation for a PREPARE statement
| is delayed until an OPEN, EXECUTE, or DESCRIBE statement is run. Delaying
| validation improves performance by eliminating redundant validation.
| Note: If you specify *YES, performance is not improved if the INTO clause is
| used on the PREPARE statement or if a DESCRIBE statement uses the
| dynamic statement before an OPEN is issued for the statement.
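| For example, with DLYPRP(*YES) the following sequence (statement names are
| illustrative) defers validation until the statement is run:
| PREPARE S1 FROM :STMTTEXT
| EXECUTE S1
| Any error in the prepared statement text is reported when EXECUTE is run
| rather than when PREPARE is run.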
| GENLVL
| Specifies the severity level at which the create operation fails. If errors occur
| that have a severity level greater than this value, the operation ends.
| *SRCFILE: The file member margin values specified by the user on the
| SRCMBR parameter are used. The margin default values are 1 and 80.
| left: Specify the beginning position for the statements. Valid values range from 1
| through 80.
| right: Specify the ending position for the statements. Valid values range from 1
| through 80.
| DATFMT
| Specifies the format used when accessing date result columns. All output date
| fields are returned in the specified format. For input date strings, the specified
| value is used to determine whether the date is specified in a valid format.
| Note: An input date string that uses the format *USA, *ISO, *EUR, or *JIS is
| always valid.
| *JOB: The format specified for the job is used. Use the Display Job (DSPJOB)
| command to determine the current date format for the job.
| Note: This parameter applies only when *JOB, *MDY, *DMY, *YMD, or *JUL is
| specified on the DATFMT parameter.
| ’ ’: A blank ( ) is used.
| Note: An input time string that uses the format *USA, *ISO, *EUR, or *JIS is
| always valid.
| Note: This parameter applies only when *HMS is specified on the TIMFMT
| parameter.
| *JOB: The time separator specified for the job at precompile time is used. Use
| the Display Job (DSPJOB) command to determine the current value for the job.
| ’ ’: A blank ( ) is used.
| RDB
| Specifies the name of the relational database where the SQL package object is
| created.
| *NONE: An SQL package object is not created. The program object is not a
| distributed program and the Create Structured Query Language Package
| (CRTSQLPKG) command cannot be used.
| USER
| Specifies the user name sent to the remote system when starting the
| conversation. This parameter is valid only when RDB is specified.
| *CURRENT: The user profile under which the current job is running is used.
| user-name: Specify the user name being used for the application server job.
| PASSWORD
| Specifies the password to be used on the remote system. This parameter is
| valid only if RDB is specified.
| password: Specify the password of the user name specified on the USER
| parameter.
| RDBCNNMTH
| Specifies the semantics used for CONNECT statements. Refer to the SQL
| Reference, SC41-3612 book for more information.
| *RUW: CONNECT (Type 1) semantics are used to support remote unit of work.
| Consecutive CONNECT statements result in the previous connection being
| disconnected before a new connection is established.
| DFTRDBCOL
| Specifies the collection name used for the unqualified names of tables, views,
| indexes, and SQL packages. This parameter applies only to static SQL
| statements.
| *NO: Do not use the value specified on the DFTRDBCOL parameter for
| unqualified names of tables, views, indexes, and SQL packages for dynamic
| SQL statements. The naming convention specified on the OPTION parameter is
| used.
| *OBJ: The name of the SQL package is the same as the object name specified
| on the OBJ parameter.
| package-name: Specify the name of the SQL package. If the remote system is
| not an AS/400 system, no more than 8 characters can be specified.
| SQLPATH
| Specifies the path to be used to find procedures, functions, and user defined
| types in static SQL statements.
| *NAMING: The path used depends on the naming convention specified on the
| OPTION parameter.
| For *SYS naming, the path used is *LIBL, the current library list at runtime.
| For *SQL naming, the path used is "QSYS", "QSYS2", "userid", where "userid"
| is the value of the USER special register. If a collection-name is specified on
| the DFTRDBCOL parameter, the collection-name takes the place of userid.
| SAAFLAG
| Specifies the IBM SQL flagging function.
| *NOFLAG: The precompiler does not check to see whether SQL statements
| conform to IBM SQL syntax.
| FLAGSTD
| Specifies the ANSI flagging function.
| *NONE: The precompiler does not check to see whether SQL statements
| conform to ANSI standards.
| DBGVIEW
| Specifies the type of source debug information to be provided by the SQL
| precompiler.
| *SOURCE: The SQL precompiler provides the source views for the root and if
| necessary, SQL INCLUDE statements. A view is provided that contains the
| statements generated by the precompiler.
| USRPRF
| Specifies the user profile that is used when the compiled program object is run,
| including the authority that the program object has for each object in static SQL
| statements. The profile of either the program owner or the program user is used
| to control which objects can be used by the program object.
| *USER: The profile of the user running the program object is used.
| *OWNER: The user profiles of both the program owner and the program user
| are used when the program is run.
| DYNUSRPRF
| Specifies the user profile to be used for dynamic SQL statements.
| *USER: Local dynamic SQL statements are run under the profile of the
| program’s user. Distributed dynamic SQL statements are run under the profile
| of the SQL package’s user.
| *OWNER: Local dynamic SQL statements are run under the profile of the
| program’s owner. Distributed dynamic SQL statements are run under the profile
| of the SQL package’s owner.
| SRTSEQ
| Specifies the sort sequence table to be used for string comparisons in SQL
| statements.
| *JOB: The SRTSEQ value for the job is retrieved during the precompile.
| *JOBRUN: The SRTSEQ value for the job is retrieved when the program is run.
| For distributed applications, SRTSEQ(*JOBRUN) is valid only when
| LANGID(*JOBRUN) is also specified.
| *HEX: A sort sequence table is not used. The hexadecimal values of the
| characters are used to determine the sort sequence.
| *LANGIDSHR: The sort sequence table uses the same weight for multiple
| characters, and is the shared-weight sort sequence table associated with the
| language specified on the LANGID parameter.
| *LANGIDUNQ: The unique-weight sort table for the language specified on the
| LANGID parameter is used.
| The name of the sort sequence table can be qualified by one of the following
| library values:
| *LIBL: All libraries in the job’s library list are searched until the first match is
| found.
| *CURLIB: The current library for the job is searched. If no library is specified
| as the current library for the job, the QGPL library is used.
| library-name: Specify the name of the library to be searched.
| LANGID
| Specifies the language identifier to be used when SRTSEQ(*LANGIDUNQ) or
| SRTSEQ(*LANGIDSHR) is specified.
| *JOB: The LANGID value for the job is retrieved during the precompile.
| *JOBRUN: The LANGID value for the job is retrieved when the program is run.
| For distributed applications, LANGID(*JOBRUN) is valid only when
| SRTSEQ(*JOBRUN) is also specified.
| The name of the printer file can be qualified by one of the following library
| values:
| *LIBL: All libraries in the job’s library list are searched until the first match is
| found.
| *CURLIB: The current library for the job is searched. If no library is specified
| as the current library for the job, the QGPL library is used.
| library-name: Specify the name of the library to be searched.
| printer-file-name: Specify the name of the printer device file to which the
| precompiler printout is directed.
| TEXT
| Specifies the text that briefly describes the program and the function. More
| information on this parameter is in Appendix A, "Expanded Parameter
| Descriptions" in the CL Reference book.
| *SRCMBRTXT: The text is taken from the source file member being used as
| the text for the output source member. Text can be added or changed for a
| database source member by using the Start Source Entry Utility (STRSEU)
| command, or by using either the Add Physical File Member (ADDPFM)
| command or the Change Physical File Member (CHGPFM) command. If the
| source file is an inline file or a device file, the text is blank.
| Example
| CVTSQLCPP SRCFILE(PAYROLL) SRCMBR(PAYROLL)
| TOSRCFILE(MYLIB/MYSRCFILE) TEXT('Payroll Program')
| This command runs the SQL precompiler which precompiles the source and stores
| the changed source in member PAYROLL in file MYSRCFILE in library MYLIB. No
| module or program object is created.
DLTSQLPKG (Delete Structured Query Language Package) Command
ÊÊ DLTSQLPKG Ê
*LIBL/ (1)
Ê SQLPKG( SQL-package-name ) ÊÍ
*CURLIB/ generic*-SQL-package name
*USRLIBL/
*ALL/
*ALLUSR/
library-name/
Notes:
1. All parameters preceding this point can be specified in positional form.
Purpose
The Delete Structured Query Language Package (DLTSQLPKG) command deletes
one or more SQL packages.
DLTSQLPKG is a local command and must be used on the AS/400 system where
the SQL package being deleted is located.
To delete an SQL package on a remote system that is also an AS/400 system, use
the Submit Remote Command (SBMRMTCMD) command to run the DLTSQLPKG
command on the remote system.
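For example, the following command (the DDM file and package names are
illustrative) deletes an SQL package on the remote system associated with the
DDM file:
SBMRMTCMD CMD('DLTSQLPKG SQLPKG(MYLIB/PAYROLL)') DDMFILE(MYLIB/REMOTESYS)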
The user can do the following to delete an SQL package from a remote system that
is not an AS/400 system:
v Use interactive SQL to run the CONNECT and DROP PACKAGE operations.
v Sign on the remote system and use a command local to that system.
v Create and run an SQL program that contains a DROP PACKAGE SQL
statement.
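Using interactive SQL, the statements might look like this (the relational
database and package names are illustrative):
CONNECT TO SYSTEMB
DROP PACKAGE MYLIB.PAYROLL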
Parameters
SQLPKG
Specifies the qualified name of the SQL package being deleted. A specific or
generic SQL package name can be specified.
The name of the SQL Package can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified as
the current library for the job, the QGPL library is used.
*USRLIBL: Only the libraries in the user portion of the job’s library list are
searched.
*ALLUSR: All user libraries are searched. All libraries with names that do not
begin with the letter Q are searched except for the following:
#CGULIB #DFULIB #RPGLIB #SEULIB
#COBLIB #DSULIB #SDALIB
Although the following Qxxx libraries are provided by IBM, they typically contain
user data that changes frequently. Therefore, these libraries are considered
user libraries and are also searched:
QDSNX QRCL QUSRBRM QUSRSYS
QGPL QS36F QUSRIJS QUSRVxRxMx
QGPL38 QUSER38 QUSRINFSKR
QPFRDATA QUSRADSM QUSRRDARS
Example
DLTSQLPKG SQLPKG(JONES)
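A generic name can also be specified. For example, the following command (the
library name and name prefix are illustrative) deletes all SQL packages in
library MYLIB whose names begin with PAY:
DLTSQLPKG SQLPKG(MYLIB/PAY*)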
PRTSQLINF (Print Structured Query Language Information) Command
*LIBL/
ÊÊ PRTSQLINF OBJ( object-name ) Ê
*CURLIB/
library-name/
(1)
Ê ÊÍ
*PGM
OBJTYPE( *SQLPKG )
*SRVPGM
Notes:
1. All parameters preceding this point can be specified in positional form.
Purpose
The Print Structured Query Language Information (PRTSQLINF) command prints
information about the embedded SQL statements in a program, SQL package, or
service program. The information includes the SQL statements, the access plans
used during the running of the statement, and a list of the command parameters
used to precompile the source member for the object.
Parameters
OBJ
Specifies the name of the program or SQL package for which you want SQL
information printed.
The name of the object can be qualified by one of the following library values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
object-name: Specify the name of the program or SQL package for which you
want information printed.
OBJTYPE
Specifies the type of object.
*PGM: The SQL information is printed for a program.
*SQLPKG: The SQL information is printed for an SQL package.
*SRVPGM: The SQL information is printed for a service program.
Example
Example 1: Printing SQL Information
PRTSQLINF PAYROLL
This command will print information about the SQL statements contained in program
PAYROLL.
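A second example (the library and package names are illustrative):
Example 2: Printing SQL Package Information
PRTSQLINF OBJ(MYLIB/PAYROLL) OBJTYPE(*SQLPKG)
This command prints information about the SQL statements contained in SQL
package PAYROLL in library MYLIB.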
RUNSQLSTM (Run Structured Query Language Statement) Command
*LIBL/
ÊÊ RUNSQLSTM SRCFILE ( source-file-name ) Ê
*CURLIB/
library-name/
(1)
Ê SRCMBR ( source-file-member-name ) Ê
*UR
*CHG
COMMIT ( *ALL )
*RS
*CS
*NONE
*NC
*RR
Ê Ê
*SYS *RUN
NAMING ( *SQL ) PROCESS( *SYN )
Ê Ê
*OPTIMIZE *ALLREAD
ALWCPYDTA ( *YES ) ALWBLK ( *NONE )
*NO *READ
Ê Ê
*JOB *HMS
DATSEP ( '/' ) TIMFMT ( *USA )
'.' *ISO
',' *EUR
'-' *JIS
' '
*BLANK
Ê Ê
*JOB *SYSVAL
TIMSEP ( ':' ) *JOB
'.' DECMPT ( *PERIOD )
',' *COMMA
' '
*BLANK
Ê Ê
*JOB
SRTSEQ ( *LANGIDUNQ )
*LANGIDSHR
*HEX
*LIBL/
table-name
*CURLIB/
library-name/
Ê Ê
*JOB
LANGID ( language-identifier )
Ê Ê
*NONE *NONE
DFTRDBCOL ( collection-name ) FLAGSTD ( *ANS )
Ê Ê
*NOFLAG
SAAFLAG ( *FLAG )
SQL-procedure-parameters:
Ê
*CURRENT *ENDACTGRP
TGTRLS ( VxRxMx ) CLOSQLCSR ( *ENDMOD )
Ê Ê
*NONE *NONE
OUTPUT ( *PRINT ) DBGVIEW ( *STMT )
*LIST
Ê Ê
*NAMING *USER
USRPRF ( *OWNER ) DYNUSRPRF ( *OWNER )
*USER
Ê
*NO
DLYPRP ( *YES )
Notes:
1. All parameters preceding this point can be specified in positional form.
Purpose
The Run Structured Query Language Statement (RUNSQLSTM) command
processes a source file of SQL statements.
Parameters
SRCFILE
Specifies the qualified name of the source file that contains the SQL statements
to be run.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
source-file-name: Specify the name of the source file that contains the SQL
statements to be run. The source file can be a database file or an inline data
file.
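For example, the following command (the file and member names are illustrative)
runs the SQL statements in member CREATETBL without commitment control:
RUNSQLSTM SRCFILE(MYLIB/QSQLSRC) SRCMBR(CREATETBL) COMMIT(*NONE)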
COMMIT
Specifies whether the SQL statements are run under commitment control.
*CS: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows updated, deleted, and inserted are locked until the end of the unit of
work (transaction). A row that is selected, but not updated, is locked until the
next row is selected. Uncommitted changes in other jobs cannot be seen.
*RR: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows selected, updated, deleted, and inserted are locked until the end of the
unit of work (transaction). Uncommitted changes in other jobs cannot be seen.
All tables referred to in SELECT, UPDATE, DELETE, and INSERT statements
are locked exclusively until the end of the unit of work (transaction).
NAMING
Specifies the naming convention used for naming objects in SQL statements.
ALWCPYDTA
Specifies whether a copy of the data can be used in a SELECT statement.
*NO: A copy of the data is not used. If a temporary copy of the data is required to
perform the query, an error message is returned.
ALWBLK
Specifies whether the database manager can use record blocking, and the
extent to which blocking can be used for read-only cursors.
Specifying *ALLREAD:
v Allows record blocking under commitment control level *CHG in addition to
the blocking allowed for *READ.
v Can improve the performance of almost all read-only cursors in programs,
but limits queries in the following ways:
– The Rollback (ROLLBACK) command, a ROLLBACK statement in host
languages, or the ROLLBACK HOLD SQL statement does not reposition a
read-only cursor when *ALLREAD is specified.
– Dynamic running of a positioned UPDATE or DELETE statement (for
example, using EXECUTE IMMEDIATE), cannot be used to update a row
in a cursor unless the DECLARE statement for the cursor includes the
FOR UPDATE clause.
*NONE: Rows are not blocked for retrieval of data for cursors.
Specifying *NONE:
v Guarantees that the data retrieved is current.
v May reduce the amount of time required to retrieve the first row of data for a
query.
v Stops the database manager from retrieving a block of data rows that is not
used by the program when only the first few rows of a query are retrieved
before the query is closed.
v Can degrade the overall performance of a query that retrieves a large
number of rows.
*READ: Records are blocked for read-only retrieval of data for cursors when:
v *NONE is specified on the COMMIT parameter, which indicates that
commitment control is not used.
v The cursor is declared with a FOR FETCH ONLY clause or there are no
dynamic statements that could run a positioned UPDATE or DELETE
statement for the cursor.
Specifying *READ can improve the overall performance of queries that meet the
above conditions and retrieve a large number of records.
10: Statement processing is stopped when error messages with a severity level
greater than 10 are received.
Note: An input date string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
*JOB: The format specified for the job is used. Use the Display Job (DSPJOB)
command to determine the current date format for the job.
Note: This parameter applies only when *JOB, *MDY, *DMY, *YMD, or *JUL is
specified on the DATFMT parameter.
*JOB: The date separator specified for the job is used. Use the Display Job
(DSPJOB) command to determine the current value for the job.
Note: An input date string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
*USA: The United States time format hh:mm xx is used, where xx is AM or PM.
Note: This parameter applies only when *HMS is specified on the TIMFMT
parameter.
*JOB: The time separator specified for the job is used. Use the Display Job
(DSPJOB) command to determine the current value for the job.
’ ’: A blank ( ) is used.
*JOB: The value used as the decimal point for numeric constants in SQL is the
representation of decimal point specified by the job running the statement.
*LANGIDSHR: The sort sequence table uses the same weight for multiple
characters, and is the shared-weight sort sequence table associated with the
language specified on the LANGID parameter.
*LANGIDUNQ: The unique-weight sort table for the language specified on the
LANGID parameter is used.
*HEX: A sort sequence table is not used. The hexadecimal values of the
characters are used to determine the sort sequence.
The name of the sort sequence table can be qualified by one of the following
library values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
*JOB: The LANGID value for the job is retrieved during the precompile.
collection-name: Specify the name of the collection identifier. This value is used
instead of the naming convention specified on the NAMING parameter.
FLAGSTD
Specifies the American National Standards Institute (ANSI) flagging function.
This parameter flags SQL statements to verify whether they conform to the
following standards.
ANSI X3.135-1992 entry
ISO 9075-1992 entry
FIPS 127.2 entry
*NONE: The SQL statements are not checked to determine whether they
conform to ANSI standards.
*ANS: The SQL statements are checked to determine whether they conform to
ANSI standards.
SAAFLAG
Specifies the IBM SQL flagging function. This parameter flags SQL statements
to verify whether they conform to IBM SQL syntax.
*NOFLAG: The SQL statements are not checked to determine whether they
conform to IBM SQL syntax.
*FLAG: The SQL statements are checked to determine whether they conform to
IBM SQL syntax.
PRTFILE
Specifies the qualified name of the printer device file to which the RUNSQLSTM
printout is directed. The file must have a minimum length of 132 bytes. If a file
with a record length of less than 132 bytes is specified, information is lost.
The name of the printer file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
printer-file-name: Specify the name of the printer device file to which the
RUNSQLSTM printout is directed.
In the examples given for the *CURRENT value, and when specifying the
release-level value, the format VxRxMx is used to specify the release, where Vx
is the version, Rx is the release, and Mx is the modification level. For example,
V2R3M0 is version 2, release 3, modification level 0.
Valid values depend on the current version, release, and modification level, and
they change with each new release. If you specify a release-level which is
earlier than the earliest release level supported by this command, an error
message is sent indicating the earliest supported release.
CLOSQLCSR
Specifies when SQL cursors are implicitly closed, SQL prepared statements are
implicitly discarded, and LOCK TABLE locks are released. SQL cursors are
explicitly closed when you issue the CLOSE, COMMIT, or ROLLBACK (without
HOLD) SQL statements.
*ENDACTGRP: SQL cursors are closed and SQL prepared statements are
implicitly discarded when the activation group ends.
*ENDMOD: SQL cursors are closed and SQL prepared statements are implicitly
discarded when the module is exited. LOCK TABLE locks are released when
the first SQL program on the call stack ends.
OUTPUT
Specifies whether the precompiler listing is generated.
*LIST: Generates the listing view for debugging the compiled module object.
USRPRF
Specifies the user profile that is used when the compiled program object is run,
including the authority that the program object has for each object in static SQL
statements. The profile of either the program owner or the program user is used
to control which objects can be used by the program object.
*USER: The profile of the user running the program object is used.
*OWNER: The user profiles of both the program owner and the program user
are used when the program is run.
DYNUSRPRF
Specifies the user profile to be used for dynamic SQL statements.
*OWNER: Local dynamic SQL statements are run under the profile of the
program’s owner. Distributed dynamic SQL statements are run under the profile
of the SQL package’s owner.
DLYPRP
Specifies whether the dynamic statement validation for a PREPARE statement
is delayed until an OPEN, EXECUTE, or DESCRIBE statement is run. Delaying
validation improves performance by eliminating redundant validation.
Note: If you specify *YES, performance is not improved if the INTO clause is
used on the PREPARE statement or if a DESCRIBE statement uses the
dynamic statement before an OPEN is issued for the statement.
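As an illustrative sketch (the statement name S1 and the host variable :stmt-text are invented for the example; this is embedded SQL source, not directly compilable without the precompiler), DLYPRP(*YES) postpones the validation normally done at PREPARE time to the first statement that uses the prepared statement:

```
EXEC SQL PREPARE S1 FROM :stmt-text    -- with DLYPRP(*YES), validation is postponed
EXEC SQL EXECUTE S1                    -- validation (and any error) occurs here instead
```

If the program prepares a statement once and runs it many times, the single deferred validation replaces the separate validation that would otherwise occur at PREPARE.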
Example
RUNSQLSTM SRCFILE(MYLIB/MYFILE) SRCMBR(MYMBR)
This command processes the SQL statements in member MYMBR found in file
MYFILE in library MYLIB.
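For illustration, such a source member holds SQL statements separated by semicolons, which RUNSQLSTM processes in order. The object and column names below are hypothetical:

```sql
-- Hypothetical contents of member MYMBR in file MYFILE, library MYLIB
CREATE TABLE MYLIB/DEPT (DEPTNO CHAR(3) NOT NULL,
                         DEPTNAME CHAR(30) NOT NULL);
INSERT INTO MYLIB/DEPT VALUES('A00', 'ADMINISTRATION');
GRANT SELECT ON MYLIB/DEPT TO PUBLIC;
```

Because NAMING(*SYS) is the default, the statements above use the system (library/file) naming convention; with NAMING(*SQL) they would use collection.table names instead.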
[Syntax diagram for STRSQL (not reproducible in this text rendering). The
diagram shows the parameters described below, including COMMIT, NAMING,
PROCESS, LISTTYPE, REFRESH, ALWCPYDTA, DATFMT, DATSEP, TIMFMT, TIMSEP,
DECPNT, PGMLNG, SQLSTRDLM, SRTSEQ, and LANGID. The numbered notes that
follow refer to points marked in the diagram.]
Notes:
1. All parameters preceding this point can be specified in positional form.
2. DATSEP is only valid when *MDY, *DMY, *YMD, or *JUL is specified on the
DATFMT parameter.
3. TIMSEP is only valid when TIMFMT(*HMS) is specified.
4. PGMLNG is valid only when PROCESS(*SYN) is specified.
5. SQLSTRDLM is valid only when PROCESS(*SYN) is specified.
6. SQLSTRDLM is valid only when PGMLNG(*CBL) is specified.
Purpose
The Start Structured Query Language (STRSQL) command starts the interactive
Structured Query Language (SQL) program. The program starts the statement entry
function of interactive SQL, which immediately shows the Enter SQL Statements
display. This display allows the user to build, edit, enter, and run an SQL statement
in an interactive environment. Messages received during the running of the program
are shown on this display.
Parameters
COMMIT
Specifies whether the SQL statements are run under commitment control.
*CS: Specifies that the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows updated, deleted, and inserted are locked until the end of the unit of
work (transaction). A row that is selected, but not updated, is locked until the
next row is selected. Uncommitted changes in other jobs cannot be seen.
*RR: Specifies that the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows selected, updated, deleted, and inserted are locked until the end of the
unit of work (transaction). Uncommitted changes in other jobs cannot be seen.
All tables referred to in SELECT, UPDATE, DELETE, and INSERT statements
are locked exclusively until the end of the unit of work (transaction).
Note: The default for this parameter for the CRTSQLXXX commands (when
XXX=CI, CPPI, CBL, FTN, PLI, CBLI, RPG or RPGI) is *CHG.
NAMING
Specifies the naming convention used for naming objects in SQL statements.
*RUN: The statements are syntax checked, data checked, and then run.
*VLD: The statements are syntax checked and data checked, but not run.
The name of the collection list can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
*USRLIBL: Only the libraries in the user portion of the job’s library list are
searched.
*ALL: All libraries in the system, including QSYS, are searched.
*ALLUSR: All user libraries are searched. All libraries with names that do
not begin with the letter Q are searched except for the following:
#CGULIB #DFULIB #RPGLIB #SEULIB
#COBLIB #DSULIB #SDALIB
*FORWARD: Data is refreshed only during forward scrolling to the end of the
data for the first time. When scrolling backward, a copy of the data already
viewed is shown.
ALWCPYDTA
Specifies whether a copy of the data can be used in a SELECT statement. If
COMMIT(*ALL) is specified, SQL run time ignores the ALWCPYDTA value and
uses current data.
*OPTIMIZE: The system determines whether to use the data retrieved from the
database or to use a copy of the data. The determination is based on which will
provide the best performance.
*NO: A copy of the data is not allowed. If a temporary copy of the data is
required to perform the query, an error message is returned.
DATFMT
Specifies the date format used in SQL statements.
*MDY: The month, day, and year date format (mm/dd/yy) is used.
*DMY: The day, month, and year date format (dd/mm/yy) is used.
*YMD: The year, month, and day date format (yy/mm/dd) is used.
*JOB: The date separator specified on the job attribute is used. If the user
specifies *JOB on a new interactive SQL session, the current value is stored
and used. Later changes to the job’s date separator are not detected by
interactive SQL.
’ ’: A blank ( ) is used.
TIMFMT
Specifies the time format used in SQL statements.
*USA: The United States time format (hh:mm xx, where xx is AM or PM) is
used.
*JIS: The Japanese Industry Standard Christian Era time format (hh:mm:ss) is
used.
TIMSEP
Specifies the time separator used in SQL statements.
*JOB: The time separator specified on the job attribute is used. If the user
specifies *JOB on a new interactive SQL session, the current value is stored
and used. Later changes to the job’s time separator are not detected by
interactive SQL.
’ ’: A blank ( ) is used.
DECPNT
Specifies the kind of decimal point to use.
*JOB: The value used as the decimal point for numeric constants in SQL is the
representation of decimal point specified for the job running the statement.
*SYSVAL: The decimal point is extracted from the system value. If the user
specifies *SYSVAL on a new interactive SQL session, the current value is
stored and used. Later changes to the system value are not detected by
interactive SQL.
*CBL: Syntax checking is done according to the COBOL language syntax rules.
*PLI: Syntax checking is done according to the PL/I language syntax rules.
*RPG: Syntax checking is done according to the RPG language syntax rules.
*JOBRUN: The SRTSEQ value for the job is retrieved each time the user starts
interactive SQL.
*LANGIDSHR: The shared-weight sort table for the language specified on the
LANGID parameter is used.
*HEX: A sort sequence table is not used. The hexadecimal values of the
characters are used to determine the sort sequence.
The name of the sort sequence table can be qualified by one of the following
library values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
table-name: Specify the name of the sort sequence table to be used with the
interactive SQL session.
LANGID
Specifies the language identifier to be used when SRTSEQ(*LANGIDUNQ) or
SRTSEQ(*LANGIDSHR) is specified.
*JOBRUN: The LANGID value for the job is retrieved each time interactive SQL
is started.
Example
STRSQL PROCESS(*SYN) NAMING(*SQL)
DECPNT(*COMMA) PGMLNG(*CBL)
SQLSTRDLM(*APOSTSQL)
This command starts an interactive SQL session that checks only the syntax of SQL
statements. The character set used by the syntax checker uses the COBOL
language syntax rules. The SQL naming convention is used for this session. The
decimal point is represented by a comma, and the SQL string delimiter is
represented by an apostrophe.
Access plans
The SQL C for AS/400 precompiler generates access plan structures that are for
use with non-ILE programs.
Note: DATE, TIME, and TIMESTAMP columns generate character host variable
definitions. They are treated by SQL with the same comparison and
assignment rules as DATE, TIME, and TIMESTAMP columns. For example,
a date host variable can only be compared against a DATE column or a
character string that is a valid representation of a date.
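For example (using the CORPDATA sample tables from this book; the host variable name :start-date is invented for the illustration), a character host variable holding a valid date string can be compared directly with a DATE column:

```sql
-- :start-date is a character host variable containing a valid
-- date string, such as '1999-06-30'
SELECT EMPNO, LASTNAME
  FROM CORPDATA/EMPLOYEE
  WHERE HIREDATE >= :start-date
```

Comparing the same host variable against a numeric or arbitrary character column would not get date semantics; the date comparison rules apply only because HIREDATE is a DATE column.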
Although packed, zoned, and binary (with nonzero scale) fields are mapped to
character fields in C, SQL treats these fields as numeric. By using the extended
program model (EPM) routines, you can manipulate these fields to convert zoned
and packed decimal data. For more information, see the ILE C for AS/400
Language Reference book.
Appendix E. Using the C for AS/400 and FORTRAN for AS/400 Precompilers 805
[Syntax diagram for CRTSQLC (not reproducible in this text rendering). The
diagram shows the parameters described below, including PGM, SRCFILE, SRCMBR,
OPTION, TGTRLS, INCFILE, COMMIT, ALWCPYDTA, ALWBLK, DLYPRP, GENLVL, MARGINS,
DATFMT, DATSEP, TIMFMT, TIMSEP, REPLACE, RDB, USER, PASSWORD, RDBCNNMTH,
DFTRDBCOL, SQLPKG, SAAFLAG, FLAGSTD, PRTFILE, SRTSEQ, LANGID, DYNUSRPRF,
TOSRCFILE, and TEXT, followed by an OPTION Details sub-diagram. Note 1 in the
diagram marks the point up to which parameters can be specified in positional
form.]
Purpose
The Create Structured Query Language C (CRTSQLC) command calls the
Structured Query Language (SQL) precompiler, which precompiles C source
containing SQL statements and produces a temporary source member.
Parameters
PGM
Specifies the qualified name of the compiled program.
The name of the compiled C program can be qualified by one of the following
library values:
*CURLIB: The compiled C program is created in the current library for the
job. If no library is specified as the current library for the job, the QGPL
library is used.
library-name: Specify the name of the library where the compiled C program
is created.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
QCSRC: If the source file name is not specified, the IBM-supplied source file
QCSRC contains the C source.
source-file-name: Specify the name of the source file that contains the C
source.
SRCMBR
Specifies the name of the source file member that contains the C source. This
parameter is specified only if the source file name in the SRCFILE parameter is
a database file. If this parameter is not specified, the PGM name specified on
the PGM parameter is used.
*PGM: Specifies that the C source is in the member of the source file that has
the same name as that specified on the PGM parameter.
*NOGEN: The precompiler does not call the C compiler, and a program and
SQL package are not created.
*JOB: The value used as the decimal point for numeric constants in SQL is the
representation of decimal point specified for the job at precompile time.
*PERIOD: The value used as the decimal point for numeric constants in SQL
statements is a period.
*SYSVAL: The value used as the decimal point for numeric constants in SQL
statements is the QDECFMT system value.
Note: If QDECFMT specifies that the value used as the decimal point is a
comma, any numeric constants in lists (such as in the SELECT clause or
the VALUES clause) must be separated by a comma followed by a
blank. For example, VALUES(1,1, 2,23, 4,1) is equivalent to
VALUES(1.1,2.23,4.1) in which the decimal point is a period.
*COMMA: The value used as the decimal point for numeric constants in SQL
statements is a comma.
Note: Any numeric constants in lists (such as in the SELECT clause or the
VALUES clause) must be separated by a comma followed by a blank.
For example, VALUES(1,1, 2,23, 4,1) is equivalent to
VALUES(1.1,2.23,4.1) where the decimal point is a period.
*SECLVL: Second-level text with replacement data is added for all messages
on the listing.
*DEBUG: Symbolic EPM debug information is stored with the program. This
option is passed to the compiler and does not affect the SQL precompiler.
*CNULRQD: Output character and graphic host variables always contain the
NUL-terminator. If there is not enough space for the NUL-terminator, the data is
truncated and the NUL-terminator is added. Input character and graphic host
variables require a NUL-terminator.
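The *CNULRQD truncation rule can be sketched in plain C. This is an illustration of the rule only, not precompiler-generated code; the function name fill_host_var and the buffer size are invented for the example:

```c
#include <assert.h>
#include <string.h>

/* Sketch of the *CNULRQD rule for output character host variables:
   a host variable of size n always receives a NUL-terminator, so at
   most n-1 bytes of data fit; anything longer is truncated and the
   NUL-terminator is still added. */
static void fill_host_var(char *hv, size_t n, const char *data)
{
    size_t len = strlen(data);
    if (len >= n)        /* not enough room for data plus NUL:      */
        len = n - 1;     /* truncate, keeping room for the NUL byte */
    memcpy(hv, data, len);
    hv[len] = '\0';      /* NUL-terminator is always present        */
}
```

With *NOCNULRQD (the alternative described by the precompiler options), a value that exactly fills the variable would instead be stored without the NUL-terminator.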
*NOEVENTF: The compiler will not produce an event file for use by
CoOperative Development Environment/400 (CODE/400).
TGTRLS
Specifies the release of the operating system on which the user intends to use
the object being created.
In the examples given for the *CURRENT and *PRV values, and when
specifying the release-level value, the format VxRxMx is used to specify the
release, where Vx is the version, Rx is the release, and Mx is the modification
level. For example, V2R3M0 is version 2, release 3, modification level 0.
*PRV: The object is to be used on the previous release with modification level 0
of the operating system. For example, if V2R3M5 is running on the user’s
system, *PRV means the user intends to use the object on a system with
V2R2M0 installed. The user can also use the object on a system with any
subsequent release of the operating system installed.
release-level: Specify the release in the format VxRxMx. The object can be
used on a system with the specified release or with any subsequent release of
the operating system installed.
Valid values depend on the current version, release, and modification level, and
they change with each new release. If you specify a release-level which is
earlier than the earliest release level supported by this command, an error
message is sent indicating the earliest supported release.
INCFILE
Specifies the qualified name of the source file that contains members included
in the program with any SQL INCLUDE statement.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
source-file-name: Specify the name of the source file that contains the source
file members specified on any SQL INCLUDE statement. The record length of
the source file specified here must be no less than the record length of the
source file specified on the SRCFILE parameter.
COMMIT
Specifies whether SQL statements in the compiled program are run under
*ENDPGM: SQL cursors are closed and SQL prepared statements are
discarded when the program ends. LOCK TABLE locks are released when the
first SQL program on the call stack ends.
*ENDSQL: SQL cursors remain open between calls and can be fetched without
running another SQL OPEN. One of the programs higher on the call stack must
have run at least one SQL statement. SQL cursors are closed, SQL prepared
statements are discarded, and LOCK TABLE locks are released when the first
SQL program on the call stack ends. If *ENDSQL is specified for a program that
is the first SQL program called (the first SQL program on the call stack), the
program is treated as if *ENDPGM was specified.
*ENDJOB: SQL cursors remain open between calls and can be fetched without
running another SQL OPEN. The programs higher on the call stack do not need
to have run SQL statements. SQL cursors are left open, SQL prepared
statements are preserved, and LOCK TABLE locks are held when the first SQL
program on the call stack ends. SQL cursors are closed, SQL prepared
statements are discarded, and LOCK TABLE locks are released when the job
ends.
ALWCPYDTA
Specifies whether a copy of the data can be used in a SELECT statement.
*OPTIMIZE: The system determines whether to use the data retrieved directly
from the database or to use a copy of the data. The decision is based on which
method provides the best performance. If COMMIT is *CHG or *CS and
ALWBLK is not *ALLREAD, or if COMMIT is *ALL or *RR, then a copy of the
data is used only when it is necessary to run a query.
*NO: A copy of the data is not used. If a temporary copy of the data is required
to perform the query, an error message is returned.
ALWBLK
Specifies whether the database manager can use record blocking, and the
extent to which blocking can be used for read-only cursors.
*ALLREAD: Rows are blocked for read-only cursors.
Specifying *ALLREAD:
v Allows record blocking under commitment control level *CHG in addition to
the blocking allowed for *READ.
v Can improve the performance of almost all read-only cursors in programs,
but limits queries in the following ways:
– The Rollback (ROLLBACK) command, a ROLLBACK statement in host
languages, or the ROLLBACK HOLD SQL statement does not reposition a
read-only cursor when *ALLREAD is specified.
– Dynamic running of a positioned UPDATE or DELETE statement (for
example, using EXECUTE IMMEDIATE) cannot be used to update a row
in a cursor unless the DECLARE statement for the cursor includes the
FOR UPDATE clause.
*NONE: Rows are not blocked for retrieval of data for cursors.
Specifying *NONE:
v Guarantees that the data retrieved is current.
v May reduce the amount of time required to retrieve the first row of data for a
query.
v Stops the database manager from retrieving a block of data rows that is not
used by the program when only the first few rows of a query are retrieved
before the query is closed.
v Can degrade the overall performance of a query that retrieves a large
number of rows.
*READ: Records are blocked for read-only retrieval of data for cursors when:
v *NONE is specified on the COMMIT parameter, which indicates that
commitment control is not used.
v The cursor is declared with a FOR FETCH ONLY clause or there are no
dynamic statements that could run a positioned UPDATE or DELETE
statement for the cursor.
Specifying *READ can improve the overall performance of queries that meet the
above conditions and retrieve a large number of records.
DLYPRP
Specifies whether the dynamic statement validation for a PREPARE statement
is delayed until an OPEN, EXECUTE, or DESCRIBE statement is run. Delaying
validation improves performance by eliminating redundant validation.
Note: If you specify *YES, performance is not improved if the INTO clause is
used on the PREPARE statement or if a DESCRIBE statement uses the
dynamic statement before an OPEN is issued for the statement.
GENLVL
Specifies the severity level at which the create operation fails. If errors occur
that have a severity level greater than or equal to this value, the operation
ends.
*SRCFILE: The file member margin values that you specified on the SRCMBR
parameter are used. If the member is an SQLC, SQLCLE, C, or CLE source
type, the margin values are the values specified on the source entry utility
(SEU) services display. If the member is a different source type, the margin
values are the default values of 1 and 80.
left: Specify the beginning position for the statements. Valid values range from 1
through 80.
right: Specify the ending position for the statements. Valid values range from 1
through 80.
DATFMT
Specifies the format used when accessing date result columns. All output date
fields are returned in the specified format. For input date strings, the specified
value is used to determine whether the date is specified in a valid format.
Note: An input date string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
*JOB: The format specified for the job is used. Use the Display Job (DSPJOB)
command to determine the current date format for the job.
Note: This parameter applies only when *JOB, *MDY, *DMY, *YMD, or *JUL is
specified on the DATFMT parameter.
*JOB: The date separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
’ ’: A blank ( ) is used.
Note: An input date string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
Note: This parameter applies only when *HMS is specified on the TIMFMT
parameter.
*JOB: The time separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
’ ’: A blank ( ) is used.
*YES: A new program or SQL package is created, and any existing program or
SQL package of the same name and type in the specified library is moved to
QRPLOBJ.
*NO: A new program or SQL package is not created if an object of the same
name and type already exists in the specified library.
RDB
Specifies the name of the relational database where the SQL package object is
created.
*NONE: An SQL package object is not created. The program object is not a
distributed program and the Create Structured Query Language Package
(CRTSQLPKG) command cannot be used.
USER
Specifies the user name sent to the remote system when starting the
conversation. This parameter is valid only when RDB is specified.
*CURRENT: The user profile under which the current job is running is used.
user-name: Specify the user name to be used for the application server job.
PASSWORD
Specifies the password to be used on the remote system. This parameter is
valid only if RDB is specified.
password: Specify the password of the user name specified on the USER
parameter.
RDBCNNMTH
Specifies the semantics used for CONNECT statements. Refer to the SQL
Reference, SC41-3612 book for more information.
*RUW: CONNECT (Type 1) semantics are used to support remote unit of work.
Consecutive CONNECT statements result in the previous connection being
disconnected before a new connection is established.
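As a sketch of the *RUW behavior (the relational database names RDB1 and RDB2 are hypothetical):

```sql
CONNECT TO RDB1
-- ... work against RDB1 ...
CONNECT TO RDB2
-- With CONNECT (Type 1) semantics, the connection to RDB1 is
-- implicitly disconnected before the connection to RDB2 is made.
```

Under the default *DUW (Type 2) semantics, the second CONNECT would instead leave the first connection dormant, allowing work across both databases within one unit of work.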
DFTRDBCOL
Specifies the collection name used for the unqualified names of tables, views,
indexes, and SQL packages. This parameter applies only to static SQL
statements.
collection-name: Specify the name of the collection identifier. This value is used
instead of the naming convention specified on the OPTION parameter.
SQLPKG
Specifies the qualified name of the SQL package created on the relational
database specified on the RDB parameter of this command.
*PGM: The name of the SQL package is the same as the program name.
package-name: Specify the name of the SQL package. If the remote system is
not an AS/400 system, no more than 8 characters can be specified.
SAAFLAG
Specifies the IBM SQL flagging function. This parameter flags SQL statements
to verify whether they conform to IBM SQL syntax.
*NOFLAG: The precompiler does not check to see whether SQL statements
conform to IBM SQL standards.
*NONE: The precompiler does not check to see whether SQL statements
conform to ANSI standards.
The name of the printer file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
printer-file-name: Specify the name of the printer device file to which the
precompiler printout is directed.
SRTSEQ
Specifies the sort sequence table to be used for string comparisons in SQL
statements.
*JOB: The SRTSEQ value for the job is retrieved during the precompile.
*JOBRUN: The SRTSEQ value for the job is retrieved when the program is run.
For distributed applications, SRTSEQ(*JOBRUN) is valid only when
LANGID(*JOBRUN) is also specified.
Appendix E. Using the C for AS/400 and FORTRAN for AS/400 Precompilers 817
*LANGIDUNQ: The unique-weight sort table for the language specified on the
LANGID parameter is used.
*LANGIDSHR: The shared-weight sort table for the language specified on the
LANGID parameter is used.
*HEX: A sort sequence table is not used. The hexadecimal values of the
characters are used to determine the sort sequence.
The name of the sort sequence table can be qualified by one of the following
library values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
*JOB: The LANGID value for the job is retrieved during the precompile.
*JOBRUN: The LANGID value for the job is retrieved when the program is run.
For distributed applications, LANGID(*JOBRUN) is valid only when
SRTSEQ(*JOBRUN) is also specified.
*USER: Local dynamic SQL statements are run under the user profile of the
job. Distributed dynamic SQL statements are run under the user profile of the
application server job.
*OWNER: Local dynamic SQL statements are run under the user profile of the
program’s owner. Distributed dynamic SQL statements are run under the user
profile of the SQL package’s owner.
TOSRCFILE
Specifies the qualified name of the source file that is to contain the output
source member that has been processed by the SQL precompiler. If the
specified source file is not found, it is created. The output member has
the same name as the name that is specified for the SRCMBR parameter.
source-file-name: Specify the name of the source file to contain the output
source member.
TEXT
Specifies the text that briefly describes the function. More information on this
parameter is in Appendix A, ″Expanded Parameter Descriptions″ in the CL
Reference book.
*SRCMBRTXT: The text is taken from the source file member being used to
create the FORTRAN program. Text can be added or changed for a database
source member by using the Start Source Entry Utility (STRSEU) command, or
by using either the Add Physical File Member (ADDPFM) or Change Physical
File Member (CHGPFM) command. If the source file is an inline file or a device
file, the text is blank.
Example
CRTSQLC PAYROLL TEXT('Payroll Program')
This command runs the SQL precompiler, which precompiles the source and stores
the changed source in member PAYROLL in file QSQLTEMP in library QTEMP.
CRTSQLFTN Command Syntax

CRTSQLFTN PGM([*CURLIB/ | library-name/] program-name)
    SRCFILE([*LIBL/ | *CURLIB/ | library-name/] QFTNSRC | source-file-name)
    SRCMBR(*PGM | source-file-member-name)   (1)
    OPTION(OPTION Details)
    TGTRLS(*CURRENT | *PRV | VxRxMx)
    INCFILE([*LIBL/ | *CURLIB/ | library-name/] *SRCFILE | source-file-name)
    COMMIT(*CHG | *UR | *ALL | *RS | *CS | *NONE | *NC | *RR)
    CLOSQLCSR(*ENDPGM | *ENDSQL | *ENDJOB)
    ALWCPYDTA(*YES | *OPTIMIZE | *NO)
    ALWBLK(*READ | *NONE | *ALLREAD)
    DLYPRP(*NO | *YES)
    GENLVL(10 | severity-level)
    DATFMT(*JOB | *USA | *ISO | *EUR | *JIS | *MDY | *DMY | *YMD | *JUL)
    DATSEP(*JOB | '/' | '.' | ',' | '-' | ' ' | *BLANK)
    REPLACE(*YES | *NO)
    RDB(*LOCAL | relational-database-name | *NONE)
    USER(*CURRENT | user-name)
    PASSWORD(*NONE | password)
    RDBCNNMTH(*DUW | *RUW)
    DFTRDBCOL(*NONE | collection-name)
    SQLPKG([*PGMLIB/ | library-name/] *PGM | package-name)
    SAAFLAG(*NOFLAG | *FLAG)
    FLAGSTD(*NONE | *ANS)
    PRTFILE([*LIBL/ | *CURLIB/ | library-name/] QSYSPRT | printer-file-name)
    SRTSEQ(*JOB | *JOBRUN | *LANGIDUNQ | *LANGIDSHR | *HEX |
           [*LIBL/ | *CURLIB/ | library-name/] table-name)
    LANGID(*JOB | *JOBRUN | language-ID)
    USRPRF(*NAMING | *OWNER | *USER)
    DYNUSRPRF(*USER | *OWNER)
    TOSRCFILE([QTEMP/ | *LIBL/ | *CURLIB/ | library-name/] QSQLTEMP |
              source-file-name)
    TEXT(*SRCMBRTXT | *BLANK | 'description')

OPTION Details:
    *NOSRC | *NOSOURCE | *SRC | *SOURCE
    *NOXREF | *XREF
    *GEN | *NOGEN
    *PERIOD | *JOB | *SYSVAL | *COMMA
    *SYS | *SQL
    *NOSECLVL | *SECLVL
    *NODEBUG | *DEBUG

The first value shown for each parameter is the default.

Notes:
1. All parameters preceding this point can be specified in positional form.
Purpose
The Create Structured Query Language FORTRAN (CRTSQLFTN) command calls
the Structured Query Language (SQL) precompiler, which precompiles FORTRAN
source containing SQL statements, processes SQL statements, and creates a
temporary source member that can then be compiled into a program.
Parameters
PGM
Specifies the qualified name of the compiled program.
The name of the compiled FORTRAN program can be qualified by one of the
following library values:
*CURLIB: The compiled FORTRAN program is created in the current library for
the job. If no library is specified as the current library for the job, the QGPL
library is used.
library-name: Specify the name of the library where the compiled FORTRAN
program is created.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
QFTNSRC: If the source file name is not specified, the IBM-supplied source file
QFTNSRC contains the FORTRAN source.
source-file-name: Specify the name of the source file that contains the
FORTRAN source.
SRCMBR
Specifies the name of the source file member that contains the FORTRAN
source. This parameter is specified only if the source file name in the SRCFILE
parameter is a database file. If this parameter is not specified, the name
specified on the PGM parameter is used.
*PGM: Specifies that the FORTRAN source is in the member of the source file
that has the same name as that specified on the PGM parameter.
*NOSOURCE or *NOSRC: A source printout is not produced by the
precompiler unless errors are detected during precompile or create package.
*GEN: The precompiler calls the FORTRAN compiler, and a program and SQL
package are created.
*NOGEN: The precompiler does not call the FORTRAN compiler, and a
program and SQL package are not created.
*PERIOD: The value used as the decimal point for numeric constants used in
SQL statements is a period.
*JOB: The value used as the decimal point for numeric constants in SQL
statements is the representation of the decimal point specified for the job at
precompile time.
*SYSVAL: The value used as the decimal point for numeric constants in SQL
statements is the QDECFMT system value.
Note: If QDECFMT specifies that the value used as the decimal point is a
comma, any numeric constants in lists (such as in the SELECT clause or
the VALUES clause) must be separated by a comma followed by a
blank. For example, VALUES(1,1, 2,23, 4,1) is equivalent to
VALUES(1.1,2.23,4.1) in which the decimal point is a period.
*COMMA: The value used as the decimal point for numeric constants in SQL
statements is a comma.
Note: Any numeric constants in lists (such as in the SELECT clause or the
VALUES clause) must be separated by a comma followed by a blank.
For example, VALUES(1,1, 2,23, 4,1) is equivalent to
VALUES(1.1,2.23,4.1) where the decimal point is a period.
*DEBUG: Symbolic EPM debug information is stored with the program. This
option is passed to the compiler and does not affect the SQL precompiler.
TGTRLS
Specifies the release of the operating system on which the user intends to use
the object being created.
In the examples given for the *CURRENT and *PRV values, and when
specifying the release-level value, the format VxRxMx is used to specify the
release, where Vx is the version, Rx is the release, and Mx is the modification
level. For example, V2R3M0 is version 2, release 3, modification level 0.
*PRV: The object is to be used on the previous release with modification level 0
of the operating system. For example, if V2R3M5 is running on the user’s
system, *PRV means the user intends to use the object on a system with
V2R2M0 installed. The user can also use the object on a system with any
subsequent release of the operating system installed.
release-level: Specify the release in the format VxRxMx. The object can be
used on a system with the specified release or with any subsequent release of
the operating system installed.
Valid values depend on the current version, release, and modification level, and
they change with each new release. If you specify a release-level which is
earlier than the earliest release level supported by this command, an error
message is sent indicating the earliest supported release.
INCFILE
Specifies the qualified name of the source file that contains members included
in the program with any SQL INCLUDE statement.
The name of the source file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
source-file-name: Specify the name of the source file that contains the source
file members specified on any SQL INCLUDE statement. The record length of
the source file the user specifies here must be no less than the record length of
the source file specified on the SRCFILE parameter.
COMMIT
Specifies whether SQL statements in the compiled program are run under
commitment control. Files referred to in the host language source are not
affected by this option. Only SQL tables, SQL views, and SQL packages
referred to in SQL statements are affected.
*CS: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows updated, deleted, and inserted are locked until the end of the unit of
work (transaction). A row that is selected, but not updated, is locked until the
next row is selected. Uncommitted changes in other jobs cannot be seen.
*RR: Specifies the objects referred to in SQL ALTER, CALL, COMMENT ON,
CREATE, DROP, GRANT, LABEL ON, RENAME, and REVOKE statements and
the rows selected, updated, deleted, and inserted are locked until the end of the
unit of work (transaction). Uncommitted changes in other jobs cannot be seen.
All tables referred to in SELECT, UPDATE, DELETE, and INSERT statements
are locked exclusively until the end of the unit of work (transaction).
CLOSQLCSR
Specifies when SQL cursors are implicitly closed, SQL prepared statements are
implicitly discarded, and LOCK TABLE locks are released. SQL cursors are
explicitly closed when you issue the CLOSE, COMMIT, or ROLLBACK (without
HOLD) SQL statements.
*ENDPGM: SQL cursors are closed and SQL prepared statements are
discarded when the program ends. LOCK TABLE locks are released when the
first SQL program on the call stack ends.
*ENDJOB: SQL cursors remain open between calls and can be fetched without
running another SQL OPEN. The programs higher on the call stack do not need
to have run SQL statements. SQL cursors are left open, SQL prepared
statements are preserved, and LOCK TABLE locks are held when the first SQL
program on the call stack ends. SQL cursors are closed, SQL prepared
statements are discarded, and LOCK TABLE locks are released when the job
ends.
ALWCPYDTA
Specifies whether a copy of the data can be used in a SELECT statement.
*OPTIMIZE: The system determines whether to use the data retrieved directly
from the database or to use a copy of the data. The decision is based on which
method provides the best performance. If COMMIT is *CHG or *CS and
ALWBLK is not *ALLREAD, or if COMMIT is *ALL or *RR, then a copy of the
data is used only when it is necessary to run a query.
*NO: A copy of the data is not allowed. If a temporary copy of the data is
required to perform the query, an error message is returned.
ALWBLK
Specifies whether the database manager can use record blocking, and the
extent to which blocking can be used for read-only cursors.
Specifying *ALLREAD:
v Allows record blocking under commitment control level *CHG in addition to
the blocking allowed for *READ.
v Can improve the performance of almost all read-only cursors in programs,
but limits queries in the following ways:
– The Rollback (ROLLBACK) command, a ROLLBACK statement in host
languages, or the ROLLBACK HOLD SQL statement does not reposition a
read-only cursor when *ALLREAD is specified.
– Dynamic running of a positioned UPDATE or DELETE statement (for
example, using EXECUTE IMMEDIATE), cannot be used to update a row
in a cursor unless the DECLARE statement for the cursor includes the
FOR UPDATE clause.
*NONE: Rows are not blocked for retrieval of data for cursors.
Specifying *NONE:
v Guarantees that the data retrieved is current.
v May reduce the amount of time required to retrieve the first row of data for a
query.
v Stops the database manager from retrieving a block of data rows that is not
used by the program when only the first few rows of a query are retrieved
before the query is closed.
v Can degrade the overall performance of a query that retrieves a large
number of rows.
*READ: Records are blocked for read-only retrieval of data for cursors when:
v *NONE is specified on the COMMIT parameter, which indicates that
commitment control is not used.
v The cursor is declared with a FOR FETCH ONLY clause or there are no
dynamic statements that could run a positioned UPDATE or DELETE
statement for the cursor.
Specifying *READ can improve the overall performance of queries that meet the
above conditions and retrieve a large number of records.
DLYPRP
Specifies whether the dynamic statement validation for a PREPARE statement
is delayed until an OPEN, EXECUTE, or DESCRIBE statement is run. Delaying
validation improves performance by eliminating redundant validation.
Note: If you specify *YES, performance is not improved if the INTO clause is
used on the PREPARE statement or if a DESCRIBE statement uses the
dynamic statement before an OPEN is issued for the statement.
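The deferral that DLYPRP controls can be seen in a dynamic SQL sequence such as the following sketch (the statement text, table, and variable names are hypothetical, not taken from this book). With DLYPRP(*YES), full validation of STMT may not occur at the PREPARE but at the EXECUTE that follows it:

```fortran
*     Hypothetical dynamic SQL in FORTRAN.  With DLYPRP(*YES),
*     validation of STMT can be deferred from PREPARE to EXECUTE.
      EXEC SQL BEGIN DECLARE SECTION
      CHARACTER*70 STMTXT
      EXEC SQL END DECLARE SECTION
      STMTXT = 'DELETE FROM CORPDATA/PROJECT WHERE PROJNO = :H'
      EXEC SQL PREPARE STMT FROM :STMTXT
      EXEC SQL EXECUTE STMT USING :HPROJ
```

If the statement text is invalid, the error is reported on the EXECUTE rather than the PREPARE when validation has been delayed.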
GENLVL
Specifies the severity level at which the create operation fails. If errors occur
that have a severity level greater than or equal to this value, the operation
ends.
*JOB: The format specified for the job is used. Use the Display Job (DSPJOB)
command to determine the current date format for the job.
Note: This parameter applies only when *JOB, *MDY, *DMY, *YMD, or *JUL is
specified on the DATFMT parameter.
*JOB: The date separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
’ ’: A blank ( ) is used.
Note: An input date string that uses the format *USA, *ISO, *EUR, or *JIS is
always valid.
If a relational database is specified on the RDB parameter and the
database is on a system that is not another AS/400 system, the time
format must be *USA, *ISO, *EUR, *JIS, or *HMS with a time separator
of colon or period.
*USA: The United States time format (hh:mm xx) is used, where xx is AM or
PM.
Note: This parameter applies only when *HMS is specified on the TIMFMT
parameter.
*JOB: The time separator specified for the job at precompile time is used. Use
the Display Job (DSPJOB) command to determine the current value for the job.
’ ’: A blank ( ) is used.
*YES: A new program or SQL package is created, and any existing program or
SQL package of the same name and type in the specified library is moved to
QRPLOBJ.
*NO: A new program or SQL package is not created if an object of the same
name and type already exists in the specified library.
RDB
Specifies the name of the relational database where the SQL package object is
created.
*NONE: An SQL package object is not created. The program object is not a
distributed program and the Create Structured Query Language Package
(CRTSQLPKG) command cannot be used.
USER
Specifies the user name sent to the remote system when starting the
conversation. This parameter is valid only when RDB is specified.
*CURRENT: The user profile under which the current job is running is used.
user-name: Specify the user name being used for the application server job.
PASSWORD
Specifies the password to be used on the remote system. This parameter is
valid only if RDB is specified.
password: Specify the password of the user name specified on the USER
parameter.
RDBCNNMTH
Specifies the semantics used for CONNECT statements. Refer to the SQL
Reference book for more information.
*RUW: CONNECT (Type 1) semantics are used to support remote unit of work.
Consecutive CONNECT statements result in the previous connection being
disconnected before a new connection is established.
DFTRDBCOL
Specifies the collection name used for the unqualified names of tables, views,
indexes, and SQL packages. This parameter applies only to static SQL
statements.
collection-name: Specify the name of the collection identifier. This value is used
instead of the naming convention specified on the OPTION parameter.
SQLPKG
Specifies the qualified name of the SQL package created on the relational
database specified on the RDB parameter of this command.
*PGMLIB: The package is created in the library with the same name as the
library containing the program.
library-name: Specify the name of the library where the package is created.
*NOFLAG: The precompiler does not check to see whether SQL statements
conform to IBM SQL syntax.
*NONE: The precompiler does not check to see whether SQL statements
conform to ANSI standards.
The name of the printer file can be qualified by one of the following library
values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
printer-file-name: Specify the name of the printer device file to which the
precompiler printout is directed.
SRTSEQ
Specifies the sort sequence table to be used for string comparisons in SQL
statements.
*JOB: The SRTSEQ value for the job is retrieved during the precompile.
*JOBRUN: The SRTSEQ value for the job is retrieved when the program is run.
For distributed applications, SRTSEQ(*JOBRUN) is valid only when
LANGID(*JOBRUN) is also specified.
*LANGIDUNQ: The unique-weight sort table for the language specified on the
LANGID parameter is used.
*LANGIDSHR: The shared-weight sort table for the language specified on the
LANGID parameter is used.
*HEX: A sort sequence table is not used. The hexadecimal values of the
characters are used to determine the sort sequence.
The name of the sort sequence table can be qualified by one of the following
library values:
*LIBL: All libraries in the job’s library list are searched until the first match is
found.
*CURLIB: The current library for the job is searched. If no library is specified
as the current library for the job, the QGPL library is used.
library-name: Specify the name of the library to be searched.
*JOB: The LANGID value for the job is retrieved during the precompile.
*JOBRUN: The LANGID value for the job is retrieved when the program is run.
For distributed applications, LANGID(*JOBRUN) is valid only when
SRTSEQ(*JOBRUN) is also specified.
*NAMING: The user profile is determined by the naming convention. If the
naming convention is *SQL, USRPRF(*OWNER) is used. If the naming
convention is *SYS, USRPRF(*USER) is used.
*USER: The profile of the user running the program object is used.
*OWNER: The user profiles of both the program owner and the program user
are used when the program is run.
DYNUSRPRF
Specifies the user profile used for dynamic SQL statements.
*USER: Local dynamic SQL statements are run under the user profile of the
job. Distributed dynamic SQL statements are run under the user profile of the
application server job.
*OWNER: Local dynamic SQL statements are run under the user profile of the
program’s owner. Distributed dynamic SQL statements are run under the user
profile of the SQL package’s owner.
TOSRCFILE
Specifies the qualified name of the source file that is to contain the output
source member that has been processed by the SQL precompiler. If the
specified source file is not found, it is created. The output member has
the same name as the name that is specified for the SRCMBR parameter.
source-file-name: Specify the name of the source file to contain the output
source member.
TEXT
Specifies the text that briefly describes the function. More information on this
parameter is in Appendix A, ″Expanded Parameter Descriptions″ in the CL
Reference (Abridged) book.
*SRCMBRTXT: The text is taken from the source file member being used to
create the FORTRAN program. Text can be added or changed for a database
source member by using the Start Source Entry Utility (STRSEU) command, or
by using either the Add Physical File Member (ADDPFM) or Change Physical
File Member (CHGPFM) command. If the source file is an inline file or a device
file, the text is blank.
Example
CRTSQLFTN PAYROLL TEXT('Payroll Program')
This command runs the SQL precompiler, which precompiles the source and stores
the changed source in member PAYROLL in file QSQLTEMP in library QTEMP. The
FORTRAN compiler is called to create program PAYROLL in the current library by
using the source member created by the SQL precompiler.
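As a further sketch (the library, relational database, and object names below are hypothetical, not taken from this book), the same precompile can be directed at a remote relational database so that an SQL package is created along with the program:

```cl
CRTSQLFTN PGM(MYLIB/PAYROLL)
          SRCFILE(MYLIB/QFTNSRC)
          COMMIT(*CHG)
          RDB(REMOTEDB)
          TEXT('Distributed payroll program')
```

Because RDB names a relational database, the program object is created as a distributed program and an SQL package is created on the named database.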
836 DB2 UDB for AS/400 SQL Programming V4R4
Appendix F. Coding SQL Statements in FORTRAN
Applications
This appendix describes the unique application and coding requirements for
embedding SQL statements in a FORTRAN/400 program. Requirements for host
variables are defined.
Or,
v An SQLCA (which contains an SQLCOD and SQLSTA variable).
The SQLCOD and SQLSTA (or SQLSTATE) values are set by the database
manager after each SQL statement is executed. An application can check the
SQLCOD or SQLSTA (or SQLSTATE) value to determine whether the last SQL
statement was successful.
The SQLCA can be coded in a FORTRAN program either directly or by using the
SQL INCLUDE statement. Using the SQL INCLUDE statement requests the
inclusion of a standard declaration:
EXEC SQL INCLUDE SQLCA
*
INTEGER*4 SQLCOD
C SQLERR(6)
INTEGER*2 SQLTXL
CHARACTER SQLERP*8,
C SQLWRN(0:7)*1,
C SQLWRX(1:3)*1,
The SQLCOD, SQLSTA, SQLSTATE, and SQLCA variables must be placed before
the first executable SQL statement. All executable SQL statements in the
program must be within the scope of the declaration of these variables.
Unlike the SQLCA, there can be more than one SQLDA in a program, and an
SQLDA can have any valid name.
Coding an SQLDA on the multiple-row FETCH statement using a row storage area
provides a technique to retrieve multiple rows on each FETCH statement. This
technique can improve an application’s performance if a large number of rows are
read by the application. For more information on using the FETCH statement, see
the DB2 UDB for AS/400 SQL Reference book.
Each SQL statement in a FORTRAN program must begin with EXEC SQL. The
EXEC SQL keywords must appear all on one line, but the remainder of the
statement can appear on the same line and on subsequent lines.
Example:
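A sketch of such a statement follows (the table and host-variable names are hypothetical): EXEC SQL appears entirely on one line, and the rest of the statement is continued with column-6 continuation characters.

```fortran
*     EXEC SQL is on one line; the remainder of the statement is
*     continued with column-6 continuation characters.
      EXEC SQL
     C   SELECT DEPTNO, DEPTNAME
     C     INTO :DNUM, :DNAME
     C     FROM CORPDATA/DEPARTMENT
     C     WHERE DEPTNO = :HDEPT
```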
An SQL statement cannot be followed on the same line by another SQL statement
or by a FORTRAN statement.
FORTRAN does not require the use of blanks to delimit words within a statement,
but the SQL language does. The rules for embedded SQL follow the rules for SQL
syntax, which requires the use of one or more blanks as delimiters.
Comments
In addition to SQL comments (--), FORTRAN comments can be included within the
embedded SQL statements wherever a blank is allowed, except between the
keywords EXEC and SQL.
The comment extends to the end of the line. Comment lines can appear between
the lines of a continued SQL statement. An exclamation point (!) indicates a
comment, except when it appears in a character context or in column 6.
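Both comment forms are sketched below (the table and host-variable names are hypothetical); the trailing ! comment and the full comment line are each permitted within the continued statement, but not between the keywords EXEC and SQL:

```fortran
      EXEC SQL                          ! trailing FORTRAN comment
*     a comment line between the lines of a continued statement
     C   DELETE FROM CORPDATA/PROJECT
     C     WHERE PROJNO = :HPROJ
```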
Debug Lines
Lines containing debug statements ('D' or 'd' in column 1) are treated as
comment lines by the precompiler.
Including Code
SQL statements or FORTRAN statements can be included by embedding the
following SQL statement at the point in the source code where the statements are
to be embedded:
EXEC SQL INCLUDE member-name
Margins
Code the SQL statements (starting with EXEC SQL) in coding columns 7 to 72.
Names
Any valid FORTRAN variable name can be used for a host variable, subject to
the following restriction:
Do not use host variable names or external entry names that begin with 'SQ', 'SQL',
'RDI', or 'DSN'. These names are reserved for the database manager.
Statement Labels
Executable SQL statements can have statement numbers associated with them,
specified in columns 1 to 5. However, during program preparation, a labelled SQL
statement causes a CONTINUE statement with that label to be generated before
the code runs the statement. A labelled SQL statement should not be the last
statement in a DO loop. Because CONTINUE statements can be run, SQL
statements that occur before the first statement that can be run in a FORTRAN
program (for example, INCLUDE and BEGIN DECLARE SECTION) should not be
labelled.
The FORTRAN statements that are used to define the host variables should be
preceded by a BEGIN DECLARE SECTION statement and followed by an END
DECLARE SECTION statement. If a BEGIN DECLARE SECTION and END
DECLARE SECTION are specified, all host variable declarations used in SQL
statements must be between the BEGIN DECLARE SECTION and the END
DECLARE SECTION statements. Note: LOB host variables are not supported in
FORTRAN.
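A minimal sketch of a declare section follows (the variable names are hypothetical); every host variable used in an SQL statement is declared between the two statements:

```fortran
      EXEC SQL BEGIN DECLARE SECTION
      INTEGER*4    DNUM
      CHARACTER*36 DNAME
      EXEC SQL END DECLARE SECTION
```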
All host variables within an SQL statement must be preceded by a colon (:).
The names of host variables should be unique within the program, even if the host
variables are in different blocks or procedures.
The declaration for a character host variable must not use an expression to define
the length of the character variable. The declaration for a character host variable
must not have an undefined length (for example, CHARACTER(*)).
An SQL statement that uses a host variable must be within the scope of the
statement in which the variable was declared.
Numeric host variables are declared as follows; one or more variable names
can be listed, and each can be initialized with a numeric constant:

    INTEGER*2 | INTEGER[*4] | REAL[*4] | REAL*8 | DOUBLE PRECISION
        variable-name [/numeric-constant/] ...

Character
Character host variables are declared as follows; the length *n can follow
either CHARACTER or the variable name, and each variable can be initialized
with a character constant:

    CHARACTER[*n]  variable-name[*n] [/character-constant/] ...
The following table can be used to determine the FORTRAN data type that is
equivalent to a given SQL data type.
Table 79. SQL Data Types Mapped to Typical FORTRAN Declarations

SQL Data Type                  FORTRAN Equivalent   Explanatory Notes
SMALLINT                       INTEGER*2
INTEGER                        INTEGER*4
DECIMAL(p,s) or NUMERIC(p,s)   No exact equivalent  Use REAL*8.
FLOAT (single precision)       REAL*4
FLOAT (double precision)       REAL*8
CHAR(n)                        CHARACTER*n          n is a positive integer
                                                    from 1 to 32766.
VARCHAR(n)                     No exact equivalent  Use a character host
                                                    variable large enough to
                                                    contain the largest
                                                    expected VARCHAR value.
GRAPHIC(n)                     Not supported
VARGRAPHIC(n)                  Not supported
DATE                           CHARACTER*n          If the format is *USA,
                                                    *JIS, *EUR, or *ISO, n
                                                    must be at least 10
                                                    characters. If the format
                                                    is *YMD, *DMY, or *MDY, n
                                                    must be at least 8
                                                    characters. If the format
                                                    is *JUL, n must be at
                                                    least 6 characters.
TIME                           CHARACTER*n          n must be at least 6; to
                                                    include seconds, n must
                                                    be at least 8.
TIMESTAMP                      CHARACTER*n          n must be at least 19. To
                                                    include microseconds at
                                                    full precision, n must be
                                                    26. If n is less than 26,
                                                    truncation occurs on the
                                                    microseconds part.
See DB2 UDB for AS/400 SQL Reference book for more information on the use of
indicator variables.
Indicator variables are declared in the same way as host variables. The
declarations of the two can be mixed in any way that seems appropriate to the
programmer.
Example:
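A sketch of an indicator variable paired with its host variable follows (the table, column, and variable names are hypothetical); after the SELECT INTO, a negative PHONEI signals that PHONENO was null:

```fortran
      EXEC SQL BEGIN DECLARE SECTION
      CHARACTER*4 PHONE
      INTEGER*2   PHONEI
      EXEC SQL END DECLARE SECTION
      EXEC SQL SELECT PHONENO INTO :PHONE:PHONEI
     C   FROM CORPDATA/EMPLOYEE
     C   WHERE EMPNO = '000010'
```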
Index 849
character host variable (continued) COBOL program 262 (continued)
FORTRAN 231 file reference variable
ILE RPG for AS/400 314, 318 LOB 262, 316
PL/I 283 host structure
RPG for AS/400 300, 303 array indicator structure, declaring 272
Character Large OBjects 146 arrays, declaring 268
check constraints 99 declaring 264
check pending 107, 375 indicator array 268
checking syntax in interactive SQL 353 host variable 255
CHGPF command 33 character 258
CHGQRYA (Change Query Attributes) command 381 declaring 255, 261
CL_SCHED table 585 externally described 272
class schedule table 585 floating point 257
clause 47 graphic 259
AND 74 LOB 261, 315
DISTINCT 72 numeric 255
FROM 36 including code 254
GROUP BY indicator structure 276
example 40 indicator variable 276
HAVING 42 locator
INTO LOB 262, 316
example 32 margin 254
PREPARE statement, use with 202 multiple source programs 255
restriction 208 naming convention 254
NOT 74 REDEFINES 276
null value 45 sample program with SQL statements 613
OR 74 sequence numbers 254
ORDER BY 43 SQL 613
SELECT 38 SQL data types
SET 33 determining equivalent COBOL 274
USING DESCRIPTOR 213 SQLCA, declaring 251
VALUES 31 SQLCODE, declaring 251
WHENEVER NOT FOUND 60 SQLDA, declaring 252
WHERE SQLSTATE, declaring 251
character string 31 statement label 255
example 38, 213 WHENEVER statement 255
expression 39 coded character set conversion error 38
joining tables 76 coded character set identifier (CCSID) 217
multiple search condition within 74
coding examples, SQL statements in
NOT keyword 40
WHERE CURRENT OF 61 COBOL 613
CLI 2 ILE C 606
CLOBs (Character Large OBjects) ILE COBOL 613
uses and definition 146 ILE RPG for AS/400 program 634
PL/I 621
CLOSQLCSR parameter
REXX 640
effect on implicit disconnect 565
REXX applications 327
using 469
RPG for AS/400 628
COBOL program 272
BEGIN/END DECLARE SECTION 255 coding requirement
COBOL COPY statement 254, 272 C++ program
COBOL PROCESS statement 254 comment 228
coding SQL statements 251, 279 continuation 228
comment 253 host variable 230
compile-time option 254 including code 228
compiler parameters 340 margin 229
continuation 253 naming convention 229
Datetime host variable 263 null 229
debug lines 253 preprocessor sequence 229
dynamic SQL coding 252 statement label 229
error and warning message during a compile 344 trigraph 229
external file description 272 WHENEVER statement 230
command (CL) 1, 367, 835 (continued)
   Change Physical File (CHGPF) 367
   Change Query Attribute (CHGQRYA) command 402
   Change Query Attributes (CHGQRYA) 381
   CHGCLS (Change Class) 367
   CHGJOB (Change Job) 367
   CHGLF (Change Logical File) 367
   CHGPF (Change Physical File) 367
   CHGQRYA (Change Query Attribute) command 402
   CHGQRYA (Change Query Attributes) 381
   Convert SQL C++ (CVTSQLCPP) 781
   Create Duplicate Object (CRTDUPOBJ) 380
   Create Source Physical File (CRTSRCPF) command 335
   Create SQL C++ (CRTSQLCPPI) 712
   Create SQL COBOL (CRTSQLCBL) 661
   Create SQL ILE C for AS/400 (CRTSQLCI) 695
   Create SQL ILE COBOL (CRTSQLCBLI) 678
   Create SQL ILE/RPG (CRTSQLRPGI) 761
   Create SQL Package (CRTSQLPKG) 557, 765
   Create SQL PL/I (CRTSQLPLI) 728
   Create SQL RPG (CRTSQLRPG) 744
   Create User Profile (CRTUSRPRF) 366
   CRTDUPOBJ (Create Duplicate Object) command 380
   CRTUSRPRF (Create User Profile) 366
   Delete Library (DLTLIB) 374
   Delete Override (DLTOVR) 459
   Delete SQL Package (DLTSQLPKG) 557, 783
   Display Job (DSPJOB) 381
   Display Journal (DSPJRN) 461
   Display Message Description (DSPMSGD) 587
   Display Module (DSPMOD) 345
   Display Program (DSPPGM) 345
   Display Program References (DSPPGMREF) 345
   Display Service Program (DSPSRVPGM) 345
   DLTLIB (Delete Library) 374
   DLTOVR (Delete Override) 459
   DSPJOB (Display Job) 381
   DSPJRN (Display Journal) 461
   DSPMSGD (Display Message Description) 587
   Edit Check Pending Constraints (EDTCPCST) 375
   Edit Rebuild of Access Paths (EDTRBDAP) 375
   Edit Recovery for Access Paths (EDTRCYAP) 376
   EDTCPCST (Edit Check Pending Constraints) 375
   EDTRBDAP (Edit Rebuild of Access Paths) 375
   EDTRCYAP (Edit Recovery for Access Paths) 376
   Grant Object Authority (GRTOBJAUT) 365
   GRTOBJAUT (Grant Object Authority) 365, 367
   Override Database File (OVRDBF) 62, 302, 346, 367, 459, 461, 462
   OVRDBF (Override Database File) 62, 302, 346, 367, 459, 461, 462
   Print SQL Information (PRTSQLINF) 345, 382, 393, 784
   QAQQINI 546
   Reclaim DDM connections (RCLDDMCNV) 573
   Retrieve Message (RTVMSG) 587
   Revoke Object Authority (RVKOBJAUT) 365
   RTVMSG (Retrieve Message) 587
   Run SQL Statements (RUNSQLSTM) 1
   RUNSQLSTM (Run SQL statements) 367
   RUNSQLSTM (Run SQL Statements) 361, 794
   RVKOBJAUT (Revoke Object Authority) 365
   Send Program Message (SNDPGMMSG) 587
   Send User Message (SNDUSRMSG) 587
   SNDPGMMSG (Send Program Message) 587
   SNDUSRMSG (Send User Message) 587
   Start Commitment Control (STRCMTCTL) 370
   Start Journal Access Path (STRJRNAP) 376
   STRCMTCTL (Start Commitment Control) 370
   STRJRNAP (Start Journal Access Path) 376
   STRSQL (Start SQL) 801
   Trace Job (TRCJOB) 382, 461
   TRCJOB (Trace Job) 382, 461
commands
   End Database Monitor (ENDDBMON) 480
   Start Database Monitor (STRDBMON) 479
comment
   C 228
   C++ 228
   COBOL 253
   for RUNSQLSTM 361
   FORTRAN 839
   getting 50
   ILE RPG for AS/400 311
   PL/I 281
   REXX 328
   RPG for AS/400 299
COMMENT ON statement
   using, example 49
COMMIT
   keyword 370
   prepared statements 201
   statement 559
   statement description 6
commitment control
   activation group
      example 561
   committable updates 567
   description 369
   displaying 381
   distributed connection restrictions 570
   DRDA resource 567
   INSERT statement 32
   job-level commitment definition 565, 570
   protected resource 567
   rollback required 572
   RUNSQLSTM command 362
   SQL statement processor 362
   sync point manager 567
   two-phase commit 567
   unprotected resource 567
common database problem
   solving 551
comparison operators 40
comparisons involving UDTs example 173, 174
compile step
   warning 343
compile-time option
   COBOL 254
correlated subquery (continued) creating (continued)
DELETE statement, use in 88 view 13
examples description 28
HAVING clause 90 on a table 29
UPDATE statement 91 over multiple tables 30
WHERE clause 89 cross join 78
note on using 92 CRTDUPOBJ (Create Duplicate Object) command 380
correlation CRTSQLC (Create SQL C) command 819
definition 85 CRTSQLCBL (Create SQL COBOL) command 661
name 23, 79 CRTSQLCBLI (Create SQL ILE/COBOL)
using subquery 85 command 678
cost estimation CRTSQLCI (Create SQL ILE C for AS/400)
query optimizer 423 command 695
cost of a UDT example 163 CRTSQLCPPI (Create SQL C++) command 712
counter for UDFs example 194 CRTSQLFTN (Create SQL FORTRAN) command 835
counting and defining UDFs example 164 CRTSQLPKG (Create SQL Package) command 765
CREATE COLLECTION statement 13 CRTSQLPKG (Create Structured Query Language
CREATE DISTINCT TYPE statement Package) command 763
and castability 157 CRTSQLPLI (Create SQL PL/I) command 728
examples of using 171 CRTSQLRPG (Create SQL RPG) command 744
to define a UDT 170 CRTSQLRPGI (Create SQL ILE/RPG) command 761
Create Duplicate Object (CRTDUPOBJ) command 380 CRTSQLxxx commands 3
CREATE FUNCTION statement 190 CRTUSRPRF command
to register a UDF 161 create user profile 366
CREATE INDEX ctr() UDF C program listing 194
sort sequence 53 CURDATE scalar function 46
CREATE SCHEMA CURRENT DATE special register 45
statement 362 current row 60
Create Source Physical File (CRTSRCPF) command CURRENT SERVER special register 45
precompile use 335 current session
Create SQL C++ (CRTSQLCPPI) command 712 printing 357
Create SQL C (CRTSQLC) command 819 removing all entries from 357
Create SQL COBOL (CRTSQLCBL) command 661 CURRENT TIME special register 45
Create SQL FORTRAN (CRTSQLFTN) command 835 CURRENT TIMESTAMP special register 45
Create SQL ILE C for AS/400 (CRTSQLCI) CURRENT TIMEZONE special register 45
command 695 cursor
Create SQL ILE COBOL (CRTSQLCBLI) distributed unit of work 576
command 678 example overview 56
Create SQL ILE/RPG (CRTSQLRPGI) command 761 example steps 58, 62
Create SQL Package (CRTSQLPKG) command 340, open 59
557, 765 open, effect of recovery on 68
authority required 558 positions
Create SQL PL/I (CRTSQLPLI) command 728 retaining across program call 467, 468
Create SQL RPG (CRTSQLRPG) command 744 rules for retaining 467
Create Structured Query Language Package using to improve performance 467, 468
(CRTSQLPKG) command 763 retrieving SELECT statement result 212
CREATE TABLE scrollable
prompting 353 positioning within a table 55
CREATE TABLE statement 14 serial
examples of using 171 positioning within a table 55
using 55
Create User Profile (CRTUSRPRF) command 366
WITH HOLD clause 68
CREATE VIEW statement 29
CURTIME scalar function 46
creating
CVTSQLCPP (Convert SQL C++) command 781
collection
example 13
index D
example 96 damage tolerance 376
structured query language package 763 dash
table in COBOL host variable 255
description 14 data
example 14 adding to the end of table 552
debugging 551 (continued) definitions 406 (continued)
program 551 key positioning access method 396
DECLARE CURSOR statement key selection access method 404
using 36 keyed sequence 396
DECLARE statement 198 library 3
default collection name (DFTRDBCOL) parameter 3 logical file 3
default filter factors 424 miniplan 425
DEFAULT keyword NULL value 40
SET clause, value 34 null value 45
default value 14, 18, 32 open data path 388
inserting in a view 96 outer-level SELECT 84
define output source file member 11
cursor 58 package 3, 9, 12, 557
defining parallel data space scan method 403
parallel key positioning access method 411
column heading 16, 48
parallel key selection access method 405
table name 48
parallel pre-fetch access method 401
defining the UDT and UDFs example 178
physical file 3
definitions 555
predicate 38
access path 396
primary table 426
access plan 11, 344
program 11
authorization ID 3
record 3
authorization name 3
referential integrity 8
binding 344
remote unit of work 555
catalog 6
row 3, 6
collection 3, 6
search condition 38
column 3, 6
secondary tables 426
column name 39
sequential access path 396
concurrency 367
special register 40
constant 39
SQL package 3
constraint 8
SQLCODE 587
correlated subquery 88
SQLSTATE 587
correlation 85
stored procedure 8
CURRENT DATE special register 45
subquery 84
current row 60
symmetrical multiprocessing 397
CURRENT SERVER special register 45
table 3, 6
CURRENT TIME special register 45
trigger 8
CURRENT TIMESTAMP special register 45
user profile 3
CURRENT TIMEZONE special register 45
user source file member 11
data definition statement (DDL) 4
USER special register 45
data dictionary 6
view 3, 7
data manipulation statement (DML) 4
delete current row 61
dataspace 397
Delete Library (DLTLIB) command 374
default filter factors 424
Delete Override (DLTOVR) command 459
dial 426
Delete SQL Package (DLTSQLPKG) command 557,
distributed unit of work 555
783
expression 39
DELETE statement
field 3
correlated subquery, use in 92
hashing access method 415
description 28, 34
host structure 215
Delete Structured Query Language Package
host variable 39, 215
(DLTSQLPKG) command 782
implementation cost 423
index 8 deleted rows
index-from-index access method 414 getting rid of using REUSEDLT(*YES) 399
index only access method 412 getting rid of using RGZPFM 399
indicator structure 220 deleting
indicator variable 219 structured query language package 782
isolatable 437 deleting information in a table 28
join 30 department table
join operation 23 CORPDATA.DEPARTMENT 579
journal 6 DESCRIBE statement
journal receiver 6 use with dynamic SQL 201
dynamic SQL (continued) examples 74, 50, 220 (continued)
replacing parameter markers with host AND 74, 75
variables 197 application forms using CREATE TABLE 171
run-time overhead 197 assignments in dynamic SQL 175
statements 4 assignments involving different UDTs 176
varying-list SELECT statement 201 assignments involving UDTs 175
AVG over a UDT 164
BETWEEN 72
E catalog
Edit Check Pending Constraints (EDTCPCST) getting column information 97
command 375 getting table information 97
Edit Rebuild of Access Paths (EDTRBDAP) changing information in a table 25
command 375 changing rows in table
Edit Recovery for Access Paths (EDTRCYAP) host variables 33, 34
command 376 COBOL, UPDATE statement 253
eliminating duplicate rows 81 COMMENT ON 49
embedded SQL comparisons involving UDTs 173, 174
C 227 correlated subquery
C++ 227 HAVING clause 90
COBOL 253 WHERE clause 89
FORTRAN 839 correlation name 23
ILE RPG for AS/400 311 cost of a UDT 163
PL/I 280 counter for UDFs 194
precompiling 333 counting and defining UDFs 164
RPG for AS/400 298 creating
running a program with 346 collection 13
employee-to-project activity table 581 index 96
encapsulation and UDTs 169 table 14
End Database Monitor (ENDDBMON) command 480 view on a table 29
END DECLARE SECTION statement views over multiple tables 30
C 230 ctr() UDF C program listing 194
C++ 230 CURRENT DATE 47
COBOL 255 CURRENT TIMEZONE 47
FORTRAN 841 cursor 56
ILE RPG for AS/400 312 cursor in DUW program 576
PL/I 282 database monitor 482, 485
RPG for AS/400 300 defining stored procedures
end-of-data with CREATE PROCEDURE 117
reached 59 defining the UDT and UDFs 178
ENDDBMON (end database monitor) command 480 deleting information in a table 28
entering DBCS data 353 determining connection status 576
ERRLVL 362 distributed RUW program 556
error distributed unit of work program 574
data mapping dynamic CALL 127
ORDER BY 37 embedded CALL 124, 125
error determination EXISTS 87
in distributed relational database exploiting LOB function to populate the
first failure data capture (FFDC) 578 database 179
error message during a compile 343 exploiting LOB locators to manipulate UDT
C++ program 343 instances 180
C program 343 exploiting UDFs to query instances of UDTs 179
COBOL program 343, 344 exponentiation and defining UDFs 161
PL/I program 343 extracting a document to a file (CLOB elements in a
RPG program 343, 344 table) 152
error message during precompile function invocations 165
displayed on listing 335 getting catalog information about
error return code, handling column 98
table 97
general 221
getting comment 50
establishing
getting information about
position at end of table 551
column using catalog 97
examples 49, 50, 220
FETCH FORTRAN program (continued)
using host structure array margin 841
multiple-row 63 naming convention 840
FETCH statement 212 PROCESS statement 841
multiple-row SQL data types
ILE RPG for AS/400 314, 323 determining equivalent FORTRAN 842
RPG for AS/400 301 SQLCA, declaring 837
FFDC (first failure data capture) 578 SQLCOD, declaring 837
field 3 SQLCODE, declaring 837
file SQLSTA, declaring 837
query options 546 SQLSTATE, declaring 837
file description statement label 840
external WHENEVER statement 841
C 245 FROM clause 36
C++ 245 function
C for AS/400 803 interactive SQL 349
COBOL 272 function invocations example 165
ILE RPG for AS/400 316 function-name, passing to UDF 189
PL/I 291 function path and UDFs 158
RPG for AS/400 302 function references, summary for UDFs 167
host structure arrays function selection algorithm and UDFs 158
COBOL 273 functions
ILE RPG for AS/400 317 aggregating functions 159
RPG for AS/400 303 column functions 159
file reference variable scalar functions 159
LOB syntax for referring to 165
COBOL 262, 316 table functions 159
file reference variables
examples of using 152
for manipulating LOBs 146 G
input values 151 generic query information
LOB summary record 520
PL/I 286 getting
output values 152 catalog information about
filter factors, default column 98
in query optimization 424 table 97
first failure data capture (FFDC) 578 comment 50
fixed-list SELECT statement information
definition 201 from multiple table 23
using 201 from single table 20
flexibility and UDTs 169 governor 391
floating point host variable *DFT 393
COBOL 257 *RQD 393
FOR UPDATE OF clause *SYSRPYL 393
restrictions 58 CHGQRYA 391
format, SQLDA 204 JOB 392
FORTRAN program QRYTIMLMT 391
BEGIN/END DECLARE SECTION 841 time limit 392
coding SQL statements 837, 845 Grant Object Authority (GRTOBJAUT) command 365
comment 839 GRANT PACKAGE statement 555
compile-time options 841 graphic host variable
continuation 839 C 233
debug lines 839 C++ 233
dynamic SQL coding 838 COBOL 259
host variable 841 ILE RPG for AS/400 318
character 842 GROUP BY
declaring 841, 842 clause 40
numeric 841 keyword 553
IMPLICIT statement 841 using null value with 41
including code 840 grouping optimization 442
indicator variable 844 grouping the row you select 41
ILE RPG for AS/400 program improving performance 467, 465 (continued)
/COPY statement 312, 316 using
character host variables 314 close SQL cursor (CLOSQLCSR) 467, 468
coding SQL statements 309, 325 FETCH FOR n ROWS 462
comment 311 INSERT n ROWS 463
compiler parameters 341 parameter passing techniques 472
continuation 311 precompile options 471
dynamic SQL coding 310 IN keyword
error and warning message during a compile 344 description 73
external file description 316 subquery, use in 87
host structure in tray
declaring 314 table 585
host structure array IN_TRAY table 585
declaring 314 include file
host variable 312 C 228
character 318 C++ 228
date/time 313, 318 CCSID 334
declaring 313 COBOL 254
externally described 316 ILE RPG for AS/400 312
graphic 318 input to precompiler 334
numeric 318 PL/I 281
including code 312 RPG for AS/400 299
indicator structure 322 INCLUDE statement 334
indicator variable 322 C 228
naming convention 312 C++ 228
notes and usage 322 COBOL 254
occurrence data structure 314 ILE RPG for AS/400 312
sequence numbers 312 PL/I 281
SQL data types RPG for AS/400 299
determining equivalent RPG 318 including code
SQL statements in C 228
sample 634 C++ 228
SQLCA 309 COBOL 254
SQLCA placement 309 COBOL COPY statement 254
SQLDA FORTRAN 840
example 323 ILE RPG for AS/400 312
SQLDA, declaring 310 PL/I 281
statement label 312 RPG for AS/400 299
variable declaration 322 index
WHENEVER statement 312 columns used for keys 396
ILE RPG program creating
SQLCA placement 605 from another index 414
ILE service programs definition 8
package 559 recovery 376
using 96
immediate sensitivity 63, 67
using effectively, example 446
implementing a UDF 160 working with 96
implicit connect 565 index advisor
implicit disconnect 565 query optimizer 482
IMPLICIT statement index created
FORTRAN 841 summary record 505
improving performance 464, 465 index-from-index
blocking, using 461 access method 414
join queries 439 index only access method 412
paging interactively displayed data 463 indexes
PREPARE statement 470 using with sort sequence 449
retaining cursor positions across program call 467, indicator array
468 C 241, 244
SELECT statements, using effectively 464 C++ 241, 244
selecting data from multiple tables 442 COBOL 268, 272
SQL blocking 462 PL/I 288, 290
key range estimate 424 LOBs (Large Objects)
key selection and DB2 object extensions 145
access method 404 file reference variables 146
keyed sequence examples of using 152
access path 396 input values 151
keyword output values 152
AND 74 SQL_FILE_APPEND, output value option 152
BETWEEN 72 SQL_FILE_CREATE, output value option 152
COMMIT 370 SQL_FILE_OVERWRITE, output value
DISTINCT 553 option 152
EXISTS 87 SQL_FILE_READ, input value option 152
GROUP BY 553 large object descriptor 146
IN 73, 87 large object value 146
LIKE 73 locators 146, 147
NOT 40 example of using 148
OR 74 indicator variables 151
search condition, use in 72 manipulating 145
UNION 80, 553 programming options for values 147
UNION ALL, specifying 83 storing 145
synergy with UDTs and UDFs
examples of complex applications 177
L locator
LABEL ON statement 16, 48 LOB
information in catalog 48 COBOL 262, 316
package 560 locators
language, host LOB
concepts and rules 215 PL/I 285
large object descriptor 146 locators for manipulating LOBs 146
large object value 146 locks
learn how to analyzing 381
prompt logical file 3, 7
using interactive SQL 354 logical file DDS
leaving interactive SQL 357 database monitor 493
left outer join 76 long object names
library performance 470
definition 3
LONG VARCHAR
LIKE keyword 73
storage limits 146
limit, time 392
LONG VARGRAPHIC
linking a UDF 160
storage limits 146
list function 356
list function in interactive SQL Loosely Coupled Parallelism 2
description 354 LR indicator
listing ending RPG for AS/400 programs 307
output from precompiler 335
live data
using to improve performance 464 M
LOB file reference variable manipulating large objects 145
COBOL 262, 316 mapping error
LOB file reference variables data 37
PL/I 286 margins
LOB host variable C 229
COBOL 261, 315 C++ 229
PL/I 284 COBOL 254
LOB locator FORTRAN 840
COBOL 262, 316 PL/I 281
LOB locators REXX 329
PL/I 285 MARGINS parameter
LOBEVAL.SQB COBOL program listing 153 C 229
LOBEVAL.SQC C program listing 153 C++ 229
LOBLOC.SQB COBOL program listing 150 marker, parameter 213
LOBLOC.SQC C program listing 149 maximum size for large object columns, defining 147
object-relational override consideration
application domain and object-orientation 145 running a program with embedded SQL 346
constraint mechanisms 145 Override Database File (OVRDBF) command 62, 346,
data types 145 367, 459, 462
definition 145 used with RPG for AS/400 /COPY 302
LOBs 145 overview, interactive SQL 349
support for 146
triggers 145
UDTs and UDFs 145
P
package
why use the DB2 object extensions 145
authority to create 557
occurrence data structure
authority to run 557
ILE RPG for AS/400 314
bind to an application 9
RPG for AS/400 301
CCSID considerations for 561
ODBC 199
consistency token 560
ODP (open data path) 459
Create SQL Package (CRTSQLPKG)
ODP implementation and host variable
command 557
summary record 518 authority required 558
open creating
closing 459 authority required 557
determing number 461 effect of ARD programs 577
effect on performance 459 errors during 558
reducing number 459 on local system 560
open cursor RDB parameter 557
during a unit of work 68 RDBCNNMTH parameter 560
open data path 459 TGTRLS parameter 559
definition 388 type of connection 560
information messages 388 unit of work boundary 560
open database connectivity (ODBC) 199 creating on a non-DB2 UDB for AS/400
OPEN statement 213 errors during 558
operation, atomic 373 required precompiler options for DB2 Common
operators, comparison 40 Server 558
OPNQRYF (Open Query File) command 477 unsupported precompiler options 558
optimization 395 DB2 UDB for AS/400 support 557
grouping 442 definition 9, 12, 557
join 426 Delete SQL Package (DLTSQLPKG) command 557
join order 432 deleting 557
nested loop join 426 interactive SQL 360
OPTIMIZE FOR n ROWS clause labeling 560
effect on query optimizer 423 restore 560
optimizer save 560
SQL statement size 559
operation 422
statements that do not require package 559
query index advisor 482
page fault 397
optimizer timed out
paging
summary record 515
interactively displayed data 463
options, precompile
retrieved data 551
improving performance by using 471 parallel data space scan
ORDER BY access method 403
clause 43 parallel key positioning access method 411
using null values with 44 parallel key selection access method 405
data mapping errors 37 parallel pre-fetch
sort sequence, using 50 access method 401
using 51 parallel pre-load
outer join 76 index-based 413
outer-level SELECT 84 table-based 413
output parallel processing
all queries that performed table scans 484 controlling
SQL queries that performed table scans 483 in jobs (CHGQRYA command) 474
output source file member system wide (QQRYDEGREE) value 474
definition 11 parameter markers
overloaded function names and UDFs 158 in functions example 166
precompiler (continued) precompiler parameter (continued)
include file OPTION(*NOGEN) 316, 341
CCSID 334 OPTION(*NOSEQSRC) 312
input to 334 OPTION(*SEQSRC) 299
other preprocessors 334 OPTION(*QUOTE) 254
output from OPTION(*SEQSRC) 312
listing 335 OPTION(*SOURCE) 334
sample 336 OPTION(*XREF) 334, 335
temporary source file member 335 OUTPUT 334
parameters passed to compiler 340 parameters passed to compiler 340
passing PGM 335
host variables 472 PRTFILE 335
record number 337 RDB
reference column 339 Effect on precompile 333
secondary input 334 TIMFMT 313, 317
sequence number 337 TIMSEP 313, 317
source file predicate
CCSID 334 definition 38
containing DBCS constants 334 transitive closure 436
margins 334 Predictive Query Governor 391
source record 337 PREPARE statement
VisualAge C++ for OS/400 342 improving performance 470
warning 343 non-SELECT statement 200
precompiler command restrictions 199
CRTSQLC 819 using 213
CRTSQLCBL 340 prepared statement
CRTSQLCBLI 341 distributed unit of work 576
CRTSQLCI 229, 232, 234, 341 preparing program with SQL statements 333
CRTSQLCPPI 229, 232, 234, 341 preprocessor
CRTSQLFTN 835 usage with SQL C++ program 229
CRTSQLPLI 281, 340 usage with SQL C program 229
CRTSQLRPG 340 with SQL 334
CRTSQLRPGI 341 preventing duplicate rows 71
CRTSQLxxx 51, 558 Print SQL Information (PRTSQLINF) 345, 382, 393
CVTSQLCPP 229, 232, 234, 341 printer file 335
default 468 CCSID 335
description 340 printing current session 357
precompiler file problem handling 221
QSQLTEMP 335 problems
QSQLTEMP1 335 join query performance 438
precompiler parameter problems, solving database 551
*CVTDT 316 process, basic
*NOCVTDT 316, 317 precompiler 333
ALWCPYDTA 464 PROCESS statement
CLOSQLCSR 469 COBOL 254
DATFMT 313, 317 FORTRAN 841
DATSEP 313, 317 processing
DBGVIEW(*SOURCE) 380 data in a view 36
displayed on listing 335 non-SELECT statements 199
INCFILE 334 SELECT statement with SQLDA 201
MARGINS 281, 334, 343 producing reports from sample programs 643
C 229 program
C++ 229 application 379
OBJ 335 compiling application
OBJTYPE(*MODULE) 341 ILE 341
OBJTYPE(*PGM) 341 non-ILE 340
OBJTYPE(*SRVPGM) 341 debugging 380
OPTION(*APOST) 254 definition 11
OPTION(*CNULRQD) 232, 234 Integrated Language Environment (ILE) object 11
OPTION(*CVTDT) 316 non-ILE object 11
OPTION(*NOCNULRQD) 232, 234 performance verification 381
removing all entries from current session 357 RPG for AS/400 program 299 (continued)
Reorganize Physical File Member (RGZPFM) command continuation 299
effect on variable-length columns 458 dynamic SQL coding 298
getting rid of deleted rows 399 ending
report produced by sample programs 643 using LR indicator 307
resource using RETRN statement 307
optimization 395 error and warning message during a compile 344
restriction external file description 302
FOR UPDATE OF 553 host structure
result table 80 array, declaring 301
resume using CREATE DISTINCT TYPE example 171 declaring 300
retaining cursor positions host variable 300
across program call character 303
improving performance 467, 468 declaring 300
all program calls externally described 302
rules 469 numeric 303
Retrieve Message (RTVMSG) command 587 including code 299
retrieving indicator structure 306
data indicator variable 306
from a table. 20 naming convention 299
in reverse order 551 occurrence data structure 301
row sequence numbers 299
using a cursor 60 SQL data types
SELECT statement result determining equivalent RPG 303
cursor, using 212 SQL statements in
RETRN statement sample 628
ending RPG for AS/400 programs 307 SQLCA
return code 38 placement 297
handling in statement label 300
general 221 structure parameter passing 307
running a program with embedded SQL 347 using the SQLDA 298
RETURNS TABLE clause 188, 190, 192 WHENEVER statement 300
reuse deleted records RRN scalar function 77
INSERT 33 rule
Revoke Object Authority (RVKOBJAUT) command 365 host variable, using 218
REVOKE PACKAGE statement 555 retaining cursor positions
REXX 2 program calls 469
coding SQL statements 325, 332 rule 216, 218
SQL statements in SQL with host language, using 215
sample 640 rules that govern operations on large objects 145
ROLLBACK run mode
prepared statements 201 interactive SQL 353
rollback Run SQL Statements (RUNSQLSTM) command 1
rollback required state 572 run-time support
ROLLBACK statement 559 concepts 1
row running
definition 3, 6 dynamic SQL application 199
delete current 61 program with embedded SQL
inserting multiple DDM consideration 346
into a table 69 instruction 346
note 70 override consideration 346
preventing duplicate 71 return code 347
ROWS, INSERT n programs 346
RUNSQLSTM (Run SQL Statements) 357, 358
improving performance 463
command 1, 361
RPG 297, 309
command errors 362
RPG for AS/400 program 309
commitment control 362
/COPY statement 299, 302
RUNSQLSTM (Run SQL Statements) command 794
character host variables 300
RUW (remote unit of work) 555
coding SQL statements 297, 307
comment 299 S
compiler parameters 340 sales using CREATE TABLE example 171
source file (continued) SQL statement processor
member, temporary commitment control 362
output from precompiler 335 example
member, user 11 QSYSPRT listing 363
multiple source in COBOL 255 schemas 362
saving a session in 357, 358 using 361
temporary for precompile 335 SQLCA (SQL communication area)
sourced UDF 174 C 225
special register C++ 225
CURRENT DATE 45 COBOL 251
CURRENT SERVER 45 FORTRAN 837
CURRENT TIME 45 ILE RPG for AS/400 309
CURRENT TIMESTAMP 45 PL/I 279
CURRENT TIMEZONE 45 REXX 325
definition 40 RPG for AS/400 297
SET clause, value 33 SQLCOD
USER 45 FORTRAN 837
specific-name, passing to UDF 189 SQLCODE
specifying C 225
column, SELECT INTO statement 38 C++ 225
UNION ALL 83 COBOL 251
SQL 1 FORTRAN 837
call level interface 2 in REXX 325
introduction 1 PL/I 279
object 5 SQLCODEs
statements definition 221, 587
COBOL 613 description 589
ILE COBOL 613 negative 591
ILE RPG for AS/400 program 634 positive 589
PL/I 606, 621 testing application program 380
REXX 640 SQLD 204
RPG for AS/400 628 SQLD field of SQLDA
types 4 in REXX 326
using host variable 215
SQLDA (SQL descriptor area)
using with host language, concepts and rules 215
allocating storage for 208
SQL-argument, passing to UDF 190, 191
C 226
SQL-argument 188
C++ 226
SQL-argument-ind, passing to UDF 188 COBOL 252
SQL-argument-ind-array, passing to UDF 191 format 204
SQL blocking FORTRAN 838
improving performance 462 ILE RPG for AS/400 310
SQL data types PL/I 280
determining equivalent processing SELECT statement 201
C 246 programming language, use in 203
C++ 246 REXX 325
COBOL 274 RPG for AS/400 298
FORTRAN 842 SELECT statement for allocating storage for
ILE RPG for AS/400 318 SQLDA 208
PL/I 292 SQLDABC 204
REXX 330 SQLDAID 204
RPG for AS/400 303
SQLDATA 206
SQL_FILE_READ, input value option 152
SQLDATA field of SQLDA
SQL information
in REXX 327
summary record 494, 526
SQLERRD field of SQLCA 325
SQL naming convention 4
SQLERRD(3) field of SQLCA
SQL package 3 determining connection status 570
SQL-result, passing to UDF 190, 192 determining number of rows fetched 63
SQL-result 188 SQLERRD(4) field of SQLCA 570
SQL-result-ind, passing to UDF 188, 192 determining connection type 567
SQL-state, passing to UDF 188 determining length of each row retrieved 63
statements 216, 221, 606, 613, 621, 628, 634, 640 subquery processing
(continued) summary record 517
restriction 48 subselect
specifying column 38 combining with the UNION keyword, example 80
SET CONNECTION 555 SET clause, value 34
SQL packages 558 summary records
testing access plan rebuilt 513
in application program 379 arrival sequence 498
using interactive SQL 349, 358 generic query information 520
time value 47 host variable and ODP implementation 518
timestamp value 47 index created 505
UPDATE optimizer timed out 515
assignment operation 216 query sort 508
changing data value 25 SQL information 494, 526
example 33 STRDBMON/ENDDBMON commands 522
WHENEVER 230, 255, 282, 841 subquery processing 517
handling exception condition 222 table locked 511
ILE RPG for AS/400 312 temporary file 509
RPG for AS/400 300 using existing index 502, 530, 531, 533, 535, 536,
WHENEVER SQLERROR 221 537, 538, 539
stopping interactive SQL 357 Symmetric Multiprocessing 2
storage, allocating for SQLDA 208 symmetrical multiprocessing 397
stored procedures 117, 144 sync point manager 567
definition 8 syntax check
parameter passing 128 QSQCHKS 2
indicator variables 132 syntax check mode
table 128 interactive SQL 353
storing large objects 145 syntax for referring to functions 165
STRDBMON (Start Database Monitor) command 479 system naming convention 3
STRDBMON/ENDDBMON commands system table name 17
summary record 522
string assignment
rule using host variable 217
T
table
string search and defining UDFs example 162
adding data to the end 552
string search on BLOBs 162
changing definition 93, 554
string search over UDT example 163
changing information in 25
strong typing and UDTs 172
CL_SCHED (class schedule) 585
STRSQL (Start SQL) command 350, 801
CORPDATA.DEPARTMENT (department) 579
structure parameter passing 473 CORPDATA.EMP_ACT (employee to project
PL/I 294 activity) 581
RPG for AS/400 307 CORPDATA.EMPLOYEE 580
Structured Query Language 1 CORPDATA.PROJECT (project) 584
structured query language package creating
creating 763 CREATE TABLE statement 14
deleting 782 view 29
subfields data management methods 420
ILE RPG for AS/400 314 DB2 UDB for AS/400 sample 579
RPG for AS/400 300 defining name 48
subquery 88 definition 3, 6
basic comparison 86 deleting information in 28
correlated 85, 88 establishing position at the end 551
correlated names and references 91 getting catalog information
definition 84 about column 97
examples 84 getting information
EXISTS keyword 87 from multiple 23
IN keyword 87 from one 20
notes on using IN_TRAY 585
with UPDATE and DELETE 88 inserting
prompting 353 information into 18
quantified comparison 86 multiple rows into 69
search condition 85 joining 75
UDTs (User-defined types) (continued)
   synergy with UDFs and LOBs
      examples of complex applications 177
   why use UDTs 169
union
   C 230
   C++ 230
UNION ALL, specifying 83
UNION keyword
   restriction 553
   using to combine subselects 80
unique constraint
   definition 8
unit of work
   distributed 555
   effect on open cursor 68
   package creation 560
   remote 555
   rollback required 572
unit of work boundary
   package creation 560
unprotected resource 567
unqualified function reference example 167
unqualified reference 158
UPDATE statement
   assignment operation 216
   correlated subquery, using in 91
   description 33
   WHERE clause 25
updating data
   as it is retrieved, restrictions 552
   committable updates 567
   previously retrieved 553
use of UDTs in UNION example 177
user auxiliary storage pool (ASP) 378
user-defined sourced functions on UDTs example 175
user profile
   authorization ID 3
   authorization name 3
user source file member
   definition 11
USER special register 45
using
   a copy of the data 464, 465
   allow copy data (ALWCPYDTA) 464, 465
   blocked insert statement 70
USING
   clause 210
using
   close SQL cursor (CLOSQLCSR) 464, 469
   cursor
      example 56
      retrieve row 60
   date value 47
USING
   DESCRIPTOR clause 213
using
   FETCH statement 462
   index 96
   null value 45
   ORDER BY 51
   parameter markers 462
   parameter passing techniques
      performance improvement 472
   record selection 52
   sort sequence 50
   time value 47
   timestamp value 47
Using
   views 95
using a locator to work with a CLOB value
   example 148
using existing index
   summary record 502, 530, 531, 533, 535, 536, 537, 538, 539
using interactive SQL 349
   after first time 356
   list selection function 354
   prompting 351
   statement entry 351
using JOB parameter 394
using qualified function reference example 166
using SQL
   application programs 395
V
validate mode
   interactive SQL 353
value
   default 14, 18
   inserting
      into table or view 31
VALUES clause 31
variable 230, 249
   host
      REXX 330
   indicator 219
   use of indicator with host structure, example 220
   used to set null value 220
variable-length data
   tips 456
varying-list SELECT statement
   definition 202
   using 202
verification
   performance 381
view
   creating 95
      CREATE VIEW statement 28
      on a table 29
      over multiple tables 30
   definition 3, 7
   limiting access 28
   processing data in 36
   read-only 96
   security 366
   sort sequence 53
   testing 379
   using 95
   WITH CASCADED CHECK 108
W
warning
   test for negative SQLCODEs 221
warning message during a compile 343
   C++ program 343
   C program 343
   COBOL program 343, 344
   PL/I program 343
   RPG program 343, 344
WHENEVER NOT FOUND clause 60
WHENEVER SQLERROR 221
WHENEVER statement
   C 230
   C++ 230
   COBOL 255
   FORTRAN 841
   handling exception condition with 222
   ILE RPG for AS/400 312
   PL/I 282
   REXX, substitute for 329
   RPG for AS/400 300
WHERE clause
   character string 31
   constant 39
   description 38
   example 213
   expression in, using 39
   joining tables 76
   multiple search condition within a 74
   NOT keyword 40
WHERE CURRENT OF clause 61
WITH CASCADED CHECK OPTION 108
WITH CHECK OPTION 108
WITH DATA DICTIONARY clause
   CREATE COLLECTION statement 6
   CREATE SCHEMA statement 6
   creating data dictionary 6
WITH LOCAL CHECK OPTION 109
working with
   index 96
X
X/Open call level interface 2
878 DB2 UDB for AS/400 SQL Programming V4R4
Readers’ Comments — We’d Like to Hear from You
AS/400e
DB2 UDB for AS/400 SQL Programming
Version 4
Overall, how satisfied are you with the information in this book?
How satisfied are you that the information in this book is:
When you send comments to IBM, you grant IBM a nonexclusive right to use or distribute your comments in any way
it believes appropriate without incurring any obligation to you.
Name
Address
Company or Organization
Phone No.
RBAF-Y000-00                                              Cut or Fold Along Line
Fold and Tape          Please do not staple          Fold and Tape
NO POSTAGE
NECESSARY
IF MAILED IN THE
UNITED STATES
IBM CORPORATION
ATTN DEPT 542 IDCLERK
3605 HWY 52 N
ROCHESTER MN 55901-7829
Fold and Tape          Please do not staple          Fold and Tape
RBAF-Y000-00                                              Cut or Fold Along Line
Printed in U.S.A.
RBAF-Y000-00