Complete Mainframe
ISPF & TSO
ISPF
ISPF (Interactive System Productivity Facility) is the standard editor on the mainframe
platform. Two kinds of commands can be entered in the edit/view panel of the ISPF editor:
1. Primary commands, entered on the command line.
2. Line commands, entered in the line prefix area.
Mastering these commands is highly helpful when analyzing or developing application programs.
Primary Command - Meaning

KEYS - Displays the PF keys and their definitions for that panel.
PFSHOW - Shows the PF key definitions at the bottom of the panel. To turn this off, enter PFSHOW OFF.
RESET - Clears all message lines and resets the display of the dataset. RESET LABEL is required to clear labels.
X ALL - Excludes all the lines.
COLS / COLUMNS / COL - Displays a column ruler at the top of the screen. It can be removed with COLS OFF or RESET.
TOP / BOT - Moves to the top or bottom of the file.
HEX - Displays the data in hexadecimal mode. Type HEX OFF to come back to the normal display. Highly helpful for reading computational fields and low-values.
SWAP - Switches between split screens.
SWAP LIST - Lists the available screens.
CUT - With the C or CC line command, copies the lines to temporary storage. CUT .A .B cuts the lines between the labels .A and .B.
CUT DISPLAY - Displays the lines already cut.
CUT APPEND - Appends the lines to the lines already cut.
PASTE - Pastes the cut lines. The line command A or B should be entered before issuing PASTE to indicate after or before which line the paste should occur.
SCRNAME XXXXX - Names the current screen XXXXX. If you are using multiple split screens, you can come back to this screen with the primary command SWAP XXXXX. It is a good convention to name screens after their purpose so that you can return to them easily; by default the screens are numbered.
LOCATE linenumber/label - Locates the line number or the label. A label can be set on any line with the line command .labelname.
RECOVERY ON / RECOVERY OFF / RECOVERY OFF UNWARN, UNDO - With RECOVERY ON, the UNDO primary command reverts the most recent change made in the edit/view session. RECOVERY OFF turns recovery off, and an "UNDO Unavailable" message appears on the first line. RECOVERY OFF UNWARN is the same as RECOVERY OFF, but without the UNDO warning message.
FIND and CHANGE examples:
F Musa - Finds the next occurrence of the string Musa.
F Musa 1 - Finds Musa only in column 1.
F Musa 1 30 - Finds Musa only between columns 1 and 30.
F Musa ALL - Finds all occurrences of Musa.
F Musa FIRST/LAST - Finds the first/last occurrence of Musa.
F Musa NEXT/PREV - Finds the next/previous occurrence of Musa.
F Musa CHARS - Finds Musa even when it is part of a larger word (the default).
F Musa WORD - Finds Musa only when it is delimited by blanks or special characters.
F Musa PREFIX/SUFFIX - Finds Musa only at the beginning/end of a word.
F Musa X - Finds Musa only in excluded lines (NX restricts the search to non-excluded lines).
F * NEXT - Repeats the find using the previously entered search string.
C ALL Musa Muthu - Changes all occurrences of Musa to Muthu.
Special string (picture) characters that can be used with FIND and CHANGE:

P'=' - any character
P'.' - any invalid (non-displayable) character
P'#' - any numeric character (0-9)
P'@' - any alphabetic character
P'-' - any non-numeric character
P'<' - any lower-case alphabetic character
P'>' - any upper-case alphabetic character
P'$' - any special character
X'nn' - the character with hexadecimal value nn
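For example (the column numbers are only illustrative):
F P'.' ALL          - finds every non-displayable character in the member
F P'#' 10 12        - finds a numeric character in columns 10 to 12
C X'00' ' ' ALL     - changes every hexadecimal null to a blank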
Default PF key settings:
PF1 - HELP
PF2 - SPLIT: splits the screen at the cursor location
PF3 - END: exit
PF4 - RETURN: exit and return
PF5 - RFIND: finds the next occurrence of the last F command
PF6 - RCHANGE: changes the next occurrence of the last C command
PF7 - UP
PF8 - DOWN
PF9 - SWAP
PF10 - LEFT
PF11 - RIGHT
PF12 - RETRIEVE: repeats the previous command-line command
LINE Commands:

C - Copies this line. C 10 copies 10 lines. It is followed by the line command A or B.
CC - Block form: marks the first and last line of a block to be copied.
M - Moves this line. M 10 moves 10 lines. It is followed by the line command A or B.
MM - Block form: marks the first and last line of a block to be moved.
A/B - After (A) / Before (B) this line, place the copied/moved line(s). A10 repeats the copied/moved lines 10 times.
Rn - Repeats the current line n times. R alone repeats it once.
RR - Block form: repeats the block of lines bounded by the RR pair.
UC - Changes this line's content to upper case. UCC is the block form.
LC - Changes this line's content to lower case. LCC is the block form.
D - Deletes the line. D n deletes n lines starting from this line. DD is the block form.
I - Inserts one line. After data is entered on the inserted line, the next line is inserted, which is useful for line-by-line data entry. I 10 inserts 10 lines.
X - Excludes the line from display. XX is the block form.
S - Shows (redisplays) excluded lines. F n / L n show the first/last n lines of an excluded block.
)n - Shifts the line data n columns to the right. ))n is the block form.
(n - Shifts the line data n columns to the left. ((n is the block form.
TS n - Text split: splits the line at the cursor position.
TE - Text entry: allows free-form text entry from this line.
TF n - Text flow: reflows a paragraph of text up to column n.
BNDS - Displays the bounds line, which sets the left and right boundaries for editing.
MASK - Displays the mask line. Data entered on the mask line is automatically placed on every line subsequently added with the I, TS or TE command.
O (overlay) - Used with the copy/move line commands; the copied or moved data fills only the blank positions of the overlaid line(s). Example:
C  0300 *
O  0400      MVSQuest
Result:
   0300 *
   0400 *    MVSQuest
Command - Purpose

HELP [COMMAND] - Displays the purpose and syntax of the command. The source of this information is SYS1.HELP. Ex: HELP ALLOCATE

SEND - Used to communicate with other users.
Ex: SEND 'REFRESHER COMPLETED' USER(SMSXL86) LOGON
The message REFRESHER COMPLETED is sent to user SMSXL86. If the user is not logged on, the message is displayed the next time the user logs on. LOGON is an optional parameter. A message can be at most 115 characters.

LISTCAT - Used to list entries in the MVS catalog. The syntax and the available options are explained in the VSAM - IDCAMS section.
Ex: LISTCAT ENTRIES(SMSXL86.TEST.SOURCE) ALL

LISTDS - Used to get information about one or more datasets.
LISTDS dataset-name MEMBERS|HISTORY|STATUS|LEVEL
MEMBERS lists all the members of a PDS. This command is useful in REXX for processing all the members of a PDS.

RENAME - Used to rename a dataset. Generic dataset names are allowed.
Ex: RENAME BPAMAIN.TEST.* BPMAIN.UNIT.* renames all the matching datasets.
Other commonly used TSO commands:
DELETE - Deletes a dataset.
ALLOCATE (existing dataset) - Allocates an existing dataset to a file (DD) name for the session.
ALLOCATE (new dataset) - Creates and allocates a new dataset.
FREE - De-allocates a previously allocated dataset or file name.
ISRDDN - Displays the datasets currently allocated to the TSO session.
CALL - Invokes a program (load module).
SUBMIT - Submits a JCL member to JES for execution.
CANCEL - Cancels a submitted job.
STATUS - Displays the status of submitted jobs.
ALTER - Alters attributes of a catalog entry.
TSO Commands can be executed in batch (JCL) using terminal monitor program
IKJEFT01.
//RUNBATCH EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
RENAME MVSQUEST.EMP.DATA LEADSOFT.EMP.DATA
RENAME MVSQUEST.SALES.DATA LEADSOFT.SALES.DATA
/*
The above step renames two datasets from the MVSQUEST qualifier to the LEADSOFT
qualifier. Any high-volume manual task can be completed in a matter of minutes with a
good knowledge of ISPF and TSO commands and a little exposure to REXX.

JCL
Any business application is divided into logical modules and these modules are
developed using programming languages. These programs should be executed in a
pre-defined sequence to achieve the business functionality.
JCL (Job Control Language) is used to DEFINE and CONTROL the JOB to the
operating system.
Definition involves defining the programs to be executed, the data for those programs
and the sequence in which they run. CONTROL involves controlling the execution or
bypassing of a program in the sequence based on the result of the prior program's
execution.
JCL Coding Sheet
Columns 1-2: //
Column 3: NAME begins here (up to column 10)
OPERATION and OPERANDS follow, each separated by at least one space (typically from around columns 11 and 16).
Columns 73-80: ignored by the system (sequence/identification field).

JCL statements have // in columns 1 and 2. An asterisk in column 3 (//*) indicates that
the line is a comment line.
NAME is an optional field. If coded, it should start at column 3 and can have a maximum of
8 characters. The first character must be alphabetic or a national character (@, # or $);
the remaining characters can be any alphanumeric or national characters.
OPERATION follows the NAME field, separated by at least one space. If NAME is not coded,
OPERATION can start at column 4 itself. Typical OPERATION keywords are JOB, EXEC and DD.
OPERANDS are the parameters for the operation. They follow OPERATION, again separated by
at least one space. Parameters are separated by commas with no intervening spaces. If the
operands do not fit on one line, they can be continued: end the current line at or before
column 71 with a comma, code // in columns 1 and 2 of the next line, and resume the
operands anywhere between columns 4 and 16.
COMMENT FIELD - A comment field optionally follows the OPERAND field, preceded by at
least one blank.
End of job is identified by the NULL statement. A NULL statement has // in columns 1 and 2
with no NAME, OPERATION or OPERAND fields. Statements coded after a NULL statement are
not processed.
DELIMITER - Sometimes we pass data in the JCL itself; this is called in-stream data. The
start of the data is identified by * in the operand field of a DD statement, and the
DELIMITER indicates the end of the data. /* in columns 1 and 2 is the default delimiter.
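A minimal sketch pulling these rules together (the program and dataset names are illustrative): the INFILE operands are continued by ending the first line with a comma and resuming between columns 4 and 16, the SYSIN data uses a non-default delimiter so that /* can appear inside the data, and the final null statement marks the end of the job.
//MYSTEP  EXEC PGM=MYPGM
//INFILE  DD DSN=MY.INPUT.DATA,
//            DISP=SHR
//SYSIN   DD *,DLM=$$
RECORD ONE
/* THIS LINE IS TREATED AS DATA BECAUSE DLM=$$ IS IN EFFECT
$$
//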
JES3
JES3 runs a centralized environment: a global processor controls all the other processors
and assigns jobs to them, and datasets are allocated before job execution begins. (Under
JES2, by contrast, each system controls its own jobs and datasets are allocated as each
step starts.)
JCL Statements
JOB - It should be the first statement in the JCL. It supplies accounting information and
job-related information to the system. If the member being submitted contains multiple
job cards, then multiple jobs are submitted; these jobs run concurrently or one after the
other depending on job name, class and initiator availability.
EXEC - The name of the program or procedure to be executed is coded here. Every EXEC
statement in a job identifies one step. A maximum of 255 EXEC statements can be coded in
a job.
DD - Data Definition. The dataset details are coded here. A dataset contains the data to
be processed by the program or the data produced by the program. A maximum of 3273 DD
statements can be coded in a step.
Abnormal End (ABEND) & ERROR
User ABEND (Unnnn): When some unexpected condition occurs in the data passed to it, the
program calls an abend routine and abends the step with appropriate displays. This abend
is issued by the application based on its requirements.
JOB Statement
Sample Syntax:
//JOBNAME JOB (ACCOUNTING INFO),(PROGRAMMER NAME),
// TIME=(MINUTES,SECONDS),CLASS=A,MSGCLASS=A,PRTY=14,ADDRSPC=VIRT,
// REGION=nK,MSGLEVEL=(a,b),COND=(code,operator),TYPRUN=SCAN
JOBNAME
It identifies the name of the job. The job is identified in the JES spool by this name.
The naming rules were covered in the coding sheet section.
ACCOUNTING INFO (Mandatory. Installation dependent.)
1. Resource usage charges are charged to the account mentioned here.
2. If you don't know your installation account, you cannot submit the job - much like you
cannot withdraw cash from a bank without an account.
3. A maximum of 142 characters can be coded as accounting information.
PROGRAMMER NAME
ADDRSPC
It is used to specify whether the job will run in real storage or virtual storage.
Syntax: ADDRSPC={REAL|VIRT}
REAL - Allocation is done in real storage and the program is not pageable.
VIRT - Allocation is done in virtual storage and the program is pageable.
REGION
It specifies the maximum amount of storage the job (or the step, when coded on EXEC) may
use, for example REGION=4096K or REGION=8M.
RESTART
The RESTART parameter allows restarting the job from a particular step.
Syntax: RESTART=step-name
RESTART=* means restart from the beginning.
To restart from a step inside a procedure, code RESTART=jclstep.procstep, where jclstep
is the name of the JCL EXEC statement that invokes the PROC and procstep is the name of
the step inside the procedure where execution should resume.
RESTART ignores any condition on the step being restarted, and the restarted step can
even be one inside the ELSE part of an IF..ELSE..ENDIF.
TYPRUN
TYPRUN=SCAN checks the JCL for syntax errors without executing the job; TYPRUN=HOLD
holds the job in the input queue until it is released by the operator.
TIME
It defines the maximum allowable CPU time for the job. The parameter can also be coded on
the EXEC card, where it defines the CPU limit of the step.
Syntax: TIME=(MINUTES,SECONDS), MINUTES <= 1440 and SECONDS < 60
TIME=NOLIMIT/1440/MAXIMUM means the job can use CPU for an unlimited time; TIME=0
produces unpredictable results.
If TIME is coded on both the JOB and the EXEC statement, the time permitted for the step
is whichever is smaller: the EXEC time limit or the time left from the JOB time limit.
If a job uses more CPU time than allowed, it abends with system abend code S322. If there
is no TIME parameter, the CPU time limit pre-defined for the CLASS is in effect.
NOTIFY
TSO User-id to whom the job END / ABEND / ERROR status should be
notified. NOTIFY=&SYSUID will send the notification to the user who submitted the
job.
COND
1. It is used for conditional execution of the job based on the return codes of the job
steps.
2. The return code of every step is checked against the condition coded on the JOB card.
If the condition is found TRUE, all the remaining steps are bypassed.
3. A maximum of eight conditions can be coded in the COND parameter. With multiple
conditions, if ANY condition is TRUE the job stops proceeding further.
Syntax:
COND=(CODE,OPERATOR,STEPNAME)
STEPNAME is optional. If coded, only that step's return code is checked against CODE with
the OPERATOR; if omitted, the return codes of all steps are checked. If the comparison is
true, all the following steps are bypassed.
CODE can be 0-4095.
OPERATOR can be GT, LT, GE, LE, EQ or NE.
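For instance (the job name is illustrative), coded at the JOB level:
//MVSQUEST JOB (1234),'COND DEMO',CLASS=A,MSGCLASS=X,
//         COND=(4,LT)
Here (4,LT) means: bypass the remaining steps as soon as 4 is less than the return code of any step, i.e. as soon as any step ends with a return code greater than 4.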
COND can also be coded on the EXEC statement; step-level control is more popular than
job-level control. On the EXEC statement you may also find the ONLY and EVEN keywords in
the COND parameter.
COND=ONLY allows the step to execute only if a prior step ABENDed.
COND=EVEN allows the step to execute regardless of any prior ABENDs.
Consider the COND parameter coded on an EXEC statement:
Ex: //STEP2 EXEC PGM=PGM3,COND=((16,GE),(90,LE,STEP1),ONLY)
The step is executed only if all of the following hold:
- a preceding step abnormally terminated, AND
- the return codes from all preceding steps are 17 or greater, AND
- the return code from STEP1 is 89 or less.
EXEC Statement

STEPNAME
It is an OPTIONAL field, but it is needed if you want to restart the job from this step,
and for the same reason it should be unique within the job.
PGM or PROC
PGM=program-name executes a program (load module); PROC=procedure-name (or simply the
procedure name by itself) invokes a catalogued or in-stream procedure.
PRTY / DPRTY
PRTY assigns priority to a job and DPRTY assigns dispatching priority to a job step.
Syntax: DPRTY=(value1,value2); value1 and value2 can be 0-15. The dispatching priority is
calculated as (value1*16 + value2).
IF/THEN/ELSE/ENDIF
IF/THEN/ELSE/ENDIF provides a more readable alternative to COND for conditionally
executing steps based on return codes or abend status, for example
IF (STEP1.RC = 0) THEN ... ELSE ... ENDIF.
DISP (Disposition)
Syntax: DISP=(status,normal-termination-disposition,abnormal-termination-disposition)
Normal-termination-status:
CATLG - Keeps the dataset and creates a catalog entry for it.
DELETE - Deletes the dataset and releases its space.
PASS - Passes the dataset for use by a subsequent step in the same job.
KEEP - Keeps the dataset but does not catalog it.
UNCATLG - Keeps the dataset but removes its catalog entry.
Abnormal-termination-status:
PASS is not allowed. The meanings of CATLG, UNCATLG, KEEP and DELETE are the same as for
the normal-termination status.
The absence of any positional parameter must be indicated with a comma, e.g.
DISP=(,,CATLG). If the dataset is a new dataset and DISP is not coded, the default in
effect is DISP=(NEW,DELETE,DELETE).
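For instance (the dataset names are illustrative):
//NEWFILE  DD DSN=MVSQUEST.SALES.MONTHLY,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(5,2),RLSE)
//TEMPWORK DD DSN=&&WORK1,DISP=(NEW,PASS),UNIT=SYSDA,SPACE=(TRK,(5,5))
The first DD catalogs the dataset if the step ends normally and deletes it if the step abends; the second creates a temporary dataset that is passed to later steps.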
DCB (Data Control Block)
DCB specifies attributes of the records in the dataset.
Syntax
DCB=(LRECL=NN,BLKSIZE=YY,RECFM=Z,DSORG=MM,BUFNO=nn)
1. The program itself can supply most of the DCB attributes: the file definition in the
program determines LRECL, BLKSIZE and RECFM, the RESERVE clause specifies BUFNO, and
DSORG can be assumed from the name of the dataset and the directory-space allocation of
the SPACE parameter.
2. Usually, for an existing dataset we don't have to code DCB parameters. They are
available in the dataset label, which is stored in the VTOC (DASD) or along with the
dataset (tape) at dataset creation time.
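A hedged sketch of a new dataset with its DCB coded explicitly (names and sizes are illustrative):
//REPORT  DD DSN=MVSQUEST.EMP.REPORT,DISP=(NEW,CATLG,DELETE),
//           UNIT=SYSDA,SPACE=(TRK,(10,5),RLSE),
//           DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920,DSORG=PS)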
LABEL
Syntax: LABEL = (Dataset-sequence-number
,label-type
,PASSWORD | NOPWREAD
,IN | OUT
,RETPD=nnn |EXPDT = (yyddd|yyyy/ddd))
Dataset Sequence number - identifies the relative position of a dataset on a
tape/cart volume. Should be 1 through 4 decimal digits. Omit this parameter if
access is being made to the first dataset on the tape volume.
Label - indicates the label type of the tape/cart volume.
SL - indicates that a dataset has IBM standard labels. Default value.
NL - indicates that a tape dataset has no labels.
NSL - indicates that a tape dataset has nonstandard labels.
SUL - indicates that a tape dataset has both IBM standard and user labels.
BLP - requests that the system bypass label processing for a tape dataset.
PASSWORD - indicates that a dataset cannot be read, changed, deleted or written to
unless the correct password is specified.
NOPWREAD - indicates that a dataset cannot be changed, deleted or written, unless
the correct password is specified. No password is necessary for reading the dataset.
IN - indicates that a dataset opened for I/O is to be read only.
OUT - indicates that a dataset opened for I/O is to be written only.
RETPD / EXPDT - indicates the retention period and the expiration date for a
dataset.
Ex: LABEL=EXPDT=04121 (Dataset expires on 121st day of 2004)
LABEL=RETPD=200 (Dataset is retained for 200 days)
SPACE
It is used to request space for a new dataset. It is mandatory for all NEW datasets on
DASD.
SPACE=({TRK | CYL | blklgth},(primary-qty,secondary-qty,directory)
      [,RLSE][,CONTIG][,MXIG][,ROUND])
TRK/CYL - Requests that space be allocated in tracks or cylinders.
blklgth - Specifies the average block length, in bytes, of the data (a decimal number
from 1 through 65535). When specified, the system calculates the space to allocate from
this value together with the BLKSIZE field of the DCB parameter.
Primary-qty - Specifies the amount of primary space required in terms of the space
unit (tracks/cylinders/number of data blocks). One volume must have enough space
for the primary quantity. If a particular volume is requested and it does not have
enough space available for the request, the job step is terminated.
Second-qty - Specifies the number of additional tracks, cylinders, blocks to be
allocated, if additional space is required.
Directory - Specifies the number of 256-byte records needed in the directory of a
PDS. (In every block we can store 5-6 members)
RLSE - requests that space allocated to an output dataset, but not used, is to be
released when the dataset is closed. Release occurs only if dataset is open for output
and the last operation was a write.
CONTIG - requests that space allocated to the dataset must be contiguous. It affects
only primary space allocation.
MXIG - Requests that the space be allocated in the largest contiguous area of space
available on the volume. It affects only the primary allocation.
ROUND - When the first parameter specifies the average block length, this parameter
requests that the allocated space be equal to an integral number of cylinders. Otherwise
it is ignored.
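For instance (names are illustrative), allocating a PDS with directory blocks and a large sequential file:
//MYPDS   DD DSN=MVSQUEST.TEST.SOURCE,DISP=(NEW,CATLG,DELETE),
//           UNIT=SYSDA,SPACE=(TRK,(30,10,15))
//BIGFILE DD DSN=MVSQUEST.HIST.BACKUP,DISP=(NEW,CATLG,DELETE),
//           UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE,CONTIG)
The 15 in the first DD requests 15 directory blocks; RLSE in the second returns any unused cylinders when the file is closed.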
Extents
The primary quantity is allocated as the first extent; each time the dataset runs out of
space, another extent of the secondary quantity is allocated. A non-VSAM dataset can
occupy at most 16 extents on a volume.
UNIT
DEFER
- Asks the system to assign the dataset to the device but requests that
the volume(s) not be mounted until the dataset is opened. DEFER is ignored for a
new dataset on direct access.
AFF=ddname - Requests that the system allocate different datasets residing on different
removable volumes to the same device during step execution. The ddname is that of an
earlier DD statement in the same step. It reduces the number of devices used in a job
step.
Ex: UNIT=(TAPE,,DEFER) UNIT=AFF=DD1
VOLUME
VOL identifies the volume(s) on which the dataset resides or is to be placed, either
directly (VOL=SER=serial-number) or by reference to another dataset or DD statement
(VOL=REF=dsname or VOL=REF=*.stepname.ddname).
All the parameters of the JOB, EXEC and DD statements can be broadly classified into two
types: POSITIONAL and KEYWORD parameters.
A parameter whose meaning is defined by its position is a positional parameter. Omitting
a positional parameter must be indicated to the system with a comma. Ex: accounting
information and programmer name on the JOB card.
Keyword parameters follow the positional parameters and can be coded in any order. All
the parameters suffixed by = are keyword parameters; PGM= and PROC= are exceptions to
this rule - they are positional parameters.
In-stream data
The data passed in the JCL stream along with the JCL statements is called in-stream data.
Syntax: //SYSIN DD followed by *, DATA, or DATA,DLM=xx

* - The data follows from the next line and ends when // or /* appears in columns 1-2, so
// and /* cannot be passed to the program.
//EMPFILE DD *
2052MUTHU
1099DEV
/*

DATA - The data follows from the next line and ends when /* appears in columns 1-2, so /*
cannot be passed to the program (but // can, which is useful for passing JCL as data).
//SYSUT1 DD DATA
//STEP1 EXEC PGM=INV1040
//INVLSTA DD SYSOUT=A
//INVLSTB DD SYSOUT=A
/*

DATA,DLM=xx - The data follows from the next line and ends when the characters coded in
DLM appear in columns 1-2.
//SYSIN DD DATA,DLM=##
//EMPFILE DD *
2052MUTHU
1099DEV
/*
##
OTHER Statements
OUTLIM
It limits the number of print lines. The limit ranges from 1 to 16777215. The job is
terminated if the limit is reached.
//name DD SYSOUT=*,OUTLIM=3000
If the program tries to write the 3001st line, the job will ABEND with S722.
FORMS
Specifies the type of forms on which the SYSOUT datasets should be printed. It is 1-8
alphanumeric or national characters. The SYSOUT DD FORMS parameter overrides the OUTPUT
statement's FORMS parameter.
//name OUTPUT FORMS=form-name
FREE
The datasets are allocated just before the execution of step and de-allocated
after the execution of step. FREE parameter de-allocates the file as soon as the file is
closed. //ddname DD SYSOUT=X,FREE=CLOSE
INCLUDE
The INCLUDE statement copies JCL statements from a member of a PDS into the job at the
point where it is coded. Keeping commonly used statements in one member means amendments
need only be made in one place, but it can make the JCL fragmented and more difficult to
read/maintain in a live environment.
// INCLUDE MEMBER=MEMBER1
MEMBER1 should exist in the procedure library; procedure libraries are coded using the
JCLLIB statement. INCLUDE must not be used to execute a PROC. It is possible to nest up
to 15 levels of INCLUDE statements.
Concatenation Rules
Concatenation allows naming of more than one dataset in a single input file
without physically combining them:
//STEPLIB DD DSN=PROD.LIBRARY,DISP=SHR
//        DD DSN=TEST.LIBRARY,DISP=SHR
//        DD DSN=USR.LIBRARY,DISP=SHR
In this case the PROD, TEST and USR libraries are concatenated.
1. 16 PDS or 255 sequential datasets can be concatenated.
2. LRECL and record format should be the same.
3. If the block sizes are different, the dataset with the largest block size should be
coded first.
4. The datasets may reside on different devices and device types.
REFERBACK
The backward reference, or refer-back, permits you to obtain information from a previous
JCL statement in the job stream. The asterisk (*) is the refer-back operator. It improves
consistency and makes coding easier.
DCB, DSN, VOL=SER, OUTPUT and PGM can be referred back.
Refer back can be done using the following ways:
1.Another DD of the same step will be referred.
*.DDNAME
2.DD of another step can be referred
*.STEPNAME.DDNAME (DDNAME of the STEPNAME)
3.DD of another proc step can be referred.
*.STEP-INVOKING-PROC.PROC-STEP-NAME.DDNAME
STAR in the SYSOUT parameter refers back to MSGCLASS of JOB card.
Refer-back example:
//STEP1    EXEC PGM=TRANS
//TRANFILE DD DSNAME=AR.TRANS.FILE,DISP=(NEW,KEEP),
//            UNIT=SYSDA,VOL=SER=MPS800,
//            SPACE=(CYL,(5,1)),
//            DCB=(DSORG=PS,RECFM=FB,LRECL=80)
//TRANERR  DD DSNAME=AR.TRANS.ERR,DISP=(NEW,KEEP),
//            UNIT=SYSDA,VOL=SER=MPS801,
//            SPACE=(CYL,(2,1)),
//            DCB=*.TRANFILE
//STEP2    EXEC PGM=TRANSEP
//TRANIN   DD DSNAME=*.STEP1.TRANFILE,DISP=SHR
//TRANOUT  DD DSNAME=AR.TRANS.A.FILE,DISP=(NEW,KEEP),
//            UNIT=SYSDA,VOL=REF=*.STEP1.TRANFILE,
//            SPACE=(CYL,(5,1)),
//            DCB=*.STEP1.TRANFILE
...
//STEP5    EXEC PGM=*.STEP3.LOADMOD
Special DD names
STEPLIB
It follows EXEC statement. Load modules will be checked first in this library
and then in the system libraries. If it is not found in both places, then the JOB would
ABEND with S806 code.
JOBLIB
It follows the JOB statement. The load modules of any steps (EXEC) that don't have their
own STEPLIB are searched for in this PDS. If not found, the system libraries are searched;
if a module is not found there either, the job abends with S806.
JCLLIB
It also follows the JOB statement and names the private libraries to be searched for
procedures and INCLUDE members, e.g. // JCLLIB ORDER=(SMSXL86.TEST.PROCLIB).
In case of an ABEND, one of the following three datasets will be useful. If more than one
of them is coded, the last coded DD is the one in effect.
SYSUDUMP
Prints the program area, contents of registers, and gives a trace back of
subroutines called. It will be in hexadecimal format.
SYSABEND
Same as SYSUDUMP, but also prints the system nucleus. Don't use unless you
need the nucleus. It will be in hexadecimal format.
SYSMDUMP
Same information as SYSABEND, but the dump is written in machine-readable (unformatted)
form. It is used to store dumps in a dataset to be processed later by a program.
JOBCAT and STEPCAT
The datasets used in step are first checked in the STEPCAT (ICF or VSAM
Catalog) before checking in system catalog. If no STEPCAT in the step and there is a
JOBCAT, then the datasets are first searched in JOBCAT before checking in system
catalog.
SYSIN
The conventional DD name used to supply in-stream control statements or data to the
program being executed.
Procedures
Set of Job control statements that are frequently used are defined separately
as a procedure and it can be invoked as many times as we need from the job.
The use of procedures helps in minimizing duplication of code and probability of
error.
If a procedure is defined in the same job stream, it is called an in-stream procedure.
In-stream procedures are coded before the first EXEC statement in the job; the definition
starts with a PROC statement and ends with PEND. Alternatively, procedures can be saved
in a PDS and invoked from the job; these are called catalogued procedures. One procedure
can call another - this is called nesting, and nesting is possible up to 15 levels.
In-stream Procedure
//JOB1    JOB
//PROC1   PROC
//STEP1   EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD DISP=SHR,DSN=MT0012.MSG(TEST)
//        PEND
//STEP01  EXEC PROC1

Catalogued Procedure
//JOB1    JOB
//        JCLLIB ORDER=(MT0012.PROC.PDS)
//STEP01  EXEC PROC1

MT0012.PROC.PDS(PROC1):
//STEP1   EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD DISP=SHR,DSN=MT0012.MSG(TEST)
Procedure Modification
COND, TIME and PARM values of an EXEC statement in the procedure can be
added/modified/nullified from the invoking JCL in the following way:
//STEP1 EXEC PROC-NAME,PARAMETER-NAME.STEPNAME-IN-PROC=NEW-VALUE
Ex: PROC COBCLG has the statement
//COB EXEC PGM=IGYCRCTL,REGION=400K
// EXEC COBCLG,REGION.COB=1M     (modifies REGION of step COB)
// EXEC COBCLG,TIME.COB=(0,10)   (adds TIME to step COB)
// EXEC COBCLG,REGION.COB=       (nullifies REGION of step COB)
Other Rules:
1. Multiple overrides are allowed, but they must follow the step order: first override
the parameters of step 1, then step 2, then step 3. Overrides coded in the wrong order
are IGNORED.
2. If the step name is not coded on the override, the system applies the override to the
first step alone.
// EXEC COBCLG,REGION=512K
Procedure Modification - DD Statements
A symbolic is a placeholder in a PROC. The value for the symbolic is supplied when the
PROC is invoked (&symbol=value). If the value is not provided at invocation, the default
value coded in the PROC definition is used for substitution.
Ex: If you want to override the UNIT parameter value of all the DD statements, define it
as a symbolic parameter in the proc.
Catalogued Procedure: PROC1
//PROC1 PROC UNIT=SYSDA
//S1    EXEC PGM=TEST1
//DD1   DD UNIT=&UNIT
//DD2   DD UNIT=&UNIT
//STEP1 EXEC PROC1,UNIT=TEMPDA sets &UNIT to TEMPDA for this run of the procedure.
Statements Not Allowed in a Procedure
You can place most statements in a procedure, but there are a few
exceptions. Some of these exceptions are:
1. The JOB statement and JES2/JES3 Control statements.
2. The JOBCAT and JOBLIB statement.
3. An instream procedure (an internal PROC/PEND pair)
4. SYSIN DD *, DATA statements
Nested Procedures-Add/Override/Nullification is applicable at only one level. In other
words, if PROCA calls PROCB and PROCB calls PROCC, then no statement in PROCC
can be overridden from PROCA invocation. Only PROCB can do that.
Procedure Example

SMSXL86.TEST.PROCLIB(EMPPROC):
//EMPPROC PROC CLASS='*',SPACE='1,1'
//STEP1A  EXEC PGM=EMPPGM
//SYSOUT  DD SYSOUT=&CLASS
//EMPMAST DD DSN=&HLQ..EMPLOYEE.EDS,DISP=SHR
//        DD DSN=&HLQ..EMPLOYEE.IMR,DISP=SHR
//        DD DSN=&HLQ..EMPLOYEE.VZ,DISP=SHR
//EMPOUT  DD DSN=&&INVSEL,DISP=(NEW,PASS),
//           UNIT=SYSDA,SPACE=(CYL,(&SPACE))
//EMPCNTL DD DUMMY
//* EMPCNTL is a control card; any in-stream data can be supplied at invocation.
//*
//INV3020 EXEC PGM=EMPRPT
//SYSOUT  DD SYSOUT=&CLASS
//INVMAST DD DSNAME=&&INVSEL,DISP=(OLD,DELETE)
//INVSLST DD SYSOUT=&CLASS
Default values are defined for the CLASS and SPACE symbolic parameters on the PROC
statement, and &&INVSEL is a temporary dataset.

SMSXL86.TEST.JCLLIB(EMPJCL):
//EMPJCLA JOB (1000,200),CLASS=A,MSGCLASS=Q,NOTIFY=&SYSUID
//PROCLIB JCLLIB ORDER=(SMSXL86.TEST.PROCLIB)
//        SET SPACE='1,1'
//*A value is given for the symbolic parameter SPACE.
//*A PARM is added to STEP1A and the value for symbolic parameter HLQ is supplied.
//STEP01  EXEC EMPPROC,PARM.STEP1A='02/11/1979',HLQ=PROD
//STEP1A.EMPMAST DD
//        DD DSN=PROD.EMPLOYEE.CTS,DISP=SHR
//*Instead of PROD.EMPLOYEE.IMR, the PROD.EMPLOYEE.CTS dataset is used, whereas the
//*other two datasets, PROD.EMPLOYEE.EDS and PROD.EMPLOYEE.VZ, retain their
//*positions in the concatenation.
//STEP1A.EMPOUT DD UNIT=TEMPDA
//*The UNIT parameter of the EMPOUT file is modified.
//STEP1A.EMPCNTL DD *
DESIG=SSE
/*
//*The EMPCNTL control card value is passed.
//STEP1A.EMPOUT2 DD DSN=PROD.EMPLOYEE.CONCAT,
//           DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//           SPACE=(CYL,(10,10))
//*EMPOUT2 is a file added to step STEP1A.
In the above example, CLASS retains the default value coded on the PROC definition
statement (CLASS='*').
IEBCOPY
Control statements and their meanings:
COPY - The function is COPY.
INDD - Points to the input dataset.
OUTDD - Points to the output dataset; it should be coded on the same line as COPY.
SELECT MEMBER - Specifies the members to be copied/replaced. Syntax:
SELECT MEMBER=((name,name-in-output,R)), where R replaces a like-named member if it
already exists.
EXCLUDE MEMBER - Specifies the members to be excluded from the copy.
LIST - Controls whether the copied members are listed in SYSPRINT.

//SYSIN DD *
  COPY OUTDD=OUTPUT,INDD=(INPUT01,(INPUT02,R)),LIST=NO
/*
It says DD statements INPUT01 and INPUT02 are the input files and OUTPUT is the output
file. Note the 'R' in (INPUT02,R): it instructs IEBCOPY to replace like-named members.
LIST=NO indicates that the names of the copied members need not be listed in the SYSPRINT
dataset.
IEBCOPY-CONTROL CARD FOR SELECTIVE COPY/REPLACE
  COPY OUTDD=OUTPUT,INDD=INPUT01
  SELECT MEMBER=((MEM1,NEWNAME,R),(MEM2,,R))
MEM1 is copied into OUTPUT as NEWNAME; if NEWNAME already exists, it is replaced. MEM2 is
copied under its own name, replacing any existing MEM2.
IEBCOPY-CONTROL CARD FOR OMITTING SELECTED MEMBERS
COPY OUTDD=OUTPUT,INDD=INPUT01
EXCLUDE MEMBER=(MEM1,MEM2)
All the members except MEM1 and MEM2 are copied into OUTPUT from INPUT01.
IEBCOPY-Complete step for Compressing PDS
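A typical compress step (the library name is illustrative) codes the same DD for both INDD and OUTDD, so the PDS is compressed in place:
//COMPRESS EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//MYPDS    DD DSN=SMSXL86.TEST.SOURCE,DISP=OLD
//SYSIN    DD *
  COPY OUTDD=MYPDS,INDD=MYPDS
/*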
IEBGENER
Control statements and their meanings:
GENERATE - The first statement; it sets the values for MAXNAME, MAXGPS, MAXLITS and
MAXFLDS.
MAXNAME - The maximum number of MEMBER statements that can follow (during member
generation). Syntax: MAXNAME=3
MAXGPS - The maximum number of IDENT statements that can follow (during member
generation).
MAXFLDS - The maximum number of FIELD statements that can follow (during reformatting).
Syntax: MAXFLDS=10
MAXLITS - The maximum size of the literals used during reformatting.
MEMBER - Identifies the name of the member to be created. Syntax: MEMBER NAME=MEM1
RECORD IDENT - Usually follows a MEMBER statement to identify the last record to be
copied from the input dataset into that member.
RECORD IDENT=(length,literal,start-column)
Example: RECORD IDENT=(3,'MVS',1) means the last record copied into the member from the
input dataset has MVS in columns 1-3.
RECORD FIELD - Used for reformatting the records of the input file.
RECORD FIELD=(length,literal or input-column,conversion,output-column)
The output column says where the field should be placed in the output file. Conversion
can be ZP or PZ: PZ means the input packed-decimal field is converted into zoned format,
and ZP is the reverse.
IEBGENER - SYSIN CARD FOR CREATING THREE MEMBERS FROM AN INPUT PS FILE
//SYSIN DD *
  GENERATE MAXNAME=3,MAXGPS=2
  MEMBER NAME=MEMB1
  RECORD IDENT=(8,'11111111',1)
  MEMBER NAME=MEMB2
  RECORD IDENT=(8,'22222222',1)
  MEMBER NAME=MEMB3
//
IEBGENER creates three members. It reads the input file and writes into MEMB1 until it
finds 11111111 in column 1. In the same way it reads and writes records into MEMB2 until
it finds 22222222 in column 1. The remaining records in the input dataset are copied into
MEMB3.
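For reference, a hedged sketch of the DD statements IEBGENER expects (the dataset names are illustrative); with SYSIN as DUMMY it simply copies SYSUT1 to SYSUT2:
//GENER    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MVSQUEST.INPUT.FILE,DISP=SHR
//SYSUT2   DD DSN=MVSQUEST.OUTPUT.FILE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(5,5),RLSE),DCB=*.SYSUT1
//SYSIN    DD DUMMY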
IEBGENER - SYSIN CARD FOR REFORMATTING DURING COPY
//SYSIN DD *
  GENERATE MAXFLDS=5,MAXLITS=4
  RECORD FIELD=(5,1,,1),FIELD=(20,21,,6),FIELD=(9,61,ZP,26),
         FIELD=(9,70,ZP,31),FIELD=(4,'TEST',,36)
/*

Input               Conversion   Output
Columns 1-5         none         copied into columns 1-5
Columns 21-40       none         copied into columns 6-25
Columns 61-69       ZP           packed value written in columns 26-30
Columns 70-78       ZP           packed value written in columns 31-35
Literal 'TEST'      none         written in columns 36-39
IEHLIST
It is used to list:
1. The entries in a catalog (SYSIN keyword LISTCTLG).
2. The directories of 1-10 PDSs (SYSIN keyword LISTPDS).
3. The entries in a VTOC (SYSIN keyword LISTVTOC).
Code SYSIN, SYSPRINT and one more DD that points to the volume queried in SYSIN.
The following job lists the VTOC of INTB01 in a formatted way:
//MYJOB    JOB CLASS=A,MSGCLASS=A,REGION=256K,MSGLEVEL=(1,1)
//LISTVTOC EXEC PGM=IEHLIST
//SYSPRINT DD SYSOUT=*
//VOLDD    DD UNIT=SYSDA,VOL=SER=INTB01,DISP=OLD
//SYSIN    DD *
  LISTVTOC FORMAT,VOL=3330=INTB01
/*
To list the contents of a PDS:
  LISTPDS DSNAME=(SYS1.LINKLIB),VOL=SER=INTB01
To list the catalog of a specific DASD volume:
  LISTCTLG VOL=3350=PUB000
IEHMOVE
It is used to move or copy datasets and volumes of data. The FROM clause in the SYSIN is
not needed for catalogued datasets. It is suggested to allocate the Supervisor Call
library in the step.

IEBCOMPR
It is used to compare two sequential datasets or two PDSs and report the differences.
IEBEDIT:
One typical interview question is how to run only selected steps: for example, how to
execute STEP4 and STEP9 of a 10-step JCL. The typical answer is to restart the job from
STEP4 and include an always-true condition (like COND=(0,LE) or COND=(4096,GT)) in steps
5, 6, 7, 8 and 10. If the interviewer says COND should not be used, the only way is
IEBEDIT.
//M665235C JOB (MVSQuest),'IEBEDIT TEST',
//            CLASS=B,MSGCLASS=X,NOTIFY=V665235,REGION=28M
//*
//SUBMIT   EXEC PGM=IEBEDIT
//SYSUT1   DD DSN=TEST.MUTHU.JCL(JCLINP),DISP=SHR
//SYSUT2   DD SYSOUT=(*,INTRDR)
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  EDIT START=M665235C,TYPE=INCLUDE,STEPNAME=(STEP0004,STEP0009)
//*
In the above JCL, JCLINP is the 10-step JCL and M665235C is the job name inside it. If
TYPE is EXCLUDE, the named steps are not copied/submitted.
DFSORT
If you do a global search of your JCL inventory, you will find that the most frequently
used program is SORT. There are two well-known SORT products on the market: DFSORT and
SYNCSORT. The basic commands of both products are the same.
ICETOOL provides a lot more than what SORT can offer and comes with the DFSORT product;
SYNCTOOL comes with the SYNCSORT product. PGM=SORT can point to either DFSORT or
SYNCSORT - it is actually an alias for the SORT product installed at your site.
DFSORT is an IBM product and needs the following datasets for its operation: SORTIN
(input dataset), SORTOUT (output dataset), SYSIN (control cards) and SYSOUT (message
dataset). The message dataset can be altered using the MSGDDN= parameter in SYSIN.
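A minimal sketch of a complete sort step (the dataset names are illustrative):
//SORTSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MVSQUEST.EMP.DATA,DISP=SHR
//SORTOUT  DD DSN=MVSQUEST.EMP.SORTED,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,5),RLSE)
//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A)
/*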
SORT card to copy all the records from SORTIN to SORTOUT
  SORT FIELDS=COPY
SORT card to skip the first 100 records and then copy 20 records
  OPTION COPY,SKIPREC=100,STOPAFT=20
SORT card to sort the input file
  SORT FIELDS=(STARTPOS,LENGTH,TYPE,ASC|DESC)
  TYPE = CH (character), BI (binary), ZD (zoned decimal), PD (packed decimal),
  FS (signed numbers)
  Ex: SORT FIELDS=(1,10,CH,A,15,2,CH,A)
sorts all the SORTIN records with columns 1-10 as the major key and columns 15-16 as the
minor key before writing to SORTOUT.
SORT card to select the records meeting a criterion
  INCLUDE COND=(STARTPOS,LENGTH,TYPE,RO,VALUE)
  RO (relational operator) can be EQ, NE, LT, GT, LE, GE.
Card to select the records with TRICHY in columns 4-9
  INCLUDE COND=(4,6,CH,EQ,C'TRICHY')
Card to select the records which have the same values in columns 2-3 and 5-6
  INCLUDE COND=(2,2,CH,EQ,5,2,CH)
SORT card to reject the records meeting a criterion
  OMIT COND=(STARTPOS,LENGTH,TYPE,RO,VALUE)
Card to reject the records with TRICHY in columns 4-9
  OMIT COND=(4,6,CH,EQ,C'TRICHY')
Card to reject the records which have the same values in columns 2-3 and 5-6
  OMIT COND=(2,2,CH,EQ,5,2,CH)
SORT card to change PD to ZD
If the input file has a PD field S9(5)V99 COMP-3 (4 bytes) that is to be reformatted as
PIC S9(5).9(2), use:
  OUTREC FIELDS=(1,4,PD,EDIT(STTTTT.TT),SIGNS=(,-,,))
SORT card to remove duplicates
  SORT FIELDS=(1,5,CH,A),EQUALS
  SUM FIELDS=NONE
SORTIN records are sorted on the key in columns 1-5, and if more than one record has the
same key, only one record is written to SORTOUT. If EQUALS is coded, the record written
is the FIRST one; otherwise it can be any of them.
SORT card to sum the values of same-key records
SORTIN records are sorted on the key in columns 1-5; if more than one record has the same
key, the values in columns 10-14 are summed and one record is written with the total.
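A card matching this description would look like the following (the key and summed columns are taken from the text above):
  SORT FIELDS=(1,5,CH,A)
  SUM FIELDS=(10,5,ZD)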
SORT card to add a sequence number to the output file
  OUTREC FIELDS=(1,20,SEQNUM,4,ZD)
A 4-digit zoned-decimal sequence number is appended to every record of the input file in
columns 21-24. This is useful for reading a file from bottom to top and for matching
logic in JCL (matching logic is explained later).
SORT card to restructure the input file before feeding it to the sort
  INREC FIELDS=(37,2,6,6,40,4,31,2)
The length of the record passed to the sort is now 14 bytes.
SORT card to create multiple files from a single input file (maximum 32 files)
  OUTFIL FILES=1,INCLUDE=(1,6,CH,EQ,C'MUMBAI')
  OUTFIL FILES=2,INCLUDE=(1,6,CH,EQ,C'TRICHY')
Code the output files as SORTOF1 and SORTOF2.
SORT card to restructure the sorted output
  OUTREC FIELDS=(1:1,20,
                 21:C'MUTHU',
                 26:10Z,
                 36:21,10)
Input columns 1-20 are written in output columns 1-20, the literal MUTHU in columns
21-25, binary zeros in columns 26-35, and input columns 21-30 in output columns 36-45.
MERGE FIELDS=(STARTPOS,LENGTH,TYPE,ASC|DESC,STARTPOS,)
128 such Keys can be given. Datasets to be merged are coded in SORTIN00 to
SORTIN99.
SORT card to extract all the PROCs executed in a JCL
  OPTION COPY
  INCLUDE FORMAT=SS,COND=(1,81,EQ,C'EXEC',AND,1,81,NE,C'PGM=')
ICETOOL
DD statements used by ICETOOL:
TOOLMSG - ICETOOL messages
DFSMSG - SORT messages
TOOLIN - ICETOOL control cards
xxxxCNTL - SORT control cards used by ICETOOL, where xxxx is coded in the USING clause of
the TOOLIN statement.
TOOLIN card to copy
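A minimal copy operator for TOOLIN (the DD names are illustrative) simply copies one DD to another:
//TOOLIN DD *
  COPY FROM(INDD) TO(OUTDD)
/*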
//DFSMSG   DD SYSOUT=*
//IN1      DD *
1234567890
3456789012
5678901234
//IN2      DD *
3456789012
7890123456
8901234567
//T1       DD DSN=&&T1,SPACE=(CYL,(5,5),RLSE),DISP=(,PASS)
//T2       DD DSN=&&T2,SPACE=(CYL,(5,5),RLSE),DISP=(,PASS)
//INT      DD DSN=*.T1,DISP=(OLD,PASS),VOL=REF=*.T1
//         DD DSN=*.T2,DISP=(OLD,PASS),VOL=REF=*.T2
//FILEA    DD SYSOUT=*
//FILEB    DD SYSOUT=*
//OUT      DD SYSOUT=*
//TOOLIN   DD *
  SORT FROM(IN1) USING(CTL1)
  SORT FROM(IN2) USING(CTL2)
  SORT FROM(INT) USING(CTL3)
//CTL1CNTL DD *
  SORT FIELDS=(1,10,CH,A)
  OUTFIL FNAMES=T1,OUTREC=(1,80,C'1')
//CTL2CNTL DD *
  SORT FIELDS=(1,10,CH,A)
  OUTFIL FNAMES=T2,OUTREC=(1,80,C'2')
//CTL3CNTL DD *
  SORT FIELDS=(1,10,CH,A)
  SUM FIELDS=(81,1,ZD)
  OUTFIL FNAMES=OUT,INCLUDE=(81,1,ZD,EQ,3),OUTREC=(1,80)
  OUTFIL FNAMES=FILEA,INCLUDE=(81,1,CH,EQ,C'1'),OUTREC=(1,80)
  OUTFIL FNAMES=FILEB,INCLUDE=(81,1,CH,EQ,C'2'),OUTREC=(1,80)
/*
Explanation:
CTL1 - Appends '1' in column 81 of every record of the first file.
CTL2 - Appends '2' in column 81 of every record of the second file.
CTL3 - Concatenates both files, sorts on the key and, if duplicates are found, sums
column 81. So any record that exists in both files carries 3 after summing. The records
with 1, 2 and 3 are then extracted into three files; while writing them, the 81st byte
added for this temporary purpose is removed.
1 - records present only in the first file
2 - records present only in the second file
3 - records present in both files
IEHPROGM
It is used to:
1. Catalog a dataset (CATLG DSNAME=A.B.C,VOL=SER=nnnn)
2. Uncatalog a dataset (UNCATLG DSNAME=A.B.C)
3. Rename a dataset (RENAME DSNAME=A.B.C,VOL=SER=nnnn,NEWNAME=D.E.F)
4. Create an index for a GDG (BLDG INDEX=gdg-name,LIMIT=n[,EMPTY][,DELETE])
1. GDG datasets are referred to in the JCL using the GDG base and a relative number, so
the same JCL can be used again and again without changing the dataset names - this is the
biggest advantage of GDG.
2. The GDG base has pointers to all of its generations. When you want to read all the
transactions done till today, you can easily do so by reading the GDG base, if it is
available; otherwise you would have to concatenate all the transaction files before
reading.
Creation of GDG
1. The GDG base is created using IDCAMS. The parameters given while creating the GDG are:
NAME - The base name of the GDG.
LIMIT - The maximum number of generations that can exist at any point of time. It is a
number and must be less than 256.
EMPTY/NOEMPTY - When the LIMIT is exceeded, EMPTY keeps ONLY the most recent generation;
NOEMPTY keeps the LIMIT number of newest generations.
SCRATCH/NOSCRATCH - SCRATCH uncatalogs and deletes the generations that are not kept;
NOSCRATCH only uncatalogs them - they are not physically deleted from the volume.
OWNER - Owner of the GDG.
FOR DAYS(n) / TO(date) - Expiry: it can be coded either as a number of days or up to a
particular date.
2. A model dataset is defined after or along with the creation of the base. Once the
model DCB is defined, we do not need to code the DCB parameter when allocating new
generations. The model DCB parameters can be overridden by coding new parameters while
creating a GDG generation; it is worth noting that two generations of a GDG can therefore
exist in two different formats.
A step that defines a GDG and allocates a model DSCB:
//GDG      EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//MODEL    DD DSNAME=MM01.PAYROLL.MASTER,DISP=(,KEEP),
//            UNIT=SYSDA,VOL=SER=MPS800,SPACE=(TRK,(0)),
//            DCB=(DSORG=PS,RECFM=FB,LRECL=400)
//SYSIN    DD *
  DEFINE GDG ( NAME(MM01.PAYROLL.MASTER) -
               LIMIT(5)                  -
               NOEMPTY                   -
               SCRATCH )
/*
How to Solve ABENDs?
There are two kinds of abends: USER and SYSTEM.
USER (Unnnn) - Prefixed with U. Application driven: the application program issues the
user abend by calling an installation-specific abend routine, based on its requirements.
IBM supplies the abend routine ILBOABN0:
    MOVE 999 TO ABEND-CODE
    CALL 'ILBOABN0' USING ABEND-CODE
This code will abend the program with U0999.
SYSTEM (Snnn) - Prefixed with S. System driven: when the system is not able to execute a
statement, it abends the step with the respective system abend code.
Consider the following program, which abends with a data exception (S0C7):

 01 WS-VARIABLES.
    05 WS-EMP-NAME   PIC X(10).
    05 WS-EMP-AGE    PIC 9(02).
    05 WS-EMP-CITY   PIC X(10).
    05 WS-EMP-SAL    PIC S9(08).
    05 WS-EMP-BONUS  PIC S9(08).
    05 WS-EMP-CTC    PIC S9(08).
*
 PROCEDURE DIVISION.
*
     MOVE 'MUTHU'  TO WS-EMP-NAME
     MOVE 29       TO WS-EMP-AGE
     MOVE 'TRICHY' TO WS-EMP-CITY
     MOVE 60000    TO WS-EMP-SAL
     COMPUTE WS-EMP-CTC = (WS-EMP-SAL * 12) + WS-EMP-BONUS
     DISPLAY 'SANSOFT COMPLETED'
     DISPLAY 'EMPLOYEE DETAIL:' WS-EMP-NAME ','
                                WS-EMP-AGE  ','
                                WS-EMP-CITY ','
                                WS-EMP-SAL  ','
                                WS-EMP-CTC
     STOP RUN.
The instruction at offset 36A failed, so look in the compilation listing for the
statement at offset 36A.
000021  MOVE
  000358  D207 2016 A0BD   MVC    22(8,2),189(10)       (BLW=0)+22     PGMLIT AT +185
000022  COMPUTE
  00035E  F247 D0F8 201E   PACK   248(5,13),30(8,2)     TS2=0          WS-EMP-BONUS
  000364  D20F D0E8 A08D   MVC    232(16,13),141(10)    TS1=0          PGMLIT AT +137
  00036A  FA54 D0F2 D0F8   AP     242(6,13),248(5,13)   TS1=10         TS2=0
  000370  940F D0F3        NI     243(13),X'0F'         TS1=11
  000374  F844 D0F3 D0F3   ZAP    243(5,13),243(5,13)   TS1=11         TS1=11
  00037A  F374 2026 D0F3   UNPK   38(8,2),243(5,13)     WS-EMP-CTC     TS1=11
000023  DISPLAY

The statements at lines 000021-000023 are:
MOVE 60000 TO WS-EMP-SAL
COMPUTE WS-EMP-CTC = (WS-EMP-SAL * 12) + WS-EMP-BONUS
DISPLAY 'SANSOFT COMPLETED'
So one of the fields referenced in this statement has junk in it. Just before the COMPUTE
we populated WS-EMP-SAL, so the problem is with WS-EMP-BONUS. If you go through the code,
you will find that the developer missed populating/initializing WS-EMP-BONUS, and that
caused the data exception.
If these fields come from a file, we cannot confirm this so easily. We would have to add
DISPLAYs for the two fields and rerun the program, or look for junk in those fields in
the source file using File-AID/InSync. Another approach is to look at the data division
map in the compilation listing.
The data division map in the compilation listing:

LineID  Data Name            Base       Hex-Disp   Asmblr Data Def   Data Type
2       PROGRAM-ID SANSOFT
8       1 WS-VARIABLES       BLW=00000  000        DS 0CL46          Group
9       2 WS-EMP-NAME        BLW=00000  000        DS 10C            Display
10      2 WS-EMP-AGE         BLW=00000  00A        DS 2C             Disp-Num
11      2 WS-EMP-CITY        BLW=00000  00C        DS 10C            Display
12      2 WS-EMP-SAL         BLW=00000  016        DS 8C             Disp-Num
13      2 WS-EMP-BONUS       BLW=00000  01E        DS 8C             Disp-Num
14      2 WS-EMP-CTC         BLW=00000  026        DS 8C             Disp-Num

Working-storage contents shown in the dump:
|MUTHU     29TRICHY    0006000...|
|................................|
The traceback in the dump identifies the failing program and offset:

Program Unit  PU Addr   PU Offset   Load Mod  Service  Status
CEEHDSP       057780F0  +00003C34   CEEPLPKA  UK00165  Call
SANSOFT       27900E10  +0000036A   SANSOFT            Exception

Entry     E Addr    E Offset
CEEHDSP   057780F0  +00003C34
SANSOFT   27900E10  +0000036A
To load the complete log of job PRODBKUP, generation 4941, into the dataset allocated to
the DD name LOADDD, use the following card:
/LOAD DDNAME=LOADDD ID=PRODBKUP GEN=4941
To get the run date, run time, return code and generation of all the prior runs of a job,
use the following card; the result will be stored in the dataset named REPORT:
/LIST ID=JOBNAME

COBOL
COBOL Coding Sheet
1-6   Sequence number. It should be in sequence but need not be consecutive. Usually
      columns 1-3 identify the page number and columns 4-6 the line number.
7     Continuation (-), comment (*), starting a new page (/) and debugging lines (D).
8-11  Column A (Area A). Division, section and paragraph headers and 01/77 level
      declarations must begin here.
12-72 Column B (Area B). All other declarations/statements must begin here.
73-80 Identification field. It is ignored by the compiler but visible in the source
      listing.
Language Structure
Character, Word, Clause, Statement, Sentence, Paragraph, Section, Division, Program
(from the smallest unit to the largest).
Divisions in COBOL
There are four divisions in a COBOL program; only the Identification Division is
mandatory.
1. Identification Division
2. Environment Division
3. Data Division
4. Procedure Division
Identification Division.
This is the first division, and the program is identified here. The paragraph PROGRAM-ID
followed by a user-defined name is mandatory. Though 30 characters can be entered for the
program ID, the compiler considers only the first EIGHT characters; the remaining
characters are ignored. All other paragraphs are optional and used for documentation.
IDENTIFICATION DIVISION.
PROGRAM-ID. program-name.
AUTHOR. comment-entry.
INSTALLATION. comment-entry.
DATE-WRITTEN. comment-entry.
DATE-COMPILED. comment-entry.
SECURITY. comment-entry.
SECURITY does not pertain to operating system security; it is information passed to the
user of the program about the security features of the program.
Environment Division.
This is the only machine-dependent division of a COBOL program. It supplies information
about the hardware or computer equipment to be used by the program. When a program is
moved from one computer to another, the only division that may need to be changed is the
ENVIRONMENT DIVISION.
Configuration Section.
It supplies information about the computer on which the program will be compiled
(SOURCE-COMPUTER) and executed (OBJECT-COMPUTER). It consists of three paragraphs:
SOURCE-COMPUTER, OBJECT-COMPUTER and SPECIAL-NAMES.
This is an OPTIONAL section from COBOL 85 onwards.
SOURCE-COMPUTER. IBM-4381 (computer and model number supplied by the manufacturer)
The WITH DEBUGGING MODE clause specifies that the debugging lines in the program
(statements coded with D in column 7) should also be compiled and included in the load
module.
OBJECT-COMPUTER. IBM-4381
Input-Output Section.
It contains information regarding the files to be used in the program and consists of two
paragraphs: FILE-CONTROL and I-O-CONTROL.
FILE-CONTROL. The files used in the program are identified in this paragraph.
I-O-CONTROL. It specifies when checkpoints are to be taken and which storage areas are
shared by different files.
Data Division.
Data Division is used to define the data (files, working-storage items and linkage items)
used by the program. It has three sections:
FILE SECTION
WORKING-STORAGE SECTION
LINKAGE SECTION
Special level numbers:
66 level - used with RENAMES to regroup elementary items.
77 level - independent elementary items that are not subdivided.
88 level - condition names.
DBCS (Double Byte Character Set) is used in applications that support large character
sets; 16 bits are used for one character, e.g. Japanese-language applications.
VALUE Clause
It is used for initializing data items in the working storage section. Value of item
must not exceed picture size. It cannot be specified for the items whose size is
variable.
Syntax:
VALUE IS literal.
VALUES ARE literal-1 THRU | THROUGH literal-2
VALUES ARE literal-1, literal-2
Literal can be numeric without quotes OR non-numeric within quotes OR figurative
constant.
SIGN Clause
Syntax: SIGN IS {LEADING | TRAILING} [SEPARATE CHARACTER]
It is applicable when the picture string contains S. The default is TRAILING with no
SEPARATE CHARACTER, so S doesn't take any space - the sign is stored along with the last
digit.
How -125 and +125 are stored:
Value   TRAILING (default)   LEADING   LEADING SEPARATE
-125    12N                  J25       -125
+125    12E                  A25       +125
Refreshing Basics
Nibble.
04 Bits is one nibble. In packed decimal, each nibble stores one digit.
Byte.
08 Bits is one byte. By default, every character is stored in one byte.
Half word.
16 Bits or 2 bytes is one half word. (MVS)
Full word.
32 Bits or 4 bytes is one full word. (MVS)
Double word. 64 Bits or 8 bytes is one double word. (MVS)
Usage Clause
DISPLAY
Default. Number of bytes required equals to the size of the data item.
COMP
Binary representation of data item.
PIC clause can contain S and 9 only.
S9(01) S9(04)
Half word.
S9(05) S9(09)
Full word.
S9(10) - S9(18)
Double word.
Most significant bit is ON if the number is negative.
COMP-1:
Single-word floating-point item. A PIC clause should not be specified. The sign is
contained in the first bit of the leftmost byte, the exponent in the remaining 7 bits of
that byte, and the last 3 bytes contain the mantissa.
COMP-2:
Double-word floating-point item. A PIC clause should not be specified. 7 bytes are used
for the mantissa, hence it is used for high-precision calculation.
COMP-3:
Packed-decimal representation. One digit takes half a byte. A PIC 9(N) COMP-3 data item
requires (N+1)/2 bytes. The sign is stored in the rightmost half-byte regardless of
whether S is specified in the PICTURE: C - signed positive, D - signed negative,
F - unsigned (positive).
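For instance, a field declared as (the field name is illustrative)
  05 WS-AMOUNT PIC S9(7) COMP-3.
occupies (7+1)/2 = 4 bytes; the value +1234567 is stored as X'1234567C' and -1234567 as X'1234567D'.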
INDEX
It is used to preserve the index value of an array. It takes 4 bytes. A PIC clause should
not be specified. When the clause is specified for a group item, it applies to all
elementary items contained in it; however, the group itself is not an index data item.
POINTER
4 Byte elementary item that can be used to accomplish limited base
addressing. It can be used only in SET statement, Relation condition,
USING phrase of a CALL statement, an ENTRY statement or the
procedure division statement. A value clause for a pointer data item
can contain only NULL or NULLS.
SET identifier-1 TO address of identifier-2
SET ADDRESS OF identifier-2 TO identifier-1.
Identifier-1 is POINTER data item and identifier-2 is linkage section
item.
SYNC (SYNCHRONIZED)
01 MY-DATA.
   05 DATA-ONE   PIC X(6).
   05 DATA-TWO   PIC 9(6)     COMP SYNC.
   05 DATA-THREE PIC S9(4)V99 COMP-3.
The starting address of a full word should end with 0, 4, 8 or C and that of a half word
with 0, 2, 4, 6, 8, A, C or E. If DATA-ONE starts at 0, it occupies bytes 0-5 in memory.
DATA-TWO, a SYNC item of full-word size, cannot start at 6; by the SYNC rule it starts at
position 8, and bytes 6 and 7 are unused (slack bytes). So MY-DATA occupies 16 bytes.
REDEFINES
The REDEFINES clause allows you to use different data description entries to
describe the same computer storage area. Redefining declaration should immediately
follow the redefined item and should be done at the same level. Multiple redefinitions
are possible. Size of redefined and redefining need not be the same. It cannot be
done at 66 and 88 levels.
Example:
01 WS-DATE                 PIC 9(06).
01 WS-REDEF-DATE REDEFINES WS-DATE.
   05 WS-YEAR              PIC 9(02).
   05 WS-MON               PIC 9(02).
   05 WS-DAY               PIC 9(02).
RENAMES
It is used for regrouping elementary data items in a record. It should be declared at the
66 level. It need not immediately follow the data item being renamed, but all RENAMES
entries associated with one logical record must immediately follow that record's last
data description entry. RENAMES cannot be done for a 01, 77, 88 or another 66 entry, and
it cannot be done for occurrences of an array.
01 WS-RESPONSE.
   05 WS-CHAR143   PIC X(03).
   05 WS-CHAR4     PIC X(04).
66 ADD-RESPONSE RENAMES WS-CHAR143.
CONDITION name
It is identified with the special level 88. A condition name specifies a value that a
field can contain and is used as an abbreviation in condition checking.
01 SEX PIC X.
   88 MALE   VALUE '1'.
   88 FEMALE VALUE '2' '3'.
IF SEX = '1' can also be written as IF MALE in the Procedure Division.
SET FEMALE TO TRUE moves the value '2' to SEX. If multiple values are coded on the VALUE
clause, the first value is moved when the condition name is SET to TRUE.
JUSTIFIED RIGHT
This clause can be specified with alphanumeric and alphabetic items for right
justification. It cannot be used with 66 and 88 level items.
OCCURS Clause
The OCCURS clause is used to allocate physically contiguous memory locations to store
table values and access them with a subscript or index. A detailed explanation is given
in the Table Handling section.
LINKAGE SECTION
It is used to access data that is external to the program. JCL can send a maximum of 100
characters to a program through PARM. The linkage section MUST be coded with a half-word
binary length field prior to the actual field; if the length field is not coded, the
first two bytes of the field coded in the linkage section are filled with the length, and
the last two bytes of the actual data are truncated.
01 LK-DATA.
   05 LK-LENGTH     PIC S9(04) COMP.
   05 LK-VARIABLE   PIC X(08).
LINKAGE section of sub-programs will be explained later.
Procedure Division.
This is the last division, and the business logic is coded here. It has user-defined
sections and paragraphs. Section names should be unique within the program and paragraph
names should be unique within a section.
Procedure division statements are broadly classified into the following categories:
Imperative - Directs the program to take a specific action. Ex: MOVE, ADD, EXIT, GO TO
Conditional - Decides the truth or falsity of a condition and, based on it, executes
different paths. Ex: IF, EVALUATE
Compiler directive - Directs the compiler to take a specific action during compilation.
Ex: COPY, SKIP, EJECT
Explicit scope terminator - Terminates the scope of conditional and imperative
statements. Ex: END-ADD, END-IF, END-EVALUATE
Implicit scope terminator - The period at the end of a sentence terminates the scope of
all previous statements not yet terminated.
MOVE Statement
It is used to transfer data between internal storage areas defined in either file section
or working storage section.
Syntax:
MOVE identifier1/literal1/figurative-constant TO identifier2 (identifier3)
Multiple move statements can be separated using comma, semicolons, blanks or the
keyword THEN.
Numeric move rules:
A numeric or numeric-edited item receives data in such a way that the
decimal point is aligned first and then filling of the receiving field takes place.
Unfilled positions are filled with zero. Zero suppression or insertion of editing
symbols takes places according to the rules of editing pictures.
If the receiving field width is smaller than sending field then excess digits, to
the left and/or to the right of the decimal point are truncated.
Alphanumeric Move Rules:
An alphabetic, alphanumeric or alphanumeric-edited field receives the data from left to
right. Any unfilled positions of the receiving field are filled with spaces. When the
receiving field is shorter than the sending field, the receiving field accepts characters
from left to right until it is filled; the unaccommodated characters on the right of the
sending field are truncated.
When an alphanumeric field is moved to a numeric or numeric-edited field, the item is
moved as if it were an unsigned numeric integer.
CORRESPONDING can be used to transfer data between items of the same
names belonging to different group-items by specifying the names of group-items to
which they belong.
Picture of A      Value of A    Picture of B      Value of B after MOVE
PIC 99V99         12.35         PIC 999V99        012.35
PIC 99V99         12.35         PIC 9999V9999     0012.3500
PIC 99V999        12.345        PIC 9V99          2.34
PIC 9(05)V9(03)   54321.543     PIC 9(03)V9(03)   321.543
PIC 9(04)V9(02)   23.24         PIC ZZZ99.9       23.2
PIC 99V99         00.34         PIC $$$.99        $.34
PIC X(04)         MUSA          XBXBXB            M U S
ARITHMETIC VERBS
All the possible arithmetic operations in COBOL using ADD, SUBTRACT, MULTIPLY and DIVIDE
are given below (the operand on the left of each '=' receives the result; the other
operands are unchanged):

ADD A TO B                              B = A + B
ADD A B C TO D                          D = A + B + C + D
ADD A B C GIVING D                      D = A + B + C
ADD A TO B C                            B = A + B, C = A + C
SUBTRACT A FROM B                       B = B - A
SUBTRACT A B FROM C                     C = C - (A + B)
SUBTRACT A B FROM C GIVING D            D = C - (A + B)
MULTIPLY A BY B                         B = A * B
MULTIPLY A BY B GIVING C                C = A * B
DIVIDE A INTO B                         B = B / A
DIVIDE A INTO B GIVING C                C = B / A
DIVIDE A BY B GIVING C                  C = A / B
DIVIDE A INTO B GIVING C REMAINDER D    C = integer part of B / A, D = integer remainder
ROUNDED option
With the ROUNDED option, the computer always rounds the result to the PICTURE clause
specification of the receiving field. It is coded after the field to be rounded; in a
DIVIDE, ROUNDED is coded before the REMAINDER keyword.
ADD A B GIVING C ROUNDED.
DIVIDE ... ROUNDED REMAINDER ...
Caution: Don't use it for intermediate computations.
ON SIZE ERROR
If A=20 (PIC 9(02)) and B=90 (PIC 9(02)), ADD A TO B will leave 10 in B where the expected
value is 110. The ON SIZE ERROR clause is coded to trap such size errors in arithmetic
operations. If it is coded on an arithmetic statement, any operation that ends with a
size error is not carried out, and the statement following ON SIZE ERROR is executed
instead.
ADD A TO B ON SIZE ERROR DISPLAY 'ERROR!'.
COMPUTE
The COMPUTE statement assigns the value of an arithmetic expression (on the right-hand
side) to a data item (on the left-hand side).
The order of evaluation (left to right within the same level) is:
1. Parentheses ( )
2. Exponentiation (**)
3. Multiplication and division (* and /)
4. Addition and subtraction (+ and -)
Caution: When ROUNDED is coded with COMPUTE, some compilers round every intermediate
arithmetic operation, so the final result may not be precise.
All arithmetic verbs have their own explicit scope terminators (END-ADD, END-SUBTRACT,
END-MULTIPLY, END-DIVIDE, END-COMPUTE); it is suggested to use them.
CORRESPONDING is available for ADD and SUBTRACT only.
INITIALIZE
VALUE clause is used to initialize the data items in the working storage
section whereas INITIALIZE is used to initialize the data items in the procedure
division.
INITIALIZE sets the alphabetic, alphanumeric and alphanumeric-edited items
to SPACES and numeric and numeric-edited items to ZERO. This can be overridden
by REPLACING option of INITIALIZE. FILLER, OCCURS DEPENDING ON items are not
affected.
Syntax:
INITIALIZE identifier-1
    REPLACING {ALPHABETIC/ALPHANUMERIC/ALPHANUMERIC-EDITED/
               NUMERIC/NUMERIC-EDITED}
    DATA BY {identifier-2/literal-2}
Example:
Page:58
01 A.
   05 A1    PIC 9(5).
   05 A2    PIC X(4).
INITIALIZE A REPLACING NUMERIC DATA BY 50 will initialize only A1 by 50.
ACCEPT
ACCEPT can transfer data from an input device or from system information contained
in the reserved data items like DATE, TIME and DAY.
ACCEPT WS-VAR1 [FROM DATE/TIME/DAY/other system variables].
If the FROM clause is not coded, then the data is read from the terminal. At execution
time, a batch program will ABEND if there is no in-stream data in the JCL and
there is no FROM clause in the ACCEPT statement.
DATE returns the six-digit current date in YYMMDD format.
DAY returns the five-digit current date in YYDDD format.
TIME returns the eight-digit run time in HHMMSSTT format.
DAY-OF-WEEK returns a single digit whose value can be 1-7 (Monday-Sunday
respectively).
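A minimal sketch of the system-variable forms (receiving field names are assumed for illustration):

    01  WS-CURR-DATE  PIC 9(06).
    01  WS-CURR-TIME  PIC 9(08).
    01  WS-WEEK-DAY   PIC 9(01).
    ...
        ACCEPT WS-CURR-DATE FROM DATE
   *    WS-CURR-DATE now holds YYMMDD
        ACCEPT WS-CURR-TIME FROM TIME
   *    WS-CURR-TIME now holds HHMMSSTT
        ACCEPT WS-WEEK-DAY  FROM DAY-OF-WEEK
   *    WS-WEEK-DAY holds 1-7 (Monday-Sunday)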
DISPLAY
It is used to display data. By default display messages are routed to SYSOUT.
Syntax:
DISPLAY identifier1| literal1 (UPON mnemonic name)
STOP RUN, EXIT PROGRAM & GOBACK
STOP RUN is the last executable statement of the main program. It returns
control back to the OS.
EXIT PROGRAM is the last executable statement of a sub-program. It returns
control back to the main program.
GOBACK can be coded in the main program as well as in a sub-program as the last
statement. It simply gives control back to wherever it received control from.
ALTER statement
The ALTER statement is used to modify the targets of GO TO statements
written elsewhere in the procedure division.
ALTER PROCEDURE-NAME-1 TO [PROCEED TO] PROCEDURE-NAME-2
      [PROCEDURE-NAME-3 TO [PROCEED TO] PROCEDURE-NAME-4] ...
Each of PROCEDURE-NAME-1, PROCEDURE-NAME-3, ... is the name of a
paragraph that contains only one sentence. This sentence must consist of a single
GO TO statement without the DEPENDING clause.
During execution, the targets of the GO TO statements in PROCEDURE-NAME-1,
PROCEDURE-NAME-3, ... are replaced by PROCEDURE-NAME-2, PROCEDURE-NAME-4, ... respectively.
Collating Sequence
There are two famous collating sequences available in computers. IBM and
IBM-compatible machines use the EBCDIC collating sequence, whereas most micro and
mini computer systems use the ASCII collating sequence. The result of arithmetic and
alphabetic comparison would be the same in both collating sequences, whereas the same
is not true for alphanumeric comparison.
EBCDIC (Ascending Order): Special Characters, a-z, A-Z, 0-9
ASCII (Ascending Order): Special Characters, 0-9, A-Z, a-z
Page:59
Page:60
IF identifier IS POSITIVE/NEGATIVE/ZERO
The CLASS test is used to check the content of a data item against a pre-defined range of
values. It can be done as follows:
IF identifier IS NUMERIC/ALPHABETIC/ALPHABETIC-UPPER/ALPHABETIC-LOWER
We can define our own classes in the SPECIAL-NAMES paragraph. If we have defined a
class DIGIT in our SPECIAL-NAMES paragraph, it can be used in the following way:
IF identifier IS DIGIT
Negated conditions
Any simple, relational, class or sign test can be negated using NOT.
But it is not always true that NOT NEGATIVE is equal to POSITIVE (example: ZERO).
EVALUATE
With COBOL85, we use the EVALUATE verb to implement the case structure of
other languages. Multiple IF statements can be efficiently and effectively replaced
with an EVALUATE statement. After the execution of one of the WHEN clauses,
control automatically comes to the statement following END-EVALUATE. Any complex
condition can be given in the WHEN clause. A break statement is not needed, as it is
in other languages.
General Syntax
EVALUATE subject-1 (ALSO subject2..)
WHEN object-1 (ALSO object2..)
WHEN object-3 (ALSO object4..)
WHEN OTHER imperative statement
END-EVALUATE
1. The number of subjects in the EVALUATE clause should be equal to the number of
objects in every WHEN clause.
2. A subject can be a variable, an expression or the keyword TRUE/FALSE, and
correspondingly objects can be values, TRUE/FALSE or any condition.
3. If none of the WHEN conditions is satisfied, then the WHEN OTHER path will be
executed.
Sample
EVALUATE SQLCODE ALSO TRUE
    WHEN 100 ALSO A=B
    WHEN -305 ALSO (A/C=4)
         DISPLAY 'ALLOWED SQLCODE..PROCEEDING..'
    WHEN OTHER imperative statement
END-EVALUATE
In the above example, the DISPLAY is executed when either of the first two WHEN
clauses is true.
Page:61
PERFORM STATEMENTS
PERFORM will be useful when you want to execute a set of statements in
multiple places of the program. Write all the statements in one paragraph and invoke
it using PERFORM wherever needed. Once the paragraph is executed, control
comes back to the next statement following the PERFORM.
1. SIMPLE PERFORM.
     PERFORM PARA-1.
     DISPLAY 'PARA-1 executed'.
     STOP RUN.
PARA-1.
     Statement1
     Statement2.
It executes all the instructions coded in PARA-1 and then transfers control
to the next instruction in sequence.
2. INLINE PERFORM.
When a set of statements is used in only one place, we can group them
within a PERFORM ... END-PERFORM structure. This is called an INLINE PERFORM.
It is equal to the DO..END structure of other languages.
     PERFORM
         ADD A TO B
         MULTIPLY B BY C
         DISPLAY 'VALUE OF A+B*C ' C
     END-PERFORM
3. PERFORM PARA-1 THRU PARA-N.
All the paragraphs between PARA-1 and PARA-N are executed once.
4. PERFORM PARA-1 THRU PARA-N UNTIL condition(s).
The identifiers used in the UNTIL condition(s) must be altered within the
paragraph(s) being performed; otherwise the paragraphs will be performed
indefinitely. If the condition in the UNTIL clause is met at the first time of execution,
then the named paragraph(s) will not be executed at all.
5. PERFORM PARA-1 THRU PARA-N N TIMES.
N can be a numeric working-storage data item or a hard-coded numeric literal.
6. PERFORM PARA-1 THRU PARA-N VARYING identifier1
FROM identifier 2 BY identifier3 UNTIL condition(s)
Initialize identifier1 with identifier2 and test the condition(s). If the condition
is false execute the statements in PARA-1 thru PARA-N and increment identifier1 BY
Page:62
identifier3 and check the condition(s) again. If the condition is again false, repeat
this process till the condition is satisfied.
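For example, the following sketch (paragraph and field names are assumed for illustration) executes 100-BUILD-PARA ten times, with WS-I taking the values 1 through 10:

    01  WS-I  PIC 9(02).
    ...
        PERFORM 100-BUILD-PARA
            VARYING WS-I FROM 1 BY 1
            UNTIL WS-I > 10
   *    When the PERFORM completes, WS-I contains 11.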
7. PERFORM PARA-1 WITH TEST BEFORE/AFTER UNTIL condition(s).
With TEST BEFORE, the condition is checked first and, if it is found false, PARA-1 is
executed; this is the default. (Functions like DO-WHILE)
With TEST AFTER, PARA-1 is executed once and then the condition is checked.
(Functions like DO-UNTIL)
Refer Table session for eighth type of PERFORM.
EXIT statement.
EXIT is a COBOL reserved word that performs NOTHING. It is used as a single
statement in a paragraph to indicate the end of the paragraph(s) being executed.
EXIT must be the only statement in a paragraph in COBOL74, whereas it can be used
with other statements in COBOL85.
GO TO Usage:
In structured top-down programming, GO TO is not preferable. It makes a
permanent transfer of control to another paragraph, and the chances of logic errors are
much greater with GO TO than with PERFORM. The readability of the program is also
badly affected.
But GO TO can still be used within the paragraphs being performed, i.e. when
using the THRU option of the PERFORM statement, branches (GO TO statements) are
permitted as long as they are within the range of the named paragraphs.
PERFORM 100-STEP-1 THRU 300-STEP-3
..
100-STEP-1.
    ADD A TO B GIVING C.
    IF D = ZERO DISPLAY 'MULTIPLICATION NOT DONE'
       GO TO 300-STEP-3
    END-IF.
200-STEP-2.
    MULTIPLY C BY D.
300-STEP-3.
    DISPLAY 'VALUE OF C: ' C.
Here GO TO is used within the range of the PERFORM. This kind of controlled GO TO is fine
with structured programming also!
Page:63
TABLES
An OCCURS clause is used to indicate the repeated occurrence of items of
the same format in a structure. The OCCURS clause is not valid for 01, 77 and 88 levels.
It can be defined on an elementary or group item. Initialization of large table
occurrences with specific values is usually done using PERFORM loops in the procedure
division. Simple tables can be initialized in the following way.
01 WEEK-ARRAY VALUE 'MONTUEWEDTHUFRISATSUN'.
   05 WS-WEEK-DAYS OCCURS 7 TIMES PIC X(03).
A dynamic array is an array whose size is decided during runtime, just before the
access of the first element of the array.
01 WS-MONTH-DAY-CAL.
   05 WS-DAYS OCCURS 1 TO 31 TIMES DEPENDING ON WS-OCCURRENCE.
IF MONTH = 'FEB' MOVE 28 TO WS-OCCURRENCE.
Array Items can be accessed using INDEX or subscript and the difference
between them are listed in the table. Relative subscripts and relative indexes are
supported only in COBOL85. Literals used in relative subscripting/indexing must be
an unsigned integer.
ADD WS-SAL(SUB) WS-SAL(SUB + 1) TO WS-SAL(SUB + 2).
Sl #  Subscript vs. Index
1. Subscript: Working storage item.  Index: Internal item; no need to declare it.
2. Subscript: It means occurrence.  Index: It means displacement.
3. Subscript: The occurrence is, in turn, translated to a displacement to access elements, and so it is slower than INDEX access.  Index: Faster and efficient.
4. Subscript: It can be used in any arithmetic operation or for display.  Index: It cannot be used for arithmetic operations or for display purposes.
5. Subscript: Subscripts can be modified by any arithmetic statement.  Index: An INDEX can only be modified with SET, SEARCH and PERFORM statements.
Page:64
SEARCH
When the requirement is to randomly access sequential information, the only
possible way is to load the information in an array and look up in the array for
the information requested.
This table look up can be done in two ways:
1. Sequential Search (SEARCH)
2. Binary Search (SEARCH ALL)
To use SEARCH/SEARCH ALL, table should have an index. To use SEARCH ALL the
table should be in a sorted order.
Page:65
Sequential SEARCH (SEARCH)
- The table should have an INDEX.
- The table need not be in sorted order.
- Syntax:
      SET indexname-1 TO 1.
      SEARCH identifier-1 AT END
          DISPLAY 'NO MATCH'
          WHEN condition-1
              Statements / NEXT SENTENCE
          WHEN condition-2
              Statements / NEXT SENTENCE
      END-SEARCH
- Identifier-1 should be the OCCURS item and not the 01 item.

Binary SEARCH (SEARCH ALL)
- The table should have an INDEX.
- The table should be in sorted order of the searching argument. There should be an
  ASCENDING/DESCENDING KEY clause.
- Only one WHEN condition can be coded. Only '=' is possible. Only AND is possible
  in compound conditions.
- The index need not be set to 1. The logic is: compare the item to be searched with
  the item at the middle of the table; if it matches, fine, else repeat the process with
  the left or right half depending on where the item lies. For this logic to work, the
  array should be sorted on the item being searched.
- Preferred in the following cases: the size of the table is large, all the elements in
  the table will be frequently accessed, and the table does not contain any duplicates
  and contains a unique key.
- Syntax:
      SEARCH ALL identifier-1 AT END
          DISPLAY 'NO MATCH'
          WHEN condition-1
              Statements / NEXT SENTENCE
      END-SEARCH
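A minimal SEARCH ALL sketch, assuming a table that has been loaded in ascending order of its key field (all the names here are illustrative):

    01  WS-PRODUCT-TABLE.
        05  WS-PRODUCT OCCURS 100 TIMES
                       ASCENDING KEY IS WS-PROD-CODE
                       INDEXED BY PROD-IDX.
            10  WS-PROD-CODE  PIC X(05).
            10  WS-PROD-NAME  PIC X(20).
    ...
        SEARCH ALL WS-PRODUCT
            AT END DISPLAY 'NO MATCH'
            WHEN WS-PROD-CODE (PROD-IDX) = 'P1234'
                 DISPLAY 'FOUND: ' WS-PROD-NAME (PROD-IDX)
        END-SEARCH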
Page:66
IDENTIFICATION DIVISION.
PROGRAM-ID. PGMB
Page:67
SORT-MODE-SIZE (SMS=nnnnn)
SORT-RETURN(return-code of sort) and
SORT-CONTROL (Names the file of control card default is IGZSRTCD)
Page:68
Page:69
STRING MANIPULATION
A string refers to a sequence of characters. String manipulation operations
include finding a particular character/sub-string in a string, replacing particular
character/sub-string in a string, concatenating strings and segmenting strings.
All these functions are handled by three verbs INSPECT, STRING and UNSTRING in
COBOL. EXAMINE is the obsolete version of INSPECT supported in COBOL74.
INSPECT- FOR COUNTING
It is used to tally the occurrence of a single character or groups of characters in a
data field.
INSPECT identifier-1 TALLYING identifier-2 FOR
        ALL|LEADING literal-1|identifier-3
        [BEFORE|AFTER INITIAL identifier-4|literal-2]
The BEFORE|AFTER INITIAL phrase is optional.
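A minimal sketch of INSPECT ... TALLYING (field names and values are assumed). Note that TALLYING adds to the counter, so it should be initialized first:

    01  WS-TEXT   PIC X(15) VALUE 'MISSISSIPPI MUD'.
    01  WS-COUNT  PIC 9(02) VALUE ZERO.
    ...
        INSPECT WS-TEXT TALLYING WS-COUNT FOR ALL 'SS'
   *    WS-COUNT now contains 02 (two occurrences of 'SS').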
Page:70
STRING
STRING command is used to concatenate one or more strings.
Syntax:
STRING identifier-1/literal-1, identifier-2/literal-2
       DELIMITED BY (identifier-3/literal-3/SIZE)
       INTO identifier-4
END-STRING.
01 VAR1 PIC X(10) VALUE 'MUTHU'.
01 VAR2 PIC X(10) VALUE 'SARA'.
01 VAR3 PIC X(20).
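Using the fields above, a minimal STRING sketch could look like this (the result shown assumes the VALUE clauses given):

        STRING VAR1 DELIMITED BY SPACE
               VAR2 DELIMITED BY SPACE
               INTO VAR3
        END-STRING
   *    The first 9 bytes of VAR3 now contain 'MUTHUSARA';
   *    the remaining bytes of VAR3 are left unchanged.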
Page:71
Page:72
Pass By Reference
- CALL 'sub1' USING BY REFERENCE WS-VAR1
- It is the default in COBOL; BY REFERENCE is not needed.
- The address of WS-VAR1 is passed.
- The sub-program modifications to the passed element are reflected back in the
  calling program.

Pass By Content
- CALL 'sub1' USING BY CONTENT WS-VAR1
  (the BY CONTENT keyword is mandatory to pass an element by value)
- The value of WS-VAR1 is passed (a copy).
- The sub-program modifications on the passed element are not reflected back to the
  calling program.
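For completeness, a minimal sketch of the called program's side (the program name SUB1 and the field names are assumed): the parameter is described in the LINKAGE SECTION and named on the PROCEDURE DIVISION USING phrase.

    IDENTIFICATION DIVISION.
    PROGRAM-ID. SUB1.
    DATA DIVISION.
    LINKAGE SECTION.
    01  LS-VAR1   PIC 9(04).
    PROCEDURE DIVISION USING LS-VAR1.
        ADD 1 TO LS-VAR1
        EXIT PROGRAM.

When the caller passes WS-VAR1 BY REFERENCE, the ADD is reflected in WS-VAR1; when it is passed BY CONTENT, only the copy is changed.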
Page:73
STATIC Call
- Identified by a call literal. Ex: CALL 'PGM1'.

DYNAMIC Call
- Identified by a call variable; the variable should be populated at run time:
      01 WS-PGM PIC X(08).
      MOVE 'PGM1' TO WS-PGM
      CALL WS-PGM
- If you want to convert literal calls into DYNAMIC calls, the program should be
  compiled with the DYNAM option. By default, call variables and any unresolved
  calls are considered dynamic.
- If the subprogram undergoes a change, recompilation of the subprogram is enough.
- Sub-modules are picked up at run time from the load library.
- The size of the load module will be less.
- Slower compared to a static call.
- More flexible.
INTRINSIC FUNCTIONS:
LENGTH            Returns the length (number of character positions) of the argument.
                  Used for finding the length of a group item that spans multiple levels.
MAX               Returns the content of the argument that contains the maximum value.
MIN               Returns the content of the argument that contains the minimum value.
NUMVAL            Returns the numeric value represented by the alphanumeric character
                  string specified in the argument.
NUMVAL-C          Same as NUMVAL, but any currency sign and comma separators in the
                  argument are ignored during conversion.
CURRENT-DATE      Returns the 21-character current date and time:
                  YYYYMMDDHHMMSSnn followed by the offset from GMT.
INTEGER-OF-DATE   Returns the integer (internal day number) equivalent of a standard
                  date (YYYYMMDD).
INTEGER-OF-DAY    Returns the integer (internal day number) equivalent of a Julian
                  date (YYYYDDD).
DATE-OF-INTEGER   Returns the standard date (YYYYMMDD) equivalent of an integer date.
DAY-OF-INTEGER    Returns the Julian date (YYYYDDD) equivalent of an integer date.
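A minimal sketch of two of these functions (the field names and the input value are assumed; intrinsic functions require the FUNCTION keyword):

    01  WS-AMT-TEXT  PIC X(09) VALUE '1,234.50'.
    01  WS-AMT-NUM   PIC 9(05)V99.
    01  WS-STAMP     PIC X(21).
    ...
        COMPUTE WS-AMT-NUM = FUNCTION NUMVAL-C (WS-AMT-TEXT)
   *    WS-AMT-NUM now holds 01234.50
        MOVE FUNCTION CURRENT-DATE TO WS-STAMP
   *    WS-STAMP holds the current date and time in the format shown above.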
Page:74
Page:75
FILE HANDLING
A data file is a collection of relevant records and a record is a collection of
relevant fields. File handling in a COBOL program involves five steps.
Steps in file handling
1. Allocation: The files used in the program should be declared in the FILE-CONTROL
paragraph of the environment division. The mapping with the JCL DDNAME is done
here. The file is allocated to your program by this statement.
2. Definition: The layout of the file and its attributes are defined in the FILE
SECTION of the DATA DIVISION.
3. Open: The dataset is connected/readied to your program using the OPEN
statement. The mode of OPEN decides the operations allowed and the initial pointer in
the dataset. For example, EXTEND mode allows only write access and the pointer is
kept at the end of the file, to append.
4. Process: Process the file as per the requirement, using the I-O statements
provided by COBOL (READ, WRITE, REWRITE and DELETE).
5. Close: After the processing, close the file to disconnect it from the program.
Allocation of file - SELECT Statement
(ENVIRONMENT-> INPUT-OUTPUT-> FILE-CONTROL)
SELECT [OPTIONAL] FILENAME ASSIGN TO DDNAME                    => ALL files
    ORGANIZATION IS SEQUENTIAL/INDEXED/RELATIVE                => ALL files
    ACCESS IS SEQUENTIAL/RANDOM/DYNAMIC                        => ALL files
    RECORD KEY IS FILE-KEY1                                    => KSDS
    RELATIVE KEY IS WS-RRN                                     => RRDS
    ALTERNATE RECORD KEY IS FILE-KEY2 WITH DUPLICATES          => KSDS with AIX
    ALTERNATE RECORD KEY IS FILE-KEY3 WITHOUT DUPLICATES       => KSDS with AIX
    FILE STATUS IS WS-FILE-STAT1                               => ALL files
        [, WS-FILE-STAT2]                                      => VSAM files
Page:76
//DDNAME DD DSN=BPMAIN.EMPLOYEE.DATA,DISP=SHR
SELECT Statement-ORGANIZATION
It can be SEQUENTIAL (PS or VSAM ESDS), INDEXED (VSAM KSDS),
RELATIVE (VSAM RRDS). Default is Sequential.
SELECT Statement-ACCESS MODE
SEQUENTIAL.
It is the default access mode and it is used to access the records ONLY in
sequential order. To read the 100th record, the first 99 records need to be read and skipped.
RANDOM.
Records can be randomly accessed in the program using the primary/alternate key of
an indexed file organization, or the relative record number of a relative organization.
The 100th record can be read directly after getting the address of the record from the
INDEX part for INDEXED files. The 100th record can be read directly for RELATIVE
files even without any index.
DYNAMIC.
It is mixed access mode where the file can be accessed in random as well as
sequential mode in the program.
Example: Reading the details of all the employees between 1000-2000. First
randomly access 1000th employee record, then read sequentially till 2000th employee
record. START and READ NEXT commands are used for this purpose in the procedure
division.
SELECT Statement-RECORD KEY IS
It is primary key of VSAM KSDS file. It should be unique and part of indexed
record structure.
SELECT Statement-ALTERNATE RECORD KEY IS
This phrase is used for KSDS files defined with AIX. Add the clause WITH
DUPLICATES if the AIX is defined with duplicates.
Referring to VSAM basics, every alternate index record has an associated
PATH and the path should be allocated in the JCL that invokes this program.
The DDNAME of the path should be DDNAME of the base cluster suffixed with
1 for the first alternate record clause, suffixed with n for nth ALTERNATE RECORD
KEY clause in SELECT clause.
SELECT Statement-FILE STATUS IS WS-FILE-STAT1,WS-FILE-STAT2
WS-FILE-STAT1 should be defined as PIC X(02) in working storage section.
After every file operation, the file status should be checked for allowable values.
WS-FILE-STAT2 can be coded for VSAM files to get the VSAM return code (2
bytes), VSAM function-code (1 byte) and VSAM feedback code (3 bytes).
This is a 6-byte field in working storage.
RESERVE Clause.
RESERVE clause [RESERVE integer AREA ] can be coded in the SELECT
statement. The number of buffers to be allocated for the file is coded here.
By default two buffers will be allocated if the clause is not coded. Since similar option
is available in JCL, this is not coded in program.
Page:77
RESERVE 1 AREA allocates one buffer, for the file in the SELECT statement.
Page:78
OPEN STATEMENT
Syntax: OPEN OPENMODE FILENAME
OPENMODE can be INPUT, OUTPUT, I-O or EXTEND.
INPUT   - The file can be used only for READ operations.
OUTPUT  - The file can be used only for WRITE operations.
I-O     - The file can be used for READ, WRITE and REWRITE operations.
EXTEND  - The file can be used for appending records using WRITE.
CLOSE statement.
The used files are closed using the CLOSE statement. If you don't close the files,
the completion of the program closes all the files used in the program.
Syntax:
CLOSE FILENAME
OPEN and CLOSE for TAPE files - Advanced
If more than one file is stored in a reel of tape, it is called a multi-file
volume. When one file is stored in more than one reel of tape, it is called a
multi-volume file. One reel is known as one volume. When the end of one volume is
reached, the next volume is opened automatically, so no special control is needed for
multi-volume files.
OPEN INPUT file-1 [WITH NO REWIND | REVERSED]
OPEN OUTPUT file-2 [WITH NO REWIND]
CLOSE file-3 [{REEL|UNIT} [WITH NO REWIND | FOR REMOVAL]]
CLOSE file-3 [WITH NO REWIND | LOCK]
UNIT and REEL are synonyms.
After opening a TAPE file, the file is positioned at its beginning. If the REVERSED
clause is coded when opening the file, then the file can be read in the REVERSE
direction (provided the hardware supports this feature).
When you close the file, the tape is normally rewound. The NO REWIND
clause specifies that the TAPE should be left in its current position.
The CLOSE statement with the REEL option closes the current reel alone, so the next
READ will get the first record of the next REEL. This is useful when you want to skip
the remaining records in the current reel after processing n records.
Since TAPE is a sequential device, if you create multiple files on the same TAPE,
then before opening the second file the first file should be closed. At any point in
time, you can have only one file active in the program. In addition to this, you have to
code the MULTIPLE FILE clause in the I-O-CONTROL paragraph of the environment division.
MULTIPLE FILE TAPE CONTAINS
OUT-FILE1 POSITION 1
OUT-FILE3 POSITION 3.
The files OUT-FILE1 and OUT-FILE3 used in the program are part of a same
TAPE and they exist in first and third position in the tape. Alternatively, this
information can be passed from JCL using LABEL parameter.
Page:79
READ statement
READ statement is used to read the record from the file.
Syntax:
READ FILENAME [INTO ws-record] [KEY IS FILE-KEY1]
[AT END/INVALID KEY imperative statement1]
[NOT AT END/NOT INVALID KEY imperative statement2]
END-READ
If the INTO clause is coded, then the file is read directly into the working storage
section record. It is preferred, as it avoids a separate move of the file-section record to
the working-storage record after a simple READ. READ-INTO is not preferred for
variable-size records, where the length of the record being read is not known.
The KEY IS clause is used while accessing a record randomly using the
primary/alternate record key.
AT END and NOT AT END are used during sequential READ of the file.
INVALID KEY and NOT INVALID KEY are used during random READ of the file.
Before accessing the file randomly, the key field should be given a value.
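Putting the above together, a minimal sequential-read sketch (the file name EMPFILE, record WS-EMP-REC, paragraph 200-PROCESS-RECORD and the FILE STATUS field WS-FILE-STAT1 are assumed; '10' is the end-of-file status):

        OPEN INPUT EMPFILE
        PERFORM UNTIL WS-FILE-STAT1 = '10'
            READ EMPFILE INTO WS-EMP-REC
                AT END CONTINUE
                NOT AT END PERFORM 200-PROCESS-RECORD
            END-READ
        END-PERFORM
        CLOSE EMPFILE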
WRITE Statement
The WRITE statement is used to write a new record in the file. If the file is opened
in EXTEND mode, the record will be appended. If the file is opened in OUTPUT mode,
the record will be added at the current position.
Syntax:
WRITE FILE-RECORD [FROM ws-record]
      [INVALID KEY imperative statement1]
END-WRITE
The FROM clause avoids the explicit move of the working storage record to the file
section record before the WRITE.
REWRITE Statement
REWRITE is used to update an already read record. To update a record in a
file, the file should be opened in I-O mode.
Syntax:
REWRITE FILE-RECORD [FROM ws-record]
[INVALID KEY imperative statement1]
END-REWRITE
START Statement
START is used with dynamic access mode of indexed files. It establishes the
current location in the cluster for READ NEXT statement. START itself does not
retrieve any record.
Syntax:
START FILENAME
KEY is EQUAL TO/NOT LESS THAN/GREATER THAN key-name
[INVALID KEY imperative statement1]
Page:80
END-START.
DELETE Statement
DELETE is used to delete the most recently read record in the file. To delete a
record, the file should be opened in I-O mode.
Syntax:
DELETE FILENAME RECORD
[INVALID KEY imperative statement1]
END-DELETE
Reports FBA:
Reports contain a printing control character in the first byte. The record format
will be FBA and the LRECL will be 133 in the JCL. The program can define the
printing control character and populate it manually, or it can define the layout with
132 bytes and use the WRITE verb with AFTER/BEFORE ADVANCING. In the second
case the printing control character is added automatically at compile time by the
default ADV compiler option.
Printing control characters: (blank), 0, +, 1
Page:81
Page:82
careful while filling in the record area of the output file. This may destroy the record
read most recently.
Syntax: SAME RECORD AREA FOR file-1 file-2 file-3.
SAME SORT AREA allows more than one sort/merge work files to use the
same area. The sort work files are automatically allocated when file is opened and
de-allocated when file is closed. As the sort file is automatically opened and closed
during a SORT and two sort files cannot be opened at a time, this clause may not be
useful.
Syntax: SAME SORT|SORT-MERGE AREA for file-1 file-2.
File-1 or file-2 should be a SD file.
I-O CONTROL- RERUN Clause
RERUN ON rescue FOR EVERY integer RECORDS on file-1
This will cause checkpoint to be taken for every integer-1 records processing of
file-1. If the program ABENDED before the complete processing of the file-1, then
the program will restart from integer+1ST record instead of first record. The rescue
file details should be mentioned outside the program and it varies from installation to
installation.
ENTRY statement
ENTRY statement establishes an alternate ENTRY point in a COBOL called
sub-program. When a CALL statement naming the alternate entry point is executed
in a calling program, control is transferred to the next executable statement following
the entry statement. Except when a CALL statement refers to an entry name, the
ENTRY statements are ignored at run-time.
Page:83
Page:84
COBOL COMPILATION
[Compile and link-edit flow]
IGYCRCTL (COBOL compiler) - reads the source (SYSIN), the copybook library (SYSLIB)
and the compiler options (PARM); produces the compiler listing (SYSPRINT) and the
object module (SYSLIN).
IEWL (link editor) - reads the object module (SYSLIN), the subroutine library (SYSLIB)
and the link-edit options (PARM); produces the load module (SYSLMOD) and the
link-edit messages (SYSPRINT).
COMPILATION JCL:
//SMSXL86B JOB ,'COMPILATION JCL',MSGCLASS=Q,MSGLEVEL=(1,1),CLASS=C
//COMPILE1 EXEC PGM=IGYCRCTL,PARM=(XREF,APOST,ADV,MAP,LIST),REGION=0M
//STEPLIB  DD DSN=SYS1.COB2LIB,DISP=SHR
//SYSIN    DD DSN=SMSXL86.TEST.COBOL(SAMPGM01),DISP=SHR
//SYSLIB   DD DSN=SMSXL86.COPYLIB,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSLIN   DD DSN=&&LOADSET,DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200),
//            DISP=(NEW,PASS),UNIT=SYSDA,SPACE=(CYL,(5,10),RLSE)
//SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(1,10))      => Code SYSUT2 to SYSUT7
//LINKEDT1 EXEC PGM=IEWL,COND=(4,LT)
//SYSLIN   DD DSN=&&LOADSET,DISP=(OLD,DELETE)
//SYSLMOD  DD DSN=&&GOSET(SAMPGM01),DISP=(NEW,PASS),UNIT=SYSDA,
//            SPACE=(CYL,(1,1,1))
//SYSLIB   DD DSN=SMSXL86.LOADLIB,DISP=SHR
//SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(1,10))
//SYSPRINT DD SYSOUT=*
//*** EXECUTE THE PROGRAM ***
//EXECUTE1 EXEC PGM=*.LINKEDT1.SYSLMOD,COND=(4,LT),REGION=0M
//STEPLIB  DD DSN=SMSXL86.LOADLIB,DISP=SHR
//         DD DSN=SYS1.SCEERUN,DISP=SHR
//SYSOUT   DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
Page:85
Compiler Options
The default options that were set up when your compiler was installed are in
effect for your program unless you override them with other options. To check the
default compiler options of your installation, do a compile and check in the
compilation listing.
Ways of overriding the default options
1.Compiler options can be passed to COBOL Compiler Program (IGYCRCTL) through
the PARM in JCL.
2.PROCESS or CBL statement with compiler options, can be placed before the
identification division.
3.If the organization uses any third party product or its own utility then these options
can be coded in the pre-defined line of the utility panel.
Precedence of Compiler Options
1. (Highest precedence). Installation defaults, fixed by the installation.
2. Options coded on PROCESS /CBL statement
3. Options coded on JCL PARM parameters
4. (Lowest Precedence). Installation defaults, but not fixed.
The complete list of compiler options is in the table:
Aspect                   Compiler Options
Source Language          APOST, CMPR2, CURRENCY, DBCS, LIB, NUMBER, QUOTE, SEQUENCE, WORD
Date Processing          DATEPROC, INTDATE, YEARWINDOW
Maps and Listing         LANGUAGE, LINECOUNT, LIST, MAP, OFFSET, SOURCE, SPACE, TERMINAL, VBREF, XREF
Object Deck generation   COMPILE, DECK, NAME, OBJECT, PGMNAME
Object Code Control      ADV, AWO, DLL, EXPORTALL, FASTSRT, OPTIMIZE, NUMPROC, OUTDD, TRUNC, ZWB
Debugging                DUMP, FLAG, FLAGMIG, FLAGSTD, SSRANGE, TYPECHK
Other                    ADATA, ANALYZE, EXIT, IDLGEN
ADV: It is meaningful if your program has any printer files with WRITE..ADVANCING
keyword. The compiler adds one byte prefix to the original LRECL of printer files for
printing control purpose. If you are manually populating printing control character in
the program, then you can compile your program with NOADV.
DYNAM: Use DYNAM to cause separately compiled programs invoked through the
CALL literal statement to be loaded dynamically at run time. DYNAM causes dynamic
loads (for CALL) and deletes (for CANCEL) of separately compiled programs at object
time. Any CALL identifier statements that cannot be resolved in your program are
also treated as dynamic calls. When you specify DYNAM, RESIDENT is also put into
effect.
Page:86
LIST/OFFSET: LIST and OFFSET are mutually exclusive. If you use both, LIST will be
ignored. LIST is used to produce a listing of the assembler language expansion
of your code. OFFSET is used to produce a condensed Procedure Division listing.
With OFFSET, the procedure portion of the listing will contain line numbers,
statement references, and the location of the first instruction generated for each
statement. These options are useful for solving system ABENDs. Refer to the JCL
session for more details.
MAP:
Use MAP to produce a listing of the items you defined in the Data Division.
SSRANGE: If the program is compiled with the SSRANGE option, then any attempt to
refer to an area outside the region of the table will abnormally terminate with a
protection exception, usually S0C4. It also avoids any meaningless operation on
reference modification, like a negative number in the starting position of a reference
modification expression. If the program is compiled with NOSSRANGE, then the
program may proceed further with junk or irrelevant data. So usually programs
are compiled with SSRANGE during development and testing.
RENT: A program compiled as RENT is generated as a reentrant object module. CICS
programs should be compiled with RENT option to share the same copy of the
program by multiple transactions (Multithreading)
RESIDENT: Use the RESIDENT option to request the COBOL Library Management
Feature. (The COBOL Library Management Feature causes most COBOL library
routines to be located dynamically at run time, instead of being link-edited with the
COBOL program.) CICS programs should be compiled with the RESIDENT option.
XREF: Use XREF to get a sorted cross-reference listing. EBCDIC data-names and
procedure-names will be listed in alphanumeric order. The listing also shows, for
every data-name referenced within your program, the line number where it is
defined. This is useful for identifying fields that are defined but not used anywhere
after the development of a new program.
Page:87
VS COBOL2 (COBOL85)
Allows 31-bit addressing and makes it easier to develop large applications.
Nested programs are allowed. This improves productivity because different
programmers can develop them at the same time, and it promotes data sharing and
data protection.
Explicit scope terminators are introduced.
The POINTER clause is added.
INSPECT replaces EXAMINE with advanced capabilities.
Page:88
Page:89
Page:90
CLUSTER
A cluster can be thought of as a logical dataset consisting of two separate
physical datasets:
1 The data component (contains the actual data).
2 The index component (contains the actual index).
All types of VSAM datasets are called clusters even though KSDS is the only
type that fulfills the cluster concept. ESDS and RRDS don't have an index component.
Data Component.
Control Intervals and Control Areas
VSAM stores records in the data component of the Cluster in units called
control intervals.
The control interval is the VSAM equivalent for a block and it is the unit of
data that is actually transmitted when records are read or written. Thus many logical
records form a control interval and many control intervals form a control area.
The Control Area (CA) is a fixed-length unit of contiguous DASD storage
containing a number of CIs. The control area is VSAM's internal unit for allocating
space within a cluster. The primary and secondary allocations consist of some
number of CAs. The minimum size of a control area is 1 track and the maximum size is
1 cylinder.
Format of Control Interval
There are four separate areas within a control Interval.
1. Logical Record Area (LRA) that contains the records.
2. Free Space (FSPC). This area can be used for adding new records.
3. Unused Space. As the FSPC may not be a multiple of the record length, there will
always be some unused space in every CI. This can be minimized for fixed-length
records by selecting a proper control interval size.
4. Control Fields.
CIDF (Control Interval Definition Field) - a 4-byte field containing information
on free space availability in the control interval. One per control interval.
RDF (Record Definition Field) - a 3-byte field. For fixed-length records, there
will be 2 RDFs: the first contains the number of records in the control interval and
the second contains the record length. For variable-length records, the number of
RDFs can vary depending on how many adjacent records have the same length
in the CI. If no two adjacent records are of the same length, then one RDF is
needed to describe each record.
Index Component (Sequence Set and Index Set)
Besides the data component, VSAM creates an index component. The index
component consists of the index set and the sequence set. The sequence set is the lowest
level of the index; it contains primary keys and pointers to the control intervals of the
data component.
There is one sequence set per control area. The highest record key of
every control interval is stored in the sequence set. The highest record key of the
sequence set is stored in the first level of the index set. Based on the size of the control
interval of the index component, there will be 1-3 levels of index sets in the index
component of the dataset.
Page:91
Page:92
If I am accessing the first record, then sequential read needs only one I-O but
obviously my random read needs more. So we prefer indexed organization only when
the number of records is significant.
[Diagram: three-level index structure - a root index, first-level index and sequence set
whose key entries (e.g. 4, 8, 14, 17, 23, 35, 50) point to the data-component control
areas CA-1 through CA-4.]
Page:93
ESDS vs KSDS vs RRDS
Location of the record:
    ESDS - Based on entry sequence.
    KSDS - Based on collating sequence by key field.
    RRDS - Based on relative record number order.
Access:
    ESDS - Only sequential access is possible.
    KSDS - Sequential and random access is possible. Random access is thru the
           primary/alternate key.
    RRDS - Can be accessed directly using the relative record number, which serves as
           the address. A relative record number cannot be changed.
Alternate INDEX:
    KSDS - May have one or more alternate indexes.
    RRDS - Not applicable.
Free Space:
    KSDS - Distributed free space for inserting records in between or changing the
           length of an existing record.
    RRDS - Free slots are available for adding records at their location.
Deletion:
    ESDS - Records cannot be deleted. REWRITE of the same length is possible.
    KSDS - DELETE is possible.
    RRDS - DELETE is possible.
Record Size:
    ESDS - Fixed or variable.
    KSDS - Fixed or variable.
    RRDS - Fixed.
SPANNED records:
    ESDS - Possible.
    KSDS - Possible.
    RRDS - Not possible.
Speciality:
    KSDS - Easy RANDOM access and the most popular method.
    RRDS - Fastest access method.
Preference:
    KSDS - Applications that require each record to have a key field and require both
           direct and sequential access. Ex: banking application.
    RRDS - Applications that require only direct access. There should be a field in the
           record that can be easily mapped to the RRN.
Page:94
DEFINE            - To create objects (KSDS/ESDS/RRDS/GDG/VSAM space etc.).
ALTER             - To alter the parameters of an object that already exists.
PRINT             - To print and view the selected records of a dataset.
DELETE            - To delete the objects.
LISTCAT           - To view the complete information about any object.
REPRO             - Copy/Restore/Merge utility for VSAM and non-VSAM files.
EXPORT and IMPORT - To copy and restore datasets across systems.
VERIFY            - To properly close an unclosed/ABENDed VSAM dataset.
Sample IDCAMS JCL
//JS10     EXEC PGM=IDCAMS,REGION=1024K,PARM=parameters
//STEPCAT  DD DSN=..,DISP=SHR              Optional STEPCAT
//SYSPRINT DD SYSOUT=*                     IDCAMS messages
//SYSIN    DD *
    Control statements
/*
//
Guidelines for coding commands
1. At least one blank space must be there between the command and the object and
between sub parameter values.
2. A space is optional between a parameter and the parenthesis enclosing its values.
3. Sub-parameter values can be separated using a space or a comma.
4. Multiple parameters can be coded on a line but it is better to place each on a line
by itself for better readability.
5. A parameter and a sub-parameter cannot be separated with a hyphen. A plus sign
should be used for this purpose.
Page:95
DEFINE CLUSTER
This command is used to create and name a VSAM Cluster.
Basic Parameters for Define Cluster
DEFINE CLUSTER-NAME
This parameter specifies name to the VSAM cluster. The cluster name
becomes the dataset name in any JCL that invokes the cluster
//INPUT   DD DSN=SMSXL86.TEST.VSAM,DISP=SHR
The name for a VSAM dataset can include up to 44 alphanumeric characters.
When the data and index parameters are coded to create the data and index
components, the name parameter is coded for them as well. If the name parameter
is omitted for the data and index components, VSAM appends .DATA or .INDEX (or
part of it) as the low-level qualifier, depending on how many characters the dataset
name already contains, while staying within the 44-character limit.
If data and index components are named, parameter values can be applied
separately. This gives performance advantages for large datasets.
DEFINE CLUSTER-DATA COMPONENT
The data parameter instructs IDCAMS that a separate data component is to
be created. Data is optional but if coded must follow all parameters that apply to the
cluster as a whole. There are several options for the data parameter but name is the
most common.
DEFINE CLUSTER-INDEX COMPONENT
The index parameter creates a separate index component. Index is optional
but if coded must follow all of the parameters that apply only to the data component.
ESDS and RRDS must not have INDEX part.
DEFINE CLUSTER-SPACE Allocation
A VSAM dataset can have 123 extents in a VOLUME. Primary space is
allocated initially when the dataset is created and based on request secondary space
will be allocated. In the best case, you will get (primary +122 * secondary).
Refer the JCL section-SPACE parameter section for more details on EXTENTS.
VSAM calculates the control area size internally. A control area of one
cylinder, the largest permitted by VSAM, usually yields the best performance. So it is
always better to allocate space in cylinders, because this ensures a CA size of one
cylinder.
Page:96
The RECORDS parameter is used to allocate space in units of records for small
datasets. When this is done the RECORDSIZE parameter must be specified.
If allocation is specified in units of KILOBYTES or MEGABYTES VSAM reserves space
on the minimum number of tracks it needs to satisfy the request.
Syntax: UNIT(primary secondary)
UNIT can be CYL/CYLINDERS, TRK/TRACKS, REC/RECORDS, KB/KILOBYTES or
MB/MEGABYTES.
Page:97
The size of the CI is specified by the CISZ parameter. The size of the CI is
specified in bytes and it should be a multiple of 512 or 2048 depending on the type
of catalog (ICF or VSAM) and the length of the record.
For datasets cataloged in ICF catalogs the control interval should be a
multiple of 512 bytes with a range of 512 to 32768 bytes.
For datasets cataloged in VSAM catalogs, the data CISZ must be a multiple of
512 if records are of 8192 bytes or less, and a multiple of 2048 if records are of
more than 8192 bytes. If a CISZ, which is not a multiple of the above two is
assigned VSAM rounds the CISZ value up to the next highest multiple if necessary.
Page:98
Page:99
DEFINE CLUSTER -
       (NAME(NTCI.V.UE4.W20000.T30.AV.DW200006) -
       CYL(5 1) -
       KEYS(8 0) -
       RECSZ(80 80) -
       KEYRANGES((00000001 29999999) -
                 (30000000 47000000) -
                 (47000001 99999999)) -
       VOLUMES(NTTSOB NTTSOJ NTTSO5) -
       ORDERED -
       NOREUSE -
       INDEXED -
       ... more parameters
When the ORDERED parameter is coded, the number of VOLUMES and KEYRANGES
must be the same.
DEFINE CLUSTER- REUSE Parameter (RUS)
This specifies that a cluster can be opened again and again as a reusable
cluster. It means that whenever you open the dataset in OUTPUT mode, all the records
that already exist in the dataset are logically deleted. NOREUSE (UNRUS) is the
default and specifies the cluster as non-reusable.
A cluster cannot be reused if
1. KEYRANGES parameter is coded for it
2. An alternate index is built for it
3. The cluster has its own data space in both the VSAM and ICF catalog
environments.
DEFINE CLUSTER-IMBED And REPLICATE Parameters (IMBD/ REPL)
These parameters are applicable to a KSDS only. The IMBED parameter
implies that the sequence set (lowermost level) of the index component of a KSDS
will be placed on the first track of a data component CA and will be duplicated as
many times as possible on that track.
What IMBED does for a sequence set REPLICATE does for an index set. The
REPLICATE parameter forces each CI of the index set of the index component to be
written on a separate track and replicated as many times as possible. This parameter
reduces rotational delay when VSAM is accessing high-level index CI.
The IMBED option reduces the seek time it takes for the read-write head to
move from the index to the data component and the replication factor reduces the
rotational delay of the revolving disk.
NOIMBED(NIMBD) and NOREPLICATE(NREPL) are the defaults.
DEFINE CLUSTER-WRITECHECK Parameter (WCK)
This parameter instructs VSAM to invoke a DASD verify facility whenever a
record is written. NOWRITECHECK (NWCK) is the default and provides no DASD
verification. Since contemporary DASD devices are very reliable and because of the
high performance overhead associated with this parameter it is better to accept the
NOWRITECHECK option.
Page:100
SHAREOPTIONS values (CR = cross-region, CS = cross-system):
Value        Meaning
1 (CR)       READ and WRITE integrity. Any number of jobs can read the dataset OR
             only one job can write to the dataset.
2 (CR)       Only WRITE integrity. Any number of jobs can read the dataset AND one
             job can write to the dataset.
3 (CR, CS)   NO integrity. The file is fully shared. It is the programmer's
             responsibility to take a proper lock over the file before use. Default
             value for CS (cross-system).
4 (CR, CS)   Same as 3, but additionally forces a buffer refresh for each random
             access.
Page:101
The CODE parameter allows for the specification of a code to display to the
operator in place of the entry name prompt.
The AUTHORIZATION parameter provides for additional security by naming an
assembler User Security Verification Routine (USVR). The sub parameter for this
enclosed in parenthesis is the entry point of the routine.
TO And FOR Parameters
When a dataset is allocated by Access Method Services the TO and FOR
parameters are two mutually exclusive parameters given to specify the retention
period of the cluster being defined.
TO(YYDDD)
or
FOR(nnnn)
YYDDD is Julian date & nnnn can be 1-9999
KSDS Definition
//KSDSMAKE EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
    DEFINE CLUSTER -
           (NAME(EMPLOYEE.KSDS.CLUSTER) -
           VOLUMES(VSAM02) -
           CYLINDERS(2,1) -
           CONTROLINTERVALSIZE(4096) -
           FREESPACE(10,20) -
           KEYS(9,0) -
           RECORDSIZE(50,50)) -
    DATA (NAME(EMPLOYEE.KSDS.DATA)) -
    INDEX (NAME(EMPLOYEE.KSDS.INDEX) -
           CONTROLINTERVALSIZE(1024)) -
    CATALOG(VSAM.USERCAT.TWO)
/*
ESDS Definition
//ESDSMAKE EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
    DEFINE CLUSTER -
           (NAME(EMPLOYEE.ESDS.CLUSTER) -
           VOLUMES(VSAM02) -
           CYLINDERS(2,1) -
           CONTROLINTERVALSIZE(4096) -
           RECORDSIZE(50,50) -
           NONINDEXED) -
    DATA (NAME(EMPLOYEE.ESDS.DATA)) -
    CATALOG(VSAM.USERCAT.TWO)
/*
Page:102
RRDS Definition
//RRDSMAKE EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
    DEFINE CLUSTER -
           (NAME(EMPLOYEE.RRDS.CLUSTER) -
           VOLUMES(VSAM02) -
           CYLINDERS(2,1) -
           CONTROLINTERVALSIZE(4096) -
           RECORDSIZE(50,50) -
           NUMBERED) -
    DATA (NAME(EMPLOYEE.RRDS.DATA)) -
    CATALOG(VSAM.USERCAT.TWO)
/*
LDS Definition
//LDSMAKE EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
    DEFINE CLUSTER -
           (NAME(EMPLOYEE.LDS.CLUSTER) -
           VOLUMES(VSAM02) -
           CYLINDERS(2,1) -
           LINEAR) -
    DATA (NAME(EMPLOYEE.LDS.DATA)) -
    CATALOG(VSAM.USERCAT.TWO)
/*
Page:103
Page:104
length is greater than 32 characters the rest of the record is printed in blocks of 64
hex digits and 32 corresponding characters per line.
PRINT SKIP, COUNT, FROM and TO
The records to be printed can be selected in the same way records are
selected in REPRO to copy.
Where to start/stop printing              Where used
SKIP(number)                              KSDS, ESDS, RRDS, non-VSAM
FROMKEY(key-value), TOKEY(key-value)      KSDS, ALTERNATE INDEX
FROMADDRESS(rba), TOADDRESS(rba)          KSDS, ESDS
FROMNUMBER(rrn), TONUMBER(rrn)            RRDS
Page:105
LISTCAT can report different levels of detail about an object: NAME, HISTORY,
VOLUME, ALLOCATION and ALL.
Page:106
Page:107
EXPORT - ERASE|NOERASE
This specifies whether the data component of the cluster or alternate index to
be exported is to be erased or not (overwritten with binary zeros).
With ERASE specification, the data component is overwritten with binary zeros
when the cluster or alternate index is deleted.
With NOERASE specification, the data component is not overwritten with
binary zeros when the cluster or alternate index is deleted.
Example:
//EXPORT   EXEC PGM=IDCAMS
//DD2      DD DSN=SMSXL86.LIB.KSDS.BACKUP(+1),
//            DISP=(NEW,CATLG,DELETE),UNIT=TAPE,
//            VOL=SER=121212,LABEL=(1,SL),
//            DCB=(RECFM=FB,LRECL=80)
Page:108
//SYSIN    DD *
    EXPORT A2000.LIB.KSDS.CLUSTER -
           OUTFILE(DD2)
/*
Page:109
Page:110
OUTDATASET(SMSXL86.TEST.KSDS)
IF LASTCC = 0
THEN
LISTCAT
ENTRIES(SMSXL86.TEST.KSDS) ALL
ELSE
DELETE SMSXL86.TEST.KSDS
END-IF
SET MAXCC=0
/*
Severity of Error   Meaning
No Error            The functional command completed its processing successfully.
Minor Error         Processing is able to continue, but a minor error occurred, causing
                    a warning message to be issued.
Major Error         Processing is able to continue, but a more severe error occurred,
                    causing major command specifications to be bypassed.
Logical Error       Generally, inconsistent parameters are specified, causing the
                    entire command to be bypassed.
Severe Error        An error of such severity occurred that not only can the command
                    causing the error not be completed, the entire AMS command
                    stream is flushed.
ALTERNATE INDEX
Need for AIX
1. KSDS dataset is currently accessed using a primary key. We want to access
the dataset RANDOMLY with another key.
2. We want to access ESDS file RANDOMLY based on key.
Steps Involved
Step1. Define AIX.
DEFINE ALTERNATEINDEX command defines an alternate index. Important
parameters of this command are:
Parameter
RELATE
Meaning
Relates AIX with base cluster
Page:111
NONUNIQUE/
UNIQUE
KEYS
UPGRADE
RECORDSIZE
Step 2. BLDINDEX
The alternate index should have all the alternate keys with their corresponding
primary key pointers. After the AIX is defined, this information should be loaded from
the base cluster; only then can we access the records using the AIX. BLDINDEX does
this LOAD operation.
Important parameters of this command are:
1. INFILE and OUTFILE point to the base cluster and the alternate index cluster.
2. INTERNALSORT, EXTERNALSORT, WORKFILES:
The loaded AIX should have sorted alternate record keys with the base cluster
keys. So an intermediate SORT is required between reading the base cluster and
loading the AIX. We should allocate the work datasets IDCUT1 and IDCUT2 for this
SORT.
Page:112
Example:
1. Command that defines an alternate index:
DEFINE AIX ( NAME(MMA2.EMPMAST.SSN.AIX) -
             RELATE(MMA2.EMPMAST) -
             KEYS(9 12) -
             UNIQUEKEY -
             UPGRADE -
             REUSE -
             VOLUMES(MPS800) ) -
       DATA ( NAME(MMA2.EMPMAST.SSN.AIX.DATA) -
              CYLINDERS(1 1) ) -
       INDEX ( NAME(MMA2.EMPMAST.SSN.AIX.INDEX) )
Page:113
BUFND
This parameter gives the number of I/O buffers needed for the data
component of the cluster. The size of each buffer is the size of the data CI.
The default value is two data buffers, one of which is used only during CI/CA splits.
Therefore the number of data buffers left for normal processing is one.
If more data buffers are allocated, then the performance of sequential processing will
improve significantly.
Page:114
BUFNI
This parameter gives the number of I/O buffers needed for the index
component of the cluster. Each buffer is the size of the index. This sub-parameter
may be coded only for a KSDS because ESDS and RRDS do not have index
components. The default value is one index buffer.
If more index buffers are allocated, then performance of random processing
will improve significantly.
BUFSP
This parameter indicates the number of bytes for data and index component
buffers. If this value is more than the value given in the BUFFERSPACE parameter of
the DEFINE CLUSTER, it overrides the BUFFERSPACE. Otherwise BUFFERSPACE takes
precedence. The value of BUFSP is calculated as
BUFSP = DATA CISIZE x BUFND + INDEX CISIZE x BUFNI
However it is recommended not to code this parameter and let VSAM perform the
calculations from the BUFND and BUFNI values instead.
Other parameters are TRACE, STRNO, RECFM, OPTCD, CROPS.
If SMS is active, then VSAM datasets can be created in JCL without using IDCAMS, as
below:
//KSDSFILE DD DSN=DEVI.CUST.MASTER,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(10,10)),
//            LRECL=100,KEYOFF=10,KEYLEN=12,RECORG=KS
RECORG can also be ES (for Entry Sequenced Datasets), RR (for Relative Record
Datasets) or LS (for Linear Datasets). Other parameters of DEFINE CLUSTER will be
assigned default values, or you can additionally mention the SMS parameter DATACLASS,
which is defined with predefined values.
DataBase 2
DB2
Page:115
Page:116
DB2 (Database 2)
History
IBM's first database is IMS, which is a hierarchical database. IBM introduced its
second database, based on relational concepts, in the 1980s, and it is called
Database 2 (DB2).
Advantage of DBMS over File Management System
1.Increased Independency
If a new field is added to a file, the layout of the file should be changed in all
the programs with new field, though the programs do not use this field. But if a new
column is added to a table, it does not need any program change, as long as the
programs do not need this field. Thus programs using Databases are developed
independent of physical structure.
2.Less Redundancy
In file system, the same data is stored in multiple files to meet the
requirements of different applications. In DB2, we can store the common information
in one table and make all the applications to share it. By means of centralized control
and storing the fact at the right place and only once, data redundancy is reduced.
3.Increased Concurrency and Integrity
Concurrency allows more than one task to share a resource concurrently.
Integrity is defined as validity or correctness of data. In file system, if a file is
allocated with OLD disposition, then no other program can use it. DB2 allows more
than one task to use the same table in update mode. At the same time, integrity of
data is ensured by its sophisticated locking strategy.
4.Improved Security
In file system, all the records in the file are subjected to a specific access
right. The access cannot be controlled at field level or set level. But in DB2, we can
restrict access on column level and set level.
Types of Database
Hierarchical: The relation between entities is established using a parent-child
relationship (inverted tree structure). This is the first logical model of DBMS. Data is
stored in the form of SEGMENTS. Example: IMS.
Network: Any entity can be associated with any other entity, so distinguishing
between parent and child is not possible. This is a linked-structure model. Data is
stored in RECORDs and different record types are linked by SETS. Example: IDMS.
Relational:
Page:117
DB2 Objects
Storage group is a collection of direct access volumes, all of the same type.
DB2 objects are allocated in Storage group. A storage group can have maximum 133
volumes.
[Diagram: Storage Group 1 containing volumes DASD001, DASD002 and DASD003.]
[Diagram: a simple table space - rows of different tables (Table1, Table2, Table3)
interleaved within Pages 1-3; a segmented table space - each segment (1-3) holds rows
of only one table; a partitioned table space - rows of a single table spread across
Partitions 1-3.]
DB2 Object-Database
Generally, an index is used to access a record quickly. There are two types of
indexes: clustering and non-clustering. If there is a clustering index available for a
table, then a data row is stored on the same page as the other rows with
similar index keys. There can be only one clustering index per table. Refer to the
diagram on the next page for a better understanding of clustered and non-clustered
indexes.
Understanding: Root Page, Leaf Page and Non-Leaf page
Referring to VSAM basics, sequence set has direct pointers to control interval
and last level indexes have pointers to sequence set and higher-level indexes have
Page:119
pointers to lower level indexes and there is one root index from where the B-tree
starts.
Similarly Leaf-pages contain full keys with row-ids that points to data page. (Row id
has page number and line number). Non-leaf pages have pointers to leaf pages or
lower level non-leaf pages and the root page has pointer to first level non-leaf pages.
View of clustered index (diagram): the root page (*25 *50) points to non-leaf pages
(*7 *17 *25 / *30 *42 *50), the non-leaf pages point to leaf pages containing full keys
with row-ids (e.g. *1 *3 *7), and the leaf pages point to data pages where records with
similar keys (1 3 7 / 11 13 17 / 19 20 25) are stored together.
View of non-clustered index (diagram, non-leaf pages and data pages shown): the leaf
entries (*1 *3 *7) point to records that are scattered across data pages
(1 3 20 / 7 11 19 / 13 17 25).
Unlike table spaces, Index spaces are automatically created when indexes are
created. Indexes are stored in index spaces.
DB2 Object - Alias & Synonym
Alias and Synonym are alternate names for a table. The main differences
between them are listed below:
SYNONYM: Private object. Only the user who created it can access it.
ALIAS: Global object. Accessible by anyone.
SYNONYM: Used in the local environment to hide the high-level qualifier.
ALIAS: Used in the distributed environment to hide the location qualifier.
SYNONYM: When the base table is dropped, associated synonyms are automatically dropped.
ALIAS: When the base table is dropped, aliases are not dropped.
SYNONYM: SYSADM authority is not needed.
ALIAS: To create an alias, we need SYSADM authority.
Page:120
DB2 Object-View
Views provide an alternative way of looking at the data in one or more tables.
A view is a named specification of a result table. For retrieval, all views can be used
like base tables. Maximum 15 base tables can be used in a view.
Advantages of view
1. Data security: views allow setting up different security levels for the same base
table. Sensitive columns can be secured from unauthorized IDs.
2. They can be used to present additional information, like derived columns.
3. They can be used to hide complex queries. A developer can create a view that results
from a complex join on multiple tables, but the user can simply query this view
as if it were a separate base table, without knowing the complexity behind it.
4. They can be used for domain support. A domain identifies a valid range of values that a
column can contain. This is achieved in a VIEW using the WITH CHECK OPTION.
Read Only Views
1.It should be derived from one base table and should not contain derived columns.
2.Views should not contain GROUP BY, HAVING and DISTINCT clauses at the
outermost level.
3.Views should not be defined over read-only views.
4.View should include all the NOT NULL columns of the base table.
DB2 Object- Catalog
It is a data dictionary for DB2, supporting and maintaining data about the
DB2 environment. It consists of 54 tables in the system database DSNDB06.
The data in DB2, about the data in the tables are updated when RUNSTATS utility
runs. The information in the DB2 catalog table can be accessed using SQL.
DB2 Object- Directory
This is second data dictionary of DB2 and used for storing detailed, technical
information about the aspects of DB2s operation. It is for DB2 internal use only.
It cannot be accessed using SQL statements. It is stored in the system database
DSNDB01.
DB2 Object- Active and Archive Logs
DB2 records all the data changes and significant events in active logs as and
when they occur. In case of failure, it uses this data to recover the lost information.
When active log is full, DB2 copies the content of active log to a DASD or magnetic
tape data set called archive log.
DB2 Object- Boot Strap Dataset
Page:121
DB2, two BSDS are created and kept in different volumes. BSDS datasets are VSAM
KSDS.
DB2 Object-Buffer Pools
Buffer Pools are virtual storage areas which DB2 uses for its temporary
purposes. There are fifty 4K buffer pools and ten 32K buffer pools in DB2.
The DB2 default buffer pools are BP0, BP1, BP2 and BP32K.
Program Preparation
The binding process can happen in two stages, BIND PACKAGE and BIND PLAN.
One DBRM is created for one program. If the main program calls n
sub-programs, then there will be n DBRMs in addition to the main program DBRM.
These n + 1 DBRMs can be directly fed to BIND PLAN to produce a single
PLAN, or they can be used to create m intermediate packages, each formed from one or
more DBRMs. These m packages are then fed to the BIND PLAN step, which produces
the PLAN. A package is not executable, but a plan is executable. To run a DB2 program,
a plan is mandatory.
Advantages of packages:
1. When there is a change in a sub-program, it is enough to recompile
the sub-program and create the PACKAGE for that sub-program. There is no need
for a BIND PLAN.
2. Lock options and various bind options can be controlled at sub-program
level.
3. It avoids the cost of a large bind.
4. It reduces rebind time.
5. Versioning: the same program can be bound into packages with
different collection IDs.
Execution of DB2 Program
Page:122
LIB(loadlibrary)
END
/*
[Program preparation flow]
DSNHPC (DB2 pre-compiler) - reads the COBOL-DB2 program (SYSIN), the copybook
library (SYSLIB) and the pre-compiler options (PARM); produces the modified source
and the Database Request Module(s) (DBRM)*.
IGYCRCTL (COBOL compiler) - reads the modified source and the compiler options
(PARM); produces the object module (SYSLIN).
IEWL (link editor) - reads the object module, the subroutine library (SYSLIB) and the
link-edit options (PARM); produces the load module* (SYSLMOD).
IKJEFT01 (BIND PACKAGE / BIND PLAN) - reads the DBRM(s) and the bind parameters
(SYSTSIN); produces the PLAN*.
* A timestamp token is passed from the pre-compilation stage to the load module and
to the plan.
When the plan is accessed for the first time, the timestamp of the plan is verified
against the timestamp of the load module. If they match, then the tables are accessed
using the access paths in the PLAN and processing proceeds. If they don't match, then
the program abnormally ends with an SQLCODE of -818, which means the load module
and plan are out of sync. It usually occurs if we modify the program and pre-compile,
compile and link-edit it, but do not bind the plan with the latest DBRM produced in the
pre-compilation stage.
Page:123
There is a modification in COBOL part. There is no change in any of the DB2
statements in the program. Is binding necessary?
Yes, it is necessary. For COBOL changes, we have to compile and link edit.
For compilation we need the pre-compiled source, so a new time stamp is transferred to
the load module from the pre-compilation stage. If we don't BIND, a timestamp
mismatch is inevitable and the program will abnormally end.
Can this time stamp check be avoided?
If the program is pre-compiled with the LEVEL option, then there won't be a time stamp
verification. But this option is not generally recommended.
Bind Card
BIND PACKAGE sub command is used to bind a DBRM to a package and BIND
PLAN is used to bind a DBRM to a plan or group of packages to a PLAN.
Sample Bind Card:
//BIND EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
DSN
BIND PLAN(EMPPLAN)   -
     MEMBER(EMPDBRM) -
     VALIDATE(BIND)  -
     ISOLATION(CS)   -
     RELEASE(COMMIT) -
     OWNER(SMSXL86)  -
     LIB(SMSXL86.DB2.DBRMLIB)
END
/*
Bind Parameter MEMBER and LIBRARY
The DBRM to be bound is given in MEMBER, and the partitioned dataset containing
the DBRM is given in LIB. During BIND PACKAGE, if an existing package is used to
create the new package, then COPY should be specified instead of MEMBER and
LIBRARY.
COPY(collection-id.package-id)
COPYVER(version-id) - the version of the copy to be used; it defaults to the empty string if omitted.
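For illustration, a BIND PACKAGE card using MEMBER and LIB might look like the following sketch (the collection, DBRM and library names are assumptions, not taken from this text):

DSN
BIND PACKAGE(COLL1)            -
     MEMBER(EMPDBRM)           -
     LIB(SMSXL86.DB2.DBRMLIB)  -
     VALIDATE(BIND)            -
     ISOLATION(CS)             -
     RELEASE(COMMIT)           -
     ACTION(REPLACE)
END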
Bind Parameter PKLIST
PKLIST is a BIND parameter of BIND PLAN. Packages to be connected with
PLAN are named here. PKLIST(COLL1.*) binds all the packages with collection ID
COLL1 to the PLAN.
Package Naming Collection-ID and Versioning
Packages have three qualifiers. Location ID is used in distributed
environment. Collection ID is used for grouping of packages.
Naming syntax: Location-id.Collection-id.Package-name
With packages, the pre-compiler option VERSION can be used to bind
multiple versions of the same program as packages:
PROG --> Pre-compile with VERSION(TEST) --> BIND PACKAGE --> COLL1.PROG.TEST
PROG --> Pre-compile with VERSION(PROD) --> BIND PACKAGE --> COLL1.PROG.PROD
Bind Parameter ACTION
Package or Plan can be an addition or replacement. Default is replacement.
REPLACE will replace an existing package with the same name, location and
collection. REPLVER(version-id) will replace a specific version of an existing package.
ADD adds the new package to SYSIBM.SYSPACKAGE.
Syntax: ACTION(ADD|REPLACE)
Bind Parameter Isolation Level
The isolation level parameter of the BIND command describes to what extent a
program bound to this package can be isolated from the effects of other programs
running. This determines the duration of the page lock.
CURSOR STABILITY (CS) - As the cursor moves from a record in one page to a
record in the next page, the lock on the first page is released (provided the record has
not been updated). It prevents concurrent updates or deletion of the row that is currently
being processed. It provides WRITE integrity.
REPEATABLE READ (RR) - All the locks acquired are retained until the commit point.
Prefer this option when your application has to retrieve the same rows several times
and cannot tolerate different data each time. It provides READ and WRITE integrity.
UNCOMMITTED READ (UR) - It is also known as DIRTY READ. It can be applied only
to retrieval SQL. There are no locks during READ, so it may read data that is
not committed. It is risky; use it only when concurrency is your primary
requirement. It finds great use in statistical calculations over large tables and in
data-warehousing environments.
Bind Parameter ACQUIRE and RELEASE
ACQUIRE(USE) and RELEASE(COMMIT) - DB2 imposes a table or table space
lock when it executes an SQL statement that references a table in the table space,
and it releases the acquired lock on COMMIT or ROLLBACK. This is the default option
and provides the greatest concurrency.
ACQUIRE(USE) and RELEASE(DEALLOCATE) - Locks tables and table spaces on
use and releases them when the plan terminates.
ACQUIRE(ALLOCATE) and RELEASE(DEALLOCATE) - When DB2 allocates the
program thread, it imposes locks on all the tables and table spaces used by the
program. This option avoids deadlocks by locking your resources at the beginning, but it
reduces concurrency.
ACQUIRE(ALLOCATE) and RELEASE(COMMIT) is NOT ALLOWED, as this combination
would increase the frequency of deadlocks.
Bind Parameter SQLERROR
SQLERROR(NOPACKAGE) prevents creation of the package when an SQL error is
detected at bind time, whereas SQLERROR(CONTINUE) creates the package even if
errors are encountered.
Host Variables
DB2 is a separate subsystem. As DB2 data is stored in an external address space,
you cannot read or modify it directly from your program. During a read, the DB2 values
are retrieved into your working storage variables. During an update, the DB2 columns
are updated from your working storage variables. These working storage variables
used for retrieving or updating DB2 data should be of the same size as the DB2 columns.
They are called host variables, as they are defined in the host language (COBOL).
Host variables are prefixed with a colon ( : ) in embedded SQL.
DCLGEN (Declaration Generator)
DCLGEN is a DB2 utility. Given the table details, it generates the DB2 table
structure (DECLARE TABLE) and the host variable structure for the table. It can also generate NULL
indicators for the nullable columns. The output is stored in a PDS member. These
members are included in the program using INCLUDE <pds-member>. INCLUDE
is a pre-compiler statement, so it should be coded within the scope of EXEC SQL
and END-EXEC.
The DCLGEN-generated host variable names are the same as the DB2 column names.
As the underscore is not a valid character in COBOL variable names, it is replaced by a
hyphen in the host variables generated by DCLGEN. A prefix or suffix can be added to all
the variables while creating the DCLGEN copybook.
Host variables can be declared in the working storage section, but they cannot be
stored in copybooks like other file layouts: INCLUDE is expanded at
pre-compilation time, whereas COPY is expanded only at compilation time, so declaring
host variables in a copybook would produce errors during pre-compilation.
DECLARE TABLE, that is the table structure part of DCLGEN, is NOT really needed for
pre-compilation. But if it is used, then any misspelled table or column names are
trapped in the pre-compilation stage itself.
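As an illustration, DCLGEN output for a hypothetical EMPTABLE (the table and column names are assumptions, not defined in this text) would look roughly like this - the DECLARE TABLE followed by the matching host-variable structure:

EXEC SQL DECLARE EMPTABLE TABLE
    ( EMPID          INTEGER     NOT NULL,
      EMPNAME        CHAR(30)    NOT NULL,
      QUALIFICATION  CHAR(10)
    ) END-EXEC.

01  DCLEMPTABLE.
    10 EMPID          PIC S9(9) USAGE COMP.
    10 EMPNAME        PIC X(30).
    10 QUALIFICATION  PIC X(10).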
DB2 need not be up during pre-compilation. As the pre-compiler just extracts the
DB2 statements and produces the DBRM, it does not need a DB2 connection. But the DB2
region should be up for binding, as the optimizer builds the access path at this stage from
the catalog information.
DB2 data type to COBOL host variable mapping:
DB2 Data Type   Internal Bytes   COBOL Host Variable             Host Variable Length
SMALLINT        2                PIC S9(4) COMP                  2
INTEGER         4                PIC S9(9) COMP                  4
DECIMAL(P,Q)    (P+Q)/2 + 1      PIC S9(P)V9(Q) COMP-3           (P+Q)/2 + 1
FLOAT           8                COMP-2                          8
DATE            4                PIC X(10)                       10  (yyyy-mm-dd)
TIME            3                PIC X(08)                       8   (hh.mm.ss)
TIMESTAMP       10               PIC X(26)                       26  (yyyy-mm-dd-hh.mm.ss.nnnnnn)
CHAR(N)         N                PIC X(N)                        N
VARCHAR(N)      N + 2            01 WS-COLUMN.                   N + 2
                                    49 WS-COLUMN-LEN  PIC S9(4) COMP.
                                    49 WS-COLUMN-TEXT PIC X(N).
SQLCA
SQLCA
SQLCA (the SQL Communication Area) is a structure, brought into the program with
EXEC SQL INCLUDE SQLCA END-EXEC, through which DB2 returns feedback about every
executed SQL statement. The key field is SQLCODE: zero means success, +100 means
no row was found, and a negative value indicates an error.
Data Manager (DM):
It performs all the normal access method functions - search, retrieval, update,
etc. It invokes other system components in order to perform detailed functions such as
locking, logging, etc.
Buffer Manager (BM):
It is responsible for transferring data between external storage and virtual
memory. It uses sophisticated techniques such as read-ahead buffering and look-aside
buffering to get the best performance out of the buffer pools under its control and
to minimize the amount of physical I/O actually performed.
Catalog and Directory are also part of the Database Services component.
SQL
SQL is a fourth-generation language. The idea of a fourth-generation language is
"state what you need and the system gets it done for you"; there is no need to spell
out the method for getting it.
For example, suppose I want the list of employees currently working in Malaysia. If the
data is in a file, then I have to declare, define and open the file, read all
the records into working storage one by one, display those whose location is
Malaysia, and finally close the file.
If the data is in DB2, I simply write a query (SELECT * FROM table
WHERE LOCATION = 'Malaysia') and the DB2 optimizer does everything for me, identifying the
best access path. No worries about DECLARE, DEFINE, OPEN, READ and the rest.
That's the power of a fourth-generation language, and that is what makes the programmer's life
easy!!
SQL has three kinds of statements.
DDL - Data Definition Language statements: CREATE, ALTER, DROP
DML - Data Manipulation Language statements: SELECT, INSERT, UPDATE, DELETE
DCL - Data Control Language statements: GRANT, REVOKE
DDL-CREATE
This statement is used to create DB2 objects.
General Syntax
CREATE object object-name parameters
Table creation Sample
CREATE TABLE table-name
(Column definitions,
Primary Key definition,
Foreign Key definition,
Alternate Key definition,
LIKE table-name /View-name,
IN DATABASE-NAME.TABLESPACE-NAME)
DDL-Column Definition
Define all the columns using the syntax:
Column-name Data-type Length (NULL/NOT NULL/ NOT NULL WITH DEFAULT)
Data-types are already explained in DCLGEN section.
Column-name can be of maximum length 30. It should start with an alphabet and
it can contain numbers and underscore.
NULL means UNKNOWN, if the value is not supplied during an insertion of a row,
then NULL will be inserted into this column.
NOT NULL means, value for this column should be mentioned when inserting a
row in the table. NOT NULL with DEFAULT means, if the value for this column is
not supplied during an insertion, then based on type of the column default values
will be moved into the table. Default values are listed in the below table.
Data Type                       Default Value
CHAR                            Spaces
VARCHAR                         Empty string (string of length 0)
DATE, TIME, TIMESTAMP           CURRENT DATE / CURRENT TIME / CURRENT TIMESTAMP
INTEGER, SMALLINT, DECIMAL      Zero
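Putting the column-definition syntax and the defaults together, a minimal sketch of a table definition might look like this (the table and column names are illustrative, and EDSDB.TRG007TS is borrowed from the utility examples later in this material):

CREATE TABLE EMPTABLE
    (EMPID          INTEGER       NOT NULL,
     EMPNAME        CHAR(30)      NOT NULL,
     QUALIFICATION  CHAR(10),
     HIREDATE       DATE          NOT NULL WITH DEFAULT,
     SALARY         DECIMAL(9,2)  NOT NULL WITH DEFAULT,
     PRIMARY KEY (EMPID))
    IN EDSDB.TRG007TS;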
Delete Rule
Delete rule defines what action needs to be taken on child table when there is
a deletion in primary table. The three possibilities are below:
CASCADE: Child table rows referencing parent table will also be deleted.
RESTRICT: Cannot delete any parent row when there is reference in child table.
SET NULL: SET the foreign key value as NULL when the respective primary row is
deleted.
Insert and Update rules cannot be defined. But they can be implemented using
TRIGGERS.
DDL-Alternate Key Definition
Candidate keys that are not defined as primary key are called alternate keys.
This can be defined along with column definition or separately after the definition of
all the columns using the keyword UNIQUE. Alternate key definitions CANNOT be
added later using ALTER command.
Syntax 1: COLUMN1 CHAR(10) UNIQUE NOT NULL;
Syntax 2: UNIQUE(Column1)
DDL- Adding Constraints
A check constraint allows you to specify the valid value range of a column in
the table. For example, you can use a check constraint to make sure that no salary in
the table is below Rs10000, instead of writing a routine in your application to
constrain the data.
CHECK clause and the optional clause CONSTRAINT (for named check
constraints) are used to define a check constraint on one or more columns of a table.
A check constraint can have a single predicate, or multiple predicates joined by AND
or OR. The first operand of each predicate must be a column name, the second
operand can be a column name or a constant, and the two operands must have
compatible data types.
Syntax 1: ADD CHECK (WORKDEPT BETWEEN 1 and 100);
Syntax 2: ADD CONSTRAINT BONUSCHK CHECK (BONUS <= SALARY);
Although CHECK (column IS NOT NULL) is functionally equivalent to NOT NULL, it
wastes space and is not useful as the only content of a check constraint. However,
if you may later want to remove the restriction that the data be non-null, define
the restriction using the CHECK ... IS NOT NULL clause, since a check constraint can be
dropped whereas a NOT NULL attribute cannot.
DDL-Index creation
Table spaces should be created before creating the table, but index spaces
are automatically created when the index is created. The CREATE INDEX command therefore provides
optional parameters USING STOGROUP, PRIQTY, SECQTY, FREEPAGE, PCTFREE,
etc. for the allocation of the index space. Usually we don't mention these parameters
and allow the system to allocate it in the right place with respect to the table space.
CREATE [UNIQUE] INDEX index-name ON table-name (column-name [ASC/DESC], ...)
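A concrete sketch, explicitly specifying the optional space parameters (the index, table and storage group names are assumptions):

CREATE UNIQUE INDEX XEMPTABLE1
    ON EMPTABLE (EMPID ASC)
    USING STOGROUP SYSDEFLT
    PRIQTY 48 SECQTY 48
    PCTFREE 10;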
DDL-ALTER
CREATE TABLE YDEPT
LIKE DSN8410.DEPT;
CREATE UNIQUE INDEX YDEPTX
ON YDEPT (DEPTNO);
ALTER TABLE YDEPT
PRIMARY KEY(DEPTNO);
Using the ALTER statement, alternate keys cannot be defined. Apart from that, we
can add columns, constraints and checks, and DROP the primary key.
When a new column is added with NOT NULL WITH DEFAULT, the value of the
column for all the rows that already exist in the table is filled based on the type of the
column:
1. Numeric        - Zero
2. CHAR, VARCHAR  - Spaces
3. DATE           - 01/01/0001
4. TIME           - 00:00 AM
Using the ALTER command, we can extend the length of CHAR/VARCHAR items and switch
the data type of a column within character data types (CHAR/VARCHAR), within numeric
data types (SMALLINT, INTEGER, REAL, FLOAT, DOUBLE, DECIMAL) and within graphic
data types (GRAPHIC, VARGRAPHIC).
ALTER TABLE DSN8410.EMP
ALTER COLUMN WORKDEPT
SET DATATYPE CHAR(6);
DDL-DROP
DROP deletes the table definition in addition to all the rows in it. You also lose
all synonyms, views, indexes, referential and check constraints associated with that
table. Plans and Packages are deleted using the command FREE.
DROP Object Object-name.
DROP TABLE table-name
FREE PACKAGE(COLL1.*) - Deletes all the packages belonging to collection COLL1.
Data Manipulation Language statements
DML-SELECT
The SELECT statement is the heart of all queries and is used for
retrieving information. As the primary statement for retrieving data, it is also the
most complex DML statement. Six different clauses can be
used with a SELECT statement, and each of these clauses has its own set of
predicates: FROM, WHERE, GROUP BY, HAVING, UNION and ORDER BY.
Syntax: SELECT columns FROM tables
WHERE conditions
GROUP BY columns
HAVING conditions
ORDER BY columns
Up to 750 columns can be selected. 15 sub-queries can be coded in addition to one
main query.
DML-SELECT-GROUP BY
If a column you specify in the GROUP BY clause contains null values, DB2
considers those null values to be equal. Thus, all nulls form a single group.
You can also group the rows by the values of more than one column.
DML-SELECT-HAVING (Subjecting Group to conditions)
Use HAVING to specify a search condition that each retrieved group must
satisfy. The HAVING clause acts like a WHERE clause for groups, and can contain the
same kind of search conditions you specify in a WHERE clause.
The search condition in the HAVING clause tests properties of each group
rather than properties of individual rows in the group.
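For example, a query of this shape (using the DSN8410.EMP sample table referenced elsewhere in this material; the threshold is illustrative) groups employees by department and keeps only the groups whose average salary exceeds the threshold:

SELECT   WORKDEPT, AVG(SALARY)
FROM     DSN8410.EMP
GROUP BY WORKDEPT
HAVING   AVG(SALARY) > 20000;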
DML-SELECT-UNION and UNION ALL
Using the UNION keyword, you can combine two or more SELECT statements
to form a single result table. When DB2 encounters the UNION keyword, it processes
each SELECT statement to form an interim result table, and then combines the
interim result table of each statement. If you use UNION to combine two columns
with the same name, the result table inherits that name.
You can use UNION to eliminate duplicates when merging lists of values
obtained from several tables whereas UNION ALL retains duplicates.
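A sketch of the difference, assuming two hypothetical tables EMPTABLE and EMPTABLE_HIST with the same columns: UNION removes duplicate names from the merged list, whereas UNION ALL would keep them.

SELECT EMPNAME FROM EMPTABLE      WHERE LOCATION = 'Malaysia'
UNION
SELECT EMPNAME FROM EMPTABLE_HIST WHERE LOCATION = 'Malaysia';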
DML-SELECT-ORDER BY
The ORDER BY clause is used to sort and order the rows of data in a result
dataset by the values contained in the column(s) specified. Default is ascending
order (ASC). If the keyword DESC follows the column name, then descending order
is used. Integer can also be used in place of column name in the ORDER BY clause.
SELECT COL1, COL2, COL3 FROM TAB1 ORDER BY COL1 ASC, COL3 DESC
SELECT COL1, COL2, COL3 FROM TAB1 ORDER BY 1 ASC, 3 DESC
DML-SELECT-CONCAT
CONCAT is used for concatenating two columns with the string you want in between.
To concatenate the last name and first name with comma in between,
SELECT LASTNAME CONCAT ',' CONCAT FIRSTNME
FROM DSN8410.EMP;
COLUMN FUNCTION AND SCALAR FUNCTION
Column function receives a set of values from group of rows and produces a single
value. SCALAR function receives one value and produces one value.
COLUMN FUNCTION:
Function   Example
MAX        MAX(SALARY)
MIN        MIN(SALARY)
AVG        AVG(SALARY)
SUM        SUM(SALARY)
COUNT      COUNT(*)
1. DISTINCT can be used with the SUM, AVG, and COUNT functions. The selected
function operates on only the unique values in a column.
2. SUM and AVG cannot be used for non-numeric data types whereas MIN, MAX and
COUNT can.
SCALAR FUNCTION:
Function      Example
CHAR          CHAR(HIREDATE)
DATE          DATE('1977-12-16')
DAY           DAY(DATE1 - DATE2)
DAYS          DAYS('1977-12-16') - DAYS('1979-11-02') + 1
MONTH         MONTH(BIRTHDATE) = 16
YEAR          YEAR(BIRTHDATE) = 1977
DECIMAL       DECIMAL(AVG(SALARY),8,2)
FLOAT         FLOAT(SALARY)/COMM
HEX           HEX(BCHARCOL)
INTEGER       INTEGER(AVG(SALARY) + 0.5)
HOUR          HOUR(TIMECOL) > 12
MINUTE        MINUTE(TIMECOL) > 59
SECOND        SECOND(TIMECOL) > 59
MICROSECOND   MICROSECOND(TIMECOL) > 12
TIME          TIME(TSTMPCOL) < '12:00:00'
TIMESTAMP     TIMESTAMP(DATECOL, TIMECOL)
LENGTH        LENGTH(ADDRESS)
SUBSTR        SUBSTR(FSTNAME,1,3)
VALUE         VALUE(SMALLINT1,100) + SMALLINT2 > 1000
CURSOR
A CURSOR is useful when more than one row of a table is to be processed. It can
be loosely compared with the sequential read of a file. Usage of a cursor involves four steps.
1. DECLARE statement.
This statement DECLARES the cursor. It is not an executable statement.
Syntax: DECLARE cursor-name CURSOR [WITH HOLD] FOR your-query
[FOR UPDATE OF column1, column2 | FOR FETCH ONLY]
2. OPEN statement.
This statement just readies the cursor for retrieval. If the query has an ORDER
BY or GROUP BY clause, the result table is built at this point. However it does not assign values
to the host variables.
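The remaining two steps of the standard cursor sequence are FETCH (which assigns one row at a time to the host variables) and CLOSE. A minimal embedded-SQL sketch of the full sequence, assuming a hypothetical EMPTABLE, matching working-storage fields and a placeholder paragraph 2000-PROCESS-ROW:

EXEC SQL DECLARE EMPCUR CURSOR FOR
     SELECT EMPID, EMPNAME
     FROM   EMPTABLE
     WHERE  QUALIFICATION = :WS-QUALIFICATION
END-EXEC.

EXEC SQL OPEN EMPCUR END-EXEC.

PERFORM UNTIL SQLCODE NOT = 0
    EXEC SQL FETCH EMPCUR
         INTO :WS-EMPID, :WS-EMPNAME
    END-EXEC
    IF SQLCODE = 0
       PERFORM 2000-PROCESS-ROW
    END-IF
END-PERFORM.

EXEC SQL CLOSE EMPCUR END-EXEC.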
DML-INSERT
INSERT statement is used to insert rows to the table. NOT NULL Column
values should be supplied during INSERT. Otherwise INSERT would fail.
1.Specify values for columns of single row to insert.
INSERT INTO YDEPT (DEPTNO, DEPTNAME, MGRNO, ADMRDEPT, LOCATION)
VALUES ('E31', 'DOCUMENTATION', '000010', 'E01', ' ');
INSERT INTO YEMP
VALUES ('000400', 'RUTHERFORD', 'B', 'HAYES', 'E31',
'5678', '1983-01-01', 'MANAGER', 16, 'M', '1943-07-10', 24000,
500, 1900);
You can name all the columns or omit the column list. When you list the column names, you
must specify their corresponding values in the same order as in the list of column
names.
2.Mass Insert. Another table or view contains the data for the new row(s).
INSERT INTO TELE
SELECT LASTNAME, FIRSTNME, PHONENO
FROM DSN8410.EMP
WHERE WORKDEPT = 'D21';
If the INSERT statement is successful, SQLERRD(3) is set to the number of
rows inserted.
Dependent table insertion: (table with a foreign key)
Each non-null value you insert into a foreign key column must be equal to
some value in the primary key (the primary key is in the parent table). If any field in
the foreign key is null, the entire foreign key is considered null. If you drop the index
enforcing the primary key of the parent table, an INSERT into either the parent table
or dependent table fails.
DML-UPDATE
The UPDATE statement is used to modify the data in a table. The SET clause
names the columns you want to update and provides the values you want them
changed to. The condition in the WHERE clause locates the row(s) to be updated. If
you omit the WHERE clause, DB2 updates every row in the table or view with the
values you supply.
If DB2 finds an error while executing your UPDATE statement (for instance, an
update value that is too large for the column), it stops updating and returns error
codes in the SQLCODE and SQLSTATE host variables and related fields in the SQLCA.
No rows in the table change (rows already changed, if any, are restored to their
previous values). If the UPDATE statement is successful, SQLERRD(3) is set to the
number of rows updated.
Syntax:
UPDATE table-name SET column1 =value1, column2 =value2 [WHERE condition];
DML-DELETE
DELETE statement is used to remove entire rows from a table. The DELETE
statement removes zero or more rows of a table, depending on how many rows
satisfy the search condition you specified in the WHERE clause. If you omit a WHERE
clause from a DELETE statement, DB2 removes all the rows from the table or view
you have named. The DELETE statement does not remove specific columns from the
row. SQLERRD(3) in the SQLCA contains the number of deleted rows.
Syntax: DELETE FROM table-name [WHERE CONDITION]
DML NULLS
One of Codd's 12 rules for a relational database system is that NULL values are
supported for representing missing information in a systematic way, irrespective of
the data type. DB2 supports NULL values.
NULL IN SELECT STATEMENT
NULL is defined as Unknown value in RDBMS terminology.
Two unknown values cannot be equal. As EQUAL can be used only for known
values, COLUMN1 = NULL is meaningless. If NULL needs to be checked in the WHERE
predicate, WHERE COLUMN1 IS NULL is the right way of coding it. If a GROUP BY is done
on a NULL column, then all the rows whose value is unknown (NULL) form one
GROUP.
Example, If QUALIFICATION is a column defined with NULL attribute in
EMPTABLE, then SQL for retrieving all the employees whose QUALIFICATION is not
yet known is:
SELECT EMPNAME,QUALIFICATION FROM EMPTABLE
WHERE QUALIFICATION IS NULL.
High-level languages don't have any NULL concept. So if the column is NULL,
the host variable corresponding to that column should be set to zero (numeric) or
spaces (non-numeric). This can be done in the program, but to do so we
must know whether the column is NULL or not. NULL indicators are used for this
purpose. A NULL indicator is a 2-byte field (S9(4) COMP). A negative value (-1) in the NULL
indicator field indicates that the column is NULL.
EXEC SQL
SELECT QUALIFICATION
INTO :WS-QUALIFICATION :WS-QUALIFICATION-NULL
FROM EMPTABLE
WHERE EMPID=2052
END-EXEC
IF SQLCODE = 0
PERFORM 100-CHECK-FOR-NULLS
.
100-CHECK-FOR-NULLS.
IF WS-QUALIFICATION-NULL < 0 THEN
MOVE SPACES TO WS-QUALIFICATION
END-IF.
If the QUALIFICATION of EMPID = 2052 is NULL and you didn't code a null
indicator, then the query will fail with SQLCODE -305, whose error message
is THE NULL VALUE CANNOT BE ASSIGNED TO OUTPUT
HOST VARIABLE NUMBER position-number BECAUSE NO INDICATOR VARIABLE IS
SPECIFIED.
This kind of NULL check (100-CHECK-FOR-NULLS) is highly recommended for
numeric fields. Otherwise the program may abnormally end at some place down the
line, when the field is used for some arithmetic operation.
Instead of a NULL indicator, the VALUE function can also be used.
SELECT VALUE(QUALIFICATION, ' ')
INTO :WS-QUALIFICATION
FROM EMPTABLE
WHERE EMPID = 2052
If QUALIFICATION is NULL, then spaces will be moved to WS-QUALIFICATION.
NULL IN INSERT/UPDATE STATEMENT
The concept is same. DB2 informs the NULL presence thru the NULL indicator
to the programming language. In similar way, the programming language should
inform the NULL to DB2 by NULL indicator.
Just before the INSERT or UPDATE, move -1 to the NULL indicator variable and use
this variable along with the host variable for the column to be inserted/updated.
Once -1 is moved to the null indicator, then independent of any value present in the host
variable, NULL will be loaded. In the following statement, though 'B.E' is moved to the host
variable, it will not be loaded into the table because the null indicator holds -1. The
indicator should have been set to 0 for the value to be loaded.
MOVE -1 TO WS-QUALIFICATION-NULL.
MOVE 'B.E' TO WS-QUALIFICATION.
EXEC SQL
UPDATE EMPTABLE
SET QUALIFICATION = :WS-QUALIFICATION :WS-QUALIFICATION-NULL
WHERE EMPID=2052
END-EXEC.
It is common feeling that NULL provides more trouble than benefit. So it is
always better to specify NOT NULL or NOT NULL WITH DEFAULT for all the columns.
Values in null indicator
A negative value in null indicator field implies that the column is NULL and a
positive value or ZERO in null indicator implies that the column is NOT NULL. Value of
2 in null indicator indicates that the column has been set to null as a result of a
data conversion error.
DCL-GRANT and REVOKE commands
Data Control Language consists of commands that control the user access to
the database objects. The Database Administrator (DBA) has the power to give
(GRANT) and take (REVOKE) privileges to a specific user, thus giving or denying
access to the data.
Syntax:
GRANT access ON object TO USER-ID | PUBLIC [WITH GRANT OPTION].
REVOKE access ON object FROM USER-ID | PUBLIC.
Example:
GRANT SELECT, UPDATE ON EMPTABLE TO SMSXL86.
GRANT ALL ON EMPTABLE TO SMSXL86.
REVOKE CREATE TABLE, CREATE VIEW FROM SMSXL86.
REVOKE INSERT ON EMPTABLE FROM SMSXL86.
COMMIT and ROLLBACK
All the changes made to the database since the initiation of the transaction are
made permanent by COMMIT. ROLLBACK brings the database back to the last
committed point.
Syntax:
EXEC SQL COMMIT END-EXEC
EXEC SQL ROLLBACK END-EXEC
DB2 Restart Logic:
Usually there is only one COMMIT or ROLLBACK, just before the termination of
the transaction. But that is not always the preferred approach.
If a program is updating one million records in a table space and it
abnormally ends after processing ninety thousand records, then the
ROLLBACK brings the database back to the point at which the transaction started.
On restart, the program has to process those ninety thousand successful-but-not-committed
updates once again. This repeated cost is the result of bad design of
the application.
If the program is expected to do huge updates, then the commit frequency has to
be chosen properly. Say, after careful analysis, we design our
application with a COMMIT frequency of one thousand records. If the program abnormally ends
while processing the 1500th record, then the restart should not start from the first record but
from the 1001st record. This is done using restart logic.
Create a table called RESTARTS with a dummy record, and insert one record into it
for every commit, recording the key and the commit count. This insertion should happen
just BEFORE the COMMIT is issued.
The first paragraph of the procedure should read the last record of the table and
skip the records that have already been processed and committed (1,000 in the previous
case). After processing all the records (one million), delete the entries in the
table and issue one final COMMIT.
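A minimal sketch of this restart logic in embedded SQL, assuming a hypothetical RESTARTS table with columns JOB_NAME, LAST_KEY and COMMIT_CNT (none of these names come from the text):

* Before each COMMIT, record how far processing has progressed.
     EXEC SQL
          INSERT INTO RESTARTS (JOB_NAME, LAST_KEY, COMMIT_CNT)
          VALUES ('DAILYUPD', :WS-LAST-KEY, :WS-COMMIT-CNT)
     END-EXEC.
     EXEC SQL COMMIT END-EXEC.

* On restart, find the last committed position and skip up to it.
     EXEC SQL
          SELECT MAX(LAST_KEY)
          INTO   :WS-RESTART-KEY :WS-RESTART-NULL
          FROM   RESTARTS
          WHERE  JOB_NAME = 'DAILYUPD'
     END-EXEC.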
JOINS
Most of the time, the complete information about an entity is retrieved from
more than one table. Retrieving information from more than one table can be done
in two ways: one method is JOIN and the other is UNION. We have already
discussed UNION; it is used to get information about more entities - in other
words, it returns more rows. JOIN is used to get more detailed information about
entities - in other words, it returns more columns.
There are two kinds of JOIN. An inner join returns rows from two or more tables
based on matching values. An outer join returns rows from two or more tables
independent of whether they match or not.
JOIN: INNER
Two or more tables are joined together using columns having equal values
between them.
EMPTABLE
EMP_ID   EMP_NAME   DESIG
100      MUTHU      TL
101      DEVI       SSE

SALTABLE
DESIG    SALARY
SSE      400000
SE       300000

Result
EMP_NAME   SALARY
DEVI       400000
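An inner join of this shape can be written as follows (a sketch mirroring the outer-join example below):

SELECT EMP_NAME, SALARY
FROM   EMPTABLE INNER JOIN SALTABLE
ON     EMPTABLE.DESIG = SALTABLE.DESIG;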
JOIN:RIGHT OUTER JOIN
The clause RIGHT OUTER JOIN includes rows from the table named after it
where the values in the joined columns are not matched by values in the
joined columns of the table named before it.
SELECT EMP_NAME, SALARY FROM EMPTABLE RIGHT OUTER JOIN SALTABLE ON
EMPTABLE.DESIG = SALTABLE.DESIG
Result
EMP_NAME   SALARY
DEVI       400000
----       300000
To use an index:
1. One of the predicates for the SQL statement must be index-able.
2. One of the columns (in any index-able predicate) must exist as a column in an
available index.
There are several ways in which DB2 can use an index; the main ones are described
below.
Example: The primary key of the table TASKTAB is a composite key with columns
CLIENT_ID, PROJECT_ID, MODULE_ID.
Direct Index Look-Up:
Values are provided for each column in the index. In the above example, if
values for all three columns are given in the WHERE clause, then there will be
a direct index look-up.
SELECT MODULE_ID, MODULE_STATUS FROM TASKTAB
WHERE CLIENT_ID = 100 AND PROJECT_ID=10 AND MODULE_ID = 1
If only CLIENT_ID and MODULE_ID is given then direct index look-up is not possible.
Index scan would occur.
Matching Index Scan or Absolute Positioning
It starts with the root page and works down to a leaf page in much the same
manner as a direct index look-up does. However, since the full key is unavailable, DB2
must scan the leaf pages using the values it does have, until all matching values
have been retrieved. As it starts at the root page, the value of the first column(s) must be
given.
SELECT MODULE_ID, MODULE_STATUS FROM TASKTAB
WHERE CLIENT_ID = 100 AND PROJECT_ID=10
The query uses a matching index scan and returns all the module IDs and module
statuses for the given client and project.
Non-matching index scan or Relative positioning
A non-matching index scan begins with the first leaf page in the index and
sequentially scans subsequent leaf pages, applying the available predicates. If an
ORDER BY or GROUP BY is coded, then the optimizer prefers a non-matching index scan
over a table space scan.
SELECT CLIENT_ID, MODULE_ID, MODULE_STATUS FROM TASKTAB
WHERE PROJECT_ID=10
ORDER BY MODULE_ID
Index-only access
If all the data required by the query is available in index, then DB2 can avoid
reading the data page and perform non-matching index scan.
This special type of non-matching index scan is called index-only access.
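For example, with the composite index on (CLIENT_ID, PROJECT_ID, MODULE_ID) described above, a query that references only indexed columns can be satisfied entirely from the index leaf pages:

SELECT CLIENT_ID, PROJECT_ID, MODULE_ID
FROM   TASKTAB
WHERE  PROJECT_ID = 10;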
EXPLAIN and PLAN_TABLE
The EXPLAIN statement records the access path chosen by the optimizer in PLAN_TABLE.
The main PLAN_TABLE columns are:
Column      Meaning
QUERYNO     Integer value assigned by the user issuing the EXPLAIN, or by DB2.
QBLOCKNO    Integer identifying the level of sub-query or union in the SQL
            statement. The first sub-select is numbered 1, the second 2, and so on.
APPLNAME    Name of the PLAN or PACKAGE, based on the BIND stage of the static
            SQL. It will be spaces for a SPUFI request.
PROGNAME    Name of the program in which the SQL statement is embedded.
            DSQIESQL for SPUFI EXPLAIN.
Other commonly referenced columns are PLANNO, METHOD, ACCESSTYPE, MATCHCOLS,
INDEXONLY, PREFETCH, PARALLELISM MODE, QBLOCK_TYPE, JOIN_TYPE and TIMESTAMP.
DYNAMIC SQL
Static SQL is embedded in the application program, and the only values that can
change are the values of the host variables in the predicates. Dynamic SQL is
characterized by the capability to change columns, tables and predicates during a
program's execution.
Advantages: flexibility and the best access path (as the bind happens at run
time using the latest RUNSTATS information).
Disadvantage: slower, as the run time is bind time + execution time.
Types of Dynamic SQL
1.EXECUTE IMMEDIATE:
Move the SQL statement to be executed into the host variable and issue
execute immediate command.
01 WS-HOST.
   49 WS-HOST-LENGTH PIC S9(04) COMP.
   49 WS-HOST-VAR    PIC X(60).

MOVE +60 TO WS-HOST-LENGTH
MOVE 'UPDATE EMPTABLE SET EMP_NAME = ''MUTHU'' WHERE EMP_NO = 2052'
     TO WS-HOST-VAR
EXEC SQL EXECUTE IMMEDIATE :WS-HOST END-EXEC.
Disadvantages:
1.It cannot be used for SELECT.
2.Executable form of the SQL will be deleted once it is executed.
2.EXECUTE WITH PREPARE:
This is equivalent to the first form, but here the SQL is prepared before execution,
so the second disadvantage of EXECUTE IMMEDIATE is avoided.
Form: 1
MOVE +60 TO WS-HOST-LENGTH
MOVE 'UPDATE EMPTABLE SET EMP_NAME = ''MUTHU'' WHERE EMP_NO = 2052'
     TO WS-HOST-VAR
EXEC SQL PREPARE RUNFORM FROM :WS-HOST END-EXEC.
EXEC SQL EXECUTE RUNFORM END-EXEC.
Form: 2
Parameter markers can be used in place of constant values. This acts like
placeholder for the host variables in the SQL.
MOVE +60 TO WS-HOST-LENGTH
MOVE 'UPDATE EMPTABLE SET EMP_NAME = ''MUTHU'' WHERE EMP_NO = ?'
     TO WS-HOST-VAR
EXEC SQL PREPARE RUNFORM FROM :WS-HOST END-EXEC.
MOVE 2052 TO WS-EMP-NUM
EXEC SQL EXECUTE RUNFORM USING :WS-EMP-NUM END-EXEC.
Disadvantage:
Select statement is not supported.
Transaction-A: Read row R1.
Transaction-B: Read row R1.
Locking
Locking solves the concurrency issues described above. Locking prevents
another transaction from accessing data that is being changed by the current
transaction. The three main attributes of a lock are mode, size and duration.
Mode of the lock:
It specifies what access to the locked object is permitted to the lock owner
and to any concurrent application process.
Exclusive (X) - The lock owner can read or change the locked page. Concurrent
processes cannot acquire ANY lock on the page, nor can they access the locked page.
Update (U) - The lock owner can read but cannot change the locked page. However,
the owner can promote the lock to X and update. Processes concurrent with the U
lock can acquire an S lock and read the page, but another U lock is not possible.
Share (S) - The lock owner and any concurrent processes can read but not change
the locked page. Concurrent processes can acquire S or U locks on the page.
The above three are with respect to pages. Three more kinds of locks are possible at
the table/table space level: Intent Share (IS), Intent Exclusive (IX) and
Share/Intent Exclusive (SIX).
SIZE of the lock
It can be ROW, PAGE, TABLE or TABLESPACE.
Duration of the lock
It is the length of time the lock is held. It varies according to when the lock is
acquired and released. This is controlled by ACQUIRE, RELEASE and ISOLATION
LEVEL parameters of BIND.
LOCK Issues
Locking is introduced to avoid concurrency issues, but it brings suspension,
timeout and deadlock problems of its own. Still, locking is needed for data integrity.
Suspension.
IRLM suspends an application process if it requests a lock on an object that is
already owned by another application process and cannot be shared. The suspended
process is said to be in Wait Stage for the lock. The process resumes when the lock
is available.
Timeout.
When the object stays in the wait stage for more than a predefined time, the process
(program) is terminated by DB2 with SQL return code -911 or -913, which are meant for
TIMEOUT. The IRLMRWT parameter of DSNZPARM determines the duration of the wait time.
Deadlock.
When two or more transactions are in simultaneous wait stage, each waiting
for one of others to release a lock before it can proceed, DEADLOCK occurs.
If deadlock occurs, transaction manager of DB2 breaks it by terminating one
of the transactions and allows the other to process. Later, the application
programmer will restart the terminated transaction.
Lock and Latch
A true lock is handled by DB2 using an external component called IRLM.
However, when it is practical, DB2 tries to lock pages without going to IRLM.
This type of lock is known as LATCH. Latches are internally set by DB2 without going
to IRLM.
Initially latches are used to lock only DB2 index pages and internal DB2
resources. DB2 V3 uses latching more frequently for data pages also. Latches are
preferred when a resource serialization is required for a short time. Locks and
Latches guarantee data integrity at the same level.
Advantage of Latch over Lock:
1. Cross-memory service calls to the IRLM are eliminated.
2. A latch requires about one third the number of instructions of a lock.
Lock Promotion
When a lock is promoted from a lower level to an upper level, it is called lock
promotion. DB2 acquires a U lock to update a row, but at the same time concurrent
processes can access the row under S locks. To perform the update it needs an X lock, and
to get the X lock it waits for the concurrent processes to release their S locks. It then
promotes the current lock from U to X, and this is called LOCK PROMOTION ON MODE
(U - update pending, X - update in progress).
When the number of page locks exceeds a predefined limit, the lock is promoted to
the table or table space level, and this is LOCK PROMOTION ON SIZE.
Lock Strategy
DB2 decides the lock to be applied based on the following:
1. Lock size declared at the time of table space creation.
2. Type of SQL statement SELECT or UPDATE
3. Explicit LOCK statement in the program.
4. ACQUIRE and RELEASE option chosen by user at the BIND time.
5. Isolation level specified at bind time.
6. The NUMLKTS and NUMLKUS limits.
Normalization
Data Normalization describes the process of designing a database and
organizing data to take best advantage of relational database principles. It is the
process of putting one fact in one appropriate place. This optimizes the updates at
the expense of retrievals.
When a fact is stored in only one place, retrieving many different but related
facts usually requires going to many different places. This tends to slow the retrieval
process but update is easy and faster as the fact you are updating exists in only one
place.
The process involves five levels; tables are usually normalized up to the third level.
The guiding rule through third normal form is that the values in a row must be dependent
on the key, the whole key, and nothing but the key - "so help me Codd".
In the fourth level, multi-valued dependencies are removed and in the fifth level,
remaining anomalies are removed.
De-normalization
De-normalization is the process of putting one fact in more than one place. It
is the reverse process of normalization. This will speed up the retrieval process at the
expense of data modification. De-normalization is not a bad decision when a
completely normalized design is not performing optimally.
Information about table spaces, tables and indexes is stored in the following
catalog tables:
SYSTABLESPACE, SYSTABLES, SYSCOLUMNS, SYSRELS, SYSINDEXES, SYSTABLEPART,
SYSTABAUTH, SYSCOLAUTH, SYSFIELDS, SYSLINKS, SYSINDEXPART, SYSCOLDIST,
SYSFOREIGNKEYS, SYSKEYS, SYSCOLDISTSTATS, SYSTABSTATS, SYSINDEXSTATS
Triggers (Available in version 6)
A trigger is a piece of code that is executed in response to a data modification
statement (INSERT/UPDATE/DELETE). To be a bit more precise: triggers are event-driven
specialized procedures that are stored in, and managed by, the RDBMS. Each
trigger is attached to a single, specified table. Triggers can be thought of as an
advanced form of "rule" or "constraint" written using an extended form of SQL.
Triggers are therefore automatic, implicit and non-bypassable.
Nested Triggers
A trigger can also contain insert, update, and delete logic within itself.
Therefore, a trigger is fired by a data modification, but can also cause another data
modification, thereby firing yet another trigger. When a trigger contains insert,
update, and/or delete logic, the trigger is said to be a nested trigger. Most DBMSs,
however, place a limit on the number of nested triggers that can be executed within
a single firing event. If this were not done, it would be quite possible to have
triggers firing triggers ad infinitum until all of the data was removed from an entire
database!
Triggers cannot be attached to the following tables:
A System Catalog Table, PLAN_TABLE, STATEMENT_TABLE, DSN_FUNCTION_TABLE
View, Alias, Synonym, Any table with a three-part name
Trigger in place of RI
Triggers can be coded, in lieu of declarative RI, to support ALL of the RI rules.
We can specify only ON DELETE rules in the DDL statement. UPDATE and INSERT
rules of RI can be implemented using triggers.
Sample Trigger:
CREATE TRIGGER SALARY_UPDATE
BEFORE UPDATE OF SALARY
ON EMP
FOR EACH ROW MODE DB2SQL
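Purely for illustration, a complete trigger of this shape might read as follows (the REFERENCING names, the WHEN condition and the SIGNAL message are assumptions, not the author's original body):

CREATE TRIGGER SALARY_UPDATE
  NO CASCADE BEFORE UPDATE OF SALARY ON EMP
  REFERENCING OLD AS O NEW AS N
  FOR EACH ROW MODE DB2SQL
  WHEN (N.SALARY > O.SALARY * 1.5)
      SIGNAL SQLSTATE '75001' ('SALARY INCREASE EXCEEDS 50 PERCENT');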
DB2 Utilities
Utility                           Purpose
COPY                              It is used to create an image copy backup dataset for a complete
                                  table space or a single partition of a table space. The copy can be
                                  of type FULL or incremental. Based on the number of pages modified
                                  since the previous backup, prefer a FULL or an incremental image
                                  copy. Successful copy details are recorded in SYSIBM.SYSCOPY.
MERGECOPY                         It is used to create a full image copy from incremental image
                                  copies.
QUIESCE                           It is used to record a point of consistency for related
                                  application or system tables. Usually done before an image copy.
RECOVER                           It is used to restore DB2 table spaces and indexes to a specific
                                  point in time. It uses image copies and DB2 log information to
                                  roll the table space content back. RECOVER INDEX generates new
                                  index data from the current table space data - it is regeneration
                                  rather than restoration - so it normally follows RECOVER
                                  TABLESPACE, which is the actual restore.
LOAD                              It is used to accomplish bulk inserts to DB2 tables. It can add
(Data Organization)               to the existing rows (RESUME YES) or replace them (REPLACE).
REORG                             It is used to reorganize the data in a table space or index to
(Data Organization)               restore clustering order and reclaim space.
CHECK, REPAIR,                    Data consistency utilities used to check constraints and pending
REPORT, DIAGNOSE                  states, repair data, and report or diagnose problems.
RUNSTATS                          It is used to collect statistics about table spaces, tables,
(Catalog Manipulation             columns and indexes and record them in the DB2 catalog; the
Utility)                          optimizer uses these statistics to choose access paths.
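For illustration, a LOAD control card of the kind described in the next paragraph might look like this (the table name, field names and positions are assumptions):

//SYSIN    DD *
  LOAD DATA INDDN(SYSREC) LOG NO RESUME YES
    INTO TABLE EMPTABLE
    (EMP_ID    POSITION(1:25)   CHAR,
     EMP_NAME  POSITION(26:50)  CHAR)
/*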
The LOAD DATA statement describes the data to be loaded. SYSREC, coded on INDDN,
should be one of the DD names in the JCL and points to the actual input
dataset.
Data is loaded in the order it appears in the dataset. Automatic data conversion
between compatible data types is done if needed.
What are COPY PENDING and CHECK PENDING status?
COPY PENDING and CHECK PENDING status on a table space restrict any
updates to the table space.
If you LOAD or REORG the table space with the option LOG NO, then the table
space gets into COPY PENDING status, meaning an image copy is needed on
the table space.
If you LOAD the table space with the ENFORCE NO option, then the table space
gets into CHECK PENDING status, meaning the table space was loaded without
enforcing constraints. The CHECK utility needs to be run on the table space.
COPY Utility
COPY TABLESPACE EDSDB.TRG007TS
FULL YES |NO
YES FOR FULL IMAGE and NO FOR INCREMENTAL
SHRLEVEL CHANGE
MERGECOPY Utility
MERGECOPY TABLESPACE EDSDB.TRG007TS
NEWCOPY (YES/ NO)
YES - merges all the incremental copies with the available full image copy and
creates a new full image copy.
NO - merges all the incremental copies and prepares a single incremental image
copy.
REPAIR Utility
//DSNUPROC.SYSIN DD *
REPAIR OBJECT LOG YES SET TABLESPACE NPCBTC00.TASSEM01 NOCOPYPEND
/*
This card repairs the table space NPCBTC00.TASSEM01 and clears the copy pending
status on the table space.
RUNSTATS Utility
//SYSIN DD *
RUNSTATS TABLESPACE NPCBTC00.TASSEM01
TABLE (NPCBT9TS.TASTEM01)
COLUMN (NPA, TELNO, CLS_MO, TR_CRT_MO, TR_CRT_YY,
SVC1_TYP, RPT1_TYP, SRC1_TYP, CLR_MO, CLR_DY)
INDEX (ALL)
SHRLEVEL CHANGE
REPORT YES UPDATE ALL
/*
REBUILD/RECOVER/REORG and CHECK DATA Utility
REBUILD INDEX (ALL) TABLESPACE(NPCBT9DB.NPCS7011)
SORTKEYS
RECOVER TABLESPACE NPCTT9DB.NPCS9042
REORG TABLESPACE NPCTT9DB.NPCS7048
UNLDDN SYSRE
SORTDEVT SYSDA
SORTNUM 4
WORKDDN(SYSUT1,SRTOUT)
CHECK DATA TABLESPACE NPCTT9DB.NPCS7013
SORTDEVT SYSDA
SORTNUM 4
WORKDDN(SYSUT1,SRTOUT)
QUIESCE Card:
QUIESCE
TABLESPACE NPCBT900.NPCS1002
TABLESPACE NPCBT900.NPCS1004
START Database:
DSN SYSTEM(DB2X)
START DATABASE(NPCBT900) SPACENAM(NPCS1001) ACCESS(RO)
START DATABASE(NPCBT900) SPACENAM(NPCS1002) ACCESS(RO)
END
SQL Guidelines for better performance
AND A.IXCREATOR  = B.CREATOR
AND A.IXCREATOR  = B.TBCREATOR
AND B.TBNAME     = <TAB NAME>
AND B.TBCREATOR  = <TAB CREATOR>
AND B.UNIQUERULE = 'P'
-DIS DB(DSNDB01)
Displays all the components in the database DSNDB01 and their status.
-DIS DB(DSNDB01) SPACENAM(xxx)
Displays information about a specific table space/index space (xxx) of the database.
-STA DB(DSNDB01)
Starts the database DSNDB01.
-STA DB(DSNDB01) SPACENAM(xxx)
Starts a specific table space/index space of the database.
DSNESP01                          SPUFI                           SSID: DSN
===>
Enter the input data set name:          (Can be sequential or partitioned)
 1  DATA SET NAME..... ===> SMSXL86.DB2.SPUFI.INPUT(MEM1)
 2  VOLUME SERIAL..... ===>              (Enter if not cataloged)
 3  DATA SET PASSWORD. ===>              (Enter if password protected)

Enter the output data set name:         (Must be a sequential data set)
 4  DATA SET NAME..... ===> SMSXL86.DB2.SPUFI.OUTPUT

Specify processing options:
 5  CHANGE DEFAULTS... ===> Y            (Y/N - Display SPUFI defaults panel?)
 6  EDIT INPUT........ ===> Y            (Y/N - Enter SQL statements?)
 7  EXECUTE........... ===> Y            (Y/N - Execute SQL statements?)
 8  AUTOCOMMIT........ ===> Y            (Y/N - Commit after successful run?)
 9  BROWSE OUTPUT..... ===> Y            (Y/N - Browse output data set?)

For remote SQL processing:
10  CONNECT LOCATION   ===>

PRESS: ENTER to process    END to exit    HELP for more information
//*********************************************************************
//* INSTREAM PROCEDURE FOR COMPILATION LINKEDIT STEP
//*********************************************************************
//LKED     EXEC PGM=IEWL,PARM='XREF',
//              COND=((4,LT,COBCOMP),(4,LT,PRECOMP))
//SYSLIB   DD  DSN=SYS2.COB2LIB,DISP=SHR
//         DD  DISP=SHR,DSN=D2TB.DSNLOAD
//SYSLIN   DD  DSN=&&LOADSET,DISP=(OLD,DELETE)
//SYSLMOD  DD  DISP=OLD,DSN=&USER..TSTINDIA.LOADS(&MEM)
//SYSPRINT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSUT1   DD  SPACE=(1024,(50,50)),UNIT=SYSDA
//         PEND
//*
//*********************************************************************
//*
//STEP1    EXEC PGM=IKJEFT01,DYNAMNBR=20,COND=(4,LT)
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSTSIN  DD *
  DSN SYSTEM(DSNP)
  BIND PLAN(SOCBATCH)    +
       PKLIST(SOCCOLL.*) +
       ACTION(REPLACE)   +
       RETAIN            +
       ACQUIRE(USE)      +
       RELEASE(COMMIT)   +
       ISOLATION(CS)     +
       VALIDATE(BIND)    +
       NODEFER(PREPARE)  +
       CACHESIZE(0)
  END
//*
//JS01     EXEC PGM=IKJEFT01,DYNAMNBR=20,COND=(4,LT)
//STEPLIB  DD DSN=D2TB.DSNLOAD,DISP=SHR
//SYSTSIN  DD *
  DSN SYSTEM(DSNP)
  RUN PROGRAM(SOC02P01) PLAN(SOCBATCH) LIB('DP.GOLIB')
  END
//
CICS Control Tables
Table                              Function
PCT (Program Control Table)       The transactions and the main program associated with each
                                   transaction should be registered in the Program Control Table.
                                   The Task Control Program (KCP) refers to the PCT.
PPT (Processing Program Table)    All the CICS programs and maps have to be registered in the
                                   Processing Program Table. The Program Control Program (PCP)
                                   refers to the PPT.
FCT (File Control Table)          All the VSAM files used in the CICS programs have to be
                                   registered in the File Control Table. The File Control Program
                                   (FCP) refers to the FCT.
DCT (Destination Control Table)   Transient data queues should be predefined in the Destination
                                   Control Table. The Transient Data Program refers to the DCT.
TST (Temporary Storage Table)     If you want to recover temporary storage queues after a system
                                   crash, they should be registered in the Temporary Storage Table.
RCT (Resource Control Table)      If any DB2 commands are used in the program, then the PLAN
                                   should be registered here.
SNT (Sign-on Table)               User ID and password should be registered in the Sign-on Table.
TCT (Terminal Control Table)      All the terminals should be registered in the Terminal Control
                                   Table.
PLT (Program List Table)          All the programs that need to be automatically started during
                                   CICS start-up and shutdown should be listed in the Program List
                                   Table.
JCT (Journal Control Table)       Control information for system logs and journal files is stored
                                   in the Journal Control Table. The Journal Control Program refers
                                   to the JCT.
Certain native COBOL statements should not be used in CICS programs, for example:
File I/O statements
SORT statement
VS COBOL II allows STOP RUN, and that returns control back to CICS.
MAP DESIGN
Before getting into program design, let us see how maps (screens) are designed
in CICS. Most installations use tools like SDF for screen design. The tools
generate BMS macros for the designed screen. Here we briefly cover the BMS macros involved in
map design. BMS is an acronym for Basic Mapping Support.
MAP and MAPSET
A screen designed thru BMS is called MAP. One or a group of maps makes up
a MAPSET.
PHYSICAL MAP AND SYMBOLIC MAP
Physical maps control the screen alignment, sending and receiving of
CONSTANTS and data to and from a terminal. They are coded using BMS macros,
assembled and link edited into CICS LOAD LIBRARY. They ensure the device
independence in application programs.
Symbolic maps define the map fields used to store the VARIABLE data
referenced in the COBOL program. They are also coded using BMS macros, but after
assembling they are placed in a COPY library and then copied into CICS programs
using the COPY statement. They ensure device and format independence for the
application programs.
BMS Macro Coding Sheet
Since BMS map definitions are purely assembler macros, the following coding
convention must be maintained:
Columns 1-8   : Label
Columns 10-15 : Macro name
Columns 16-71 : Operands
Column 72     : Continuation
MODE.
IN for input maps like order entry screens, OUT for output maps like
display screens, and INOUT for input-output maps like update screens.
LANG.
The language (for example COBOL) in which the symbolic map copybook is to be
generated.
POS.
It has two arguments that decide the position of the field: line and column.
It is the position where the attribute byte of the field starts.
LENGTH.
The length of the field is coded here. It excludes the attribute character.
ATTRIB.
All the input and output fields are prefixed by one byte attribute field that defines
the attributes of the field. Some of the attributes are:
1. ASKIP/PROT/UNPROT - Mutually exclusive parameters that define the type of the
   field. UNPROT is coded for input and input-output fields. PROT is
   coded for output and stopper fields. ASKIP is coded for screen literals and
   skipper fields; the cursor automatically skips to the next field, so you
   cannot enter data into a skipper field.
2. NUM - 0-9, period and minus sign are the only allowed characters.
3. BRT/NORM/DRK - Mutually exclusive parameters that define the intensity of the
   field.
4. IC - Insert Cursor. The cursor will be positioned here when the map is
   displayed. If IC is specified for more than one field of a map, the cursor will
   be placed in the last such field.
5. FSET - Independent of whether the field is modified or not, it will be passed
   to the program (the MDT is set for the field).
JUSTIFY.
RIGHT is the default value. Code LEFT for numeric fields.
PICIN and PICOUT.
It defines the Picture clause of the symbolic map in COBOL and useful for
numeric field editing.
INITIAL.
The default value of the field is coded here. When the MAP is sent, this value
appears in the field. Constant information like the TITLE is coded using the INITIAL
keyword of the field definition. To avoid unnecessary data traffic, these constant
fields should be coded without a label. If there is no label, a
symbolic map entry is not generated for the field, as it is an unnamed field.
The following fields are generated in the symbolic map for every named field in the BMS:
Fieldname+L
It holds the length of the data entered by the user during the input operation.
Fieldname+I
It is the actual input field that carries the entered information. The value of
this field is X'00' if no data is entered. A space corresponds to X'40'.
Fieldname+A
It is the attribute byte. It defines the attributes of the field.
Fieldname+F
It is a flag byte. It is X'00' by default. It is set to X'80' if the user
modified the field but no data was sent, that is, when the user erased the field
(for example with ERASE EOF).
Fieldname+O
This field should be populated in the program before sending the screen for
display.
Please note that the words INPUT (RECEIVE) and OUTPUT (SEND) are with respect to
program.
Symbolic (Dynamic) Positioning.
Move -1 to the length field of the field where you want to place the cursor and send the
map with the CURSOR option. This is the device-independent method and is recommended
for controlling the cursor position dynamically.
Relative Positioning.
Send the map with the CURSOR(value) option. This will set the cursor at the
position coded by value, relative to the first column of the screen. This is a
device-dependent method of dynamic cursor positioning and is not recommended: when
there is a change in the screen layout, the program needs to be modified.
CURSOR(30) will place the cursor in the 30th column (first line, first column is
ZERO).
Cursor Position.
EIBCPOSN of DFHEIBLK contains the offset of the cursor position on the screen
when the data was transferred to the program from the terminal (relative to zero). It is
a half-word binary field.
To use keys instead of paging commands, the keys should be mapped to the paging
commands in the System Initialization Table (SIT). Usually there will be a mapping
of PF7 and PF8 to the page-up and page-down commands respectively. PURGE
MESSAGE is used to purge a message that has been accumulated but not yet sent.
Format of HEADER and TRAILER
If TEXT-HEADER is the variable used in the HEADER command and the header is
'ORDER INVENTORY - Page nn-', then the variable TEXT-HEADER should be defined
as follows in the working-storage section.
01 TEXT-HEADER.
   05 FILLER PIC S9(4) COMP VALUE 27.   (Length of the text.)
   05 FILLER PIC X VALUE '&'.           (Character identifying the automatic
                                          page number in the text.)
   05 FILLER PIC X.
Message Routing
A message can be routed to one or more terminals other than the direct
terminal with which the program has been communicating. The message eligible for
message routing is, a message constructed by the SEND MAP command with the
ACCUM option.
ROUTE command establishes the message routing environment and the SEND
PAGE command issued after ROUTE command sends the message to the destination.
Syntax:
EXEC CICS ROUTE [LIST(data-area)], [OPCLASS(data-area)],
[INTERVAL(hhmmss)|TIME(hhmmss)],
[TITLE(data-area)], [ERRTERM(name)]
END-EXEC.
LIST and OPCLASS name the route list and operator class codes respectively.
INTERVAL / TIME determines the actual timing of message delivery in the time
interval or the time respectively.
TITLE names the title field defined in the working storage section and
ERRTERM specify the terminal ID where the error message (if any) to be sent.
The route list is prepared in working storage using the following convention:
TTTTrrOOOsrrrrrr - the 8 bytes named r are declared as spaces, TTTT names the
terminal identifier as in the TCT, OOO specifies the operator id as in the SNT, and s is a
status flag. Code as many 16-byte fields as there are destinations, and indicate the end of
the route list with the declaration of a half-word binary field with a value of -1.
The message can be routed to every terminal at which a user of the specified
operator class is signed on. This is done using OPCLASS.
Program Design
Pseudo-conversational logic in CICS programs is achieved in three ways.
1. Multiple programs and multiple transaction IDs.
The conversational program is logically and physically divided into multiple
programs, and each program is registered with one transaction ID. Each program
issues a RETURN command with the next transaction ID, so after the screen is sent the
first program is terminated. Once the user fills in the information on the screen
and presses ENTER, the program of the next transaction ID (returned by the first
program) gets control, and the same process is followed for n programs.
Advantage: easy development and maintenance.
Disadvantage: possible code redundancy and the number of PCT and PPT entries.
2.Multiple Transactions and single program.
The first paragraph of the procedure division controls the flow to the different
sections of the program based on the transaction identifier stored in EIBTRNID.
Every section ends with a RETURN command naming the next transaction to be invoked on
completion of the screen information by the user.
Thus the conversational program is divided logically rather than physically.
Advantage: fewer PPT entries compared to case 1.
Disadvantage: the number of PCT entries is still the same as in case 1.
3. One transaction and one program (the preferred way).
2. Temporary storage and transient data queues, which can be used to share a huge
amount of information across the transactions. Refer to the Queues section for more details.
3. TCTUA, TWA, CWA.
TWA - Transaction Work Area - one per task; a work area associated with the
task.
TCTUA - Terminal Control Table User Area - a work area associated with a
terminal and defined as one per terminal in the TCT.
CWA - Common Work Area - a system work area defined by the system
programmer in the SIT (System Initialization Table).
We can use the TWA, TCTUA and CWA for sharing information across the
transactions.
External address ability using BLL CELLS of OS/VS COBOL
Base Locator for Linkage (BLL) is used to access memory outside your working
storage. If it is used with input commands like READ or RECEIVE, performance is
better, as we directly access the input buffer rather than a working storage item.
If you want to access system areas like the CWA, TCTUA or CSA that exist outside your
program, then you should use BLL.
Code Linkage section as below:
01 PARM-LIST.
   05 FILLER    PIC S9(08) COMP.
   05 TCTUA-PTR PIC S9(08) COMP.
   05 TWA-PTR   PIC S9(08) COMP.
01 TCTUA-AREA.
   05 TCTUA-INFORMATION PIC X(m).
01 TWA-AREA.
   05 TWA-INFORMATION PIC X(n).
The mapping between the pointers and the data areas is done in the linkage section as
follows:
The first FILLER points to the current 01 level, which is PARM-LIST itself;
TCTUA-PTR points to the next 01 level, which is TCTUA-AREA;
TWA-PTR points to the next 01 level, which is TWA-AREA.
In the procedure division, code the following:
EXEC CICS ADDRESS TWA(TWA-PTR) TCTUA(TCTUA-PTR) END-EXEC.
SERVICE RELOAD TCTUA-AREA
SERVICE RELOAD TWA-AREA
This will store the pointer of TWA in TWA-PTR and TCTUA in TCTUA-PTR.
As the mapping between BLL cells and areas is already done in linkage section,
TCTUA-AREA can access m bytes of TCTUA, which exist outside your working storage
and similarly TWA-AREA to access n bytes of TWA, which exist outside your working
storage area.
To ensure addressability, a SERVICE RELOAD statement should follow
whenever the content of a BLL cell is changed in any way. So there should be a
SERVICE RELOAD statement after the ADDRESS statement, as shown above.
Enhancement by VS COBOL2
The ADDRESS OF operator of VS COBOL II eases access to external items.
The above requirement can be met in VS COBOL II as follows; there is no need for
PARM-LIST or SERVICE RELOAD.
LINKAGE SECTION.
01 TCTUA-AREA.
05 TCTUA-INFORMATION PIC X(m).
01 TWA-AREA.
05 TWA-INFORMATION PIC X(n).
PROCEDURE DIVISION.
EXEC CICS ADDRESS TCTUA (ADDRESS OF TCTUA-AREA)
TWA(ADDRESS OF TWA-AREA)
END-EXEC.
ADDRESS OF maps the TCTUA-AREA with TCTUA exists outside your working
storage.
ASSIGN statement
It is used to access system values outside of the application program,
such as the lengths of the common areas and the user ID.
CWALENG, TCTUALENG and TWALENG are assigned to full-word binary working storage items;
USERID is assigned to an X(08) field.
EXEC CICS ASSIGN
USERID(WS-USERID)
END-EXEC
The above command fetches the user id into WS-USERID.
WS-LENGTH is the length of WS-AREA, and WS-AREA holds the information that needs to
be passed to the LINK/XCTL program. VS COBOL 2 allows reentrant programs, so
whatever is achieved using LINK or XCTL can also be achieved with the COBOL CALL
statement. The called programs must, however, be link-edited with the main program,
so the size of the load module is larger in this case.
Prefer XCTL whenever possible, as it involves the least overhead. If that is not
possible, then choose between LINK and CALL based on the sub-program's size and how
often it is likely to change. If the sub-program is small and used by only a few
programs, go for CALL: it won't increase the load module size much, and the
application will run faster. If the sub-program is expected to change often, remember
that with CALL the calling program must also be recompiled for every change, whereas
with LINK recompiling the linked program alone is enough. Called CICS programs need
not be registered in the PPT, whereas LINK and XCTL programs must be.
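For illustration, a minimal sketch of passing data to a sub-program through LINK or XCTL (the program name SUBPGM1 and the working-storage names are hypothetical, not from this document):

 01 WS-AREA    PIC X(100).
 01 WS-LENGTH  PIC S9(04) COMP VALUE +100.
*    LINK: control passes down one logical level and returns here afterwards
     EXEC CICS LINK PROGRAM('SUBPGM1')
          COMMAREA(WS-AREA)
          LENGTH(WS-LENGTH)
     END-EXEC.
*    XCTL: control transfers at the same level and does not return
     EXEC CICS XCTL PROGRAM('SUBPGM1')
          COMMAREA(WS-AREA)
          LENGTH(WS-LENGTH)
     END-EXEC.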
LOAD and RELEASE
LOAD is used to load a program or table that has been compiled or assembled,
link-edited and registered in the PPT. It is mostly used for loading assembler tables.
EXEC CICS LOAD PROGRAM(PGM1) SET(ADDRESS OF LK-ITEM)
     LENGTH(WS-LENGTH)
END-EXEC
Program PGM1 is loaded, and the address of the program or table is mapped to LK-ITEM,
so the table can be accessed through the linkage area LK-ITEM. The size of the linkage
item is WS-LENGTH. If the HOLD keyword is coded on the LOAD command, the loaded
program or table remains permanently resident until explicitly released. If HOLD is
not coded, termination of the task releases the program or table.
EXEC CICS RELEASE PROGRAM(PGM1) END-EXEC is used to release the
program or table.
The above check is done in COBOL, and this kind of check has to be repeated after
every RECEIVE command to route the flow properly. CICS provides its own routing,
based on the key pressed, through the HANDLE AID command. It is effective throughout
the program and reduces code redundancy; the routing is invoked automatically on
every RECEIVE.
EXEC CICS HANDLE AID
DFHENTER(PARA-1)
DFHPF1(PARA-2)
ANYKEY(PARA-3)
END-EXEC.
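For comparison, the hand-coded COBOL check that HANDLE AID replaces usually looks like the sketch below (the paragraph names follow the example above; the attention-identifier constants come from the DFHAID copybook):

     COPY DFHAID.
     ...
     EVALUATE EIBAID
         WHEN DFHENTER PERFORM PARA-1
         WHEN DFHPF1   PERFORM PARA-2
         WHEN OTHER    PERFORM PARA-3
     END-EVALUATE.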
HANDLE CONDITION and NOHANDLE
Every CICS command has its own list of exceptional conditions; one example is the
MAPFAIL exception of the RECEIVE command. HANDLE CONDITION is used to transfer
control to the proper paragraph when an expected exception occurs. HANDLE CONDITION
cannot trap program-interrupt ABENDs such as S0C4 or S0C7; it deals only with
exceptions of CICS commands.
The conditions handled are effective from where the command appears to the end of the
program. One HANDLE CONDITION can be overridden by another. The main program's
conditions are not effective in sub-programs.
EXEC CICS HANDLE CONDITION
MAPFAIL(PARA-1)
PGMIDERR(PARA-2)
LENGERR(PARA-3)
ERROR(PARA-X)
END-EXEC.
=> Any error condition other than MAPFAIL, PGMIDERR and LENGERR will transfer
control to PARA-X.
If you want to reset the diversion set up by a previous HANDLE CONDITION, code the
condition name but don't mention a paragraph name. A maximum of 12 conditions can be
coded in one HANDLE CONDITION statement.
If you want to deactivate all the conditions for a particular CICS command, code
NOHANDLE on that command. There is a possibility of an infinite loop when a CICS
command in the exception routine of the HANDLE CONDITION ends with the same
exception. In the above example, if any CICS command coded in PARA-3 raises LENGERR,
control again comes to PARA-3, forming an infinite loop. In such cases NOHANDLE is
useful (on the CICS commands in the exception routine).
IGNORE CONDITION.
The syntax is the same as HANDLE CONDITION, but it causes no action to be taken if
the specified condition occurs in the program. Control is returned to the instruction
following the command that encountered the exceptional condition. It is effective
from the place where it appears to the end of the program, or until a HANDLE
CONDITION overrides it.
A maximum of 12 conditions can be coded in one IGNORE CONDITION statement.
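A minimal sketch (the conditions chosen are only examples):

     EXEC CICS IGNORE CONDITION
          MAPFAIL
          LENGERR
     END-EXEC.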
RESP
Like the file-status check in a batch COBOL program or the SQLCODE check in a DB2
program, the success or failure of a CICS command can be checked with ordinary COBOL
statements, which gives the program a more structured look.
To achieve this, code RESP on the CICS command. CICS places the result into the
variable coded in RESP; it can be compared against DFHRESP(NORMAL) or
DFHRESP(condition) to control the flow following the command.
EXEC CICS RECEIVE MAP(A) MAPSET(B) RESP(WS-RCODE) END-EXEC
EVALUATE WS-RCODE
WHEN DFHRESP(NORMAL) PERFORM PROCESS-PARA
WHEN DFHRESP(MAPFAIL) PERFORM PARA-1
WHEN OTHER PERFORM PARA-X
END-EVALUATE.
WS-RCODE should be defined as a full-word binary item in working storage. If you
code RESP, then HANDLE CONDITION is not effective for that CICS command.
PUSH and POP.
They are used to suspend and reactivate, respectively, all the HANDLE CONDITION
requests currently in effect. In real-life programming you may want to deactivate all
the active handle conditions and diversions for a subroutine (section) embedded in
the program; this is done with PUSH HANDLE. When you come out of that section, you
can reactivate all the pushed conditions with POP HANDLE.
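A minimal sketch of the pattern (the routine name is illustrative):

*    Handle conditions of the caller are suspended while the common routine runs
     EXEC CICS PUSH HANDLE END-EXEC.
     PERFORM COMMON-ROUTINE.
     EXEC CICS POP HANDLE END-EXEC.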
HANDLE ABEND
HANDLE CONDITION command intercepts only abnormal conditions of the
CICS command execution whereas HANDLE ABEND takes care of any abnormal
termination within the program.
EXEC CICS HANDLE ABEND
LABEL(ABEND-ROUTINE)|RESET|CANCEL
END-EXEC.
LABEL is to activate an exit. CANCEL is used to cancel the previously
established HANDLE ABEND request. RESET is to reactivate the previously cancelled
HANDLE ABEND request.
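A minimal usage sketch (ABEND-ROUTINE is an illustrative paragraph name); cancelling the handler inside the routine is a common way to avoid the routine being re-driven if it ABENDs itself:

     EXEC CICS HANDLE ABEND LABEL(ABEND-ROUTINE) END-EXEC.
     ...
 ABEND-ROUTINE.
*    Prevent recursive entry into this routine
     EXEC CICS HANDLE ABEND CANCEL END-EXEC.
*    Clean-up, messages, SYNCPOINT ROLLBACK, etc. would go here.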
USER-ABEND
In batch COBOL, user ABENDs can be raised by calling the assembler routine ILBOABN0
with AB-CODE, where AB-CODE is a half-word binary working storage field.
The following command is used to throw user ABENDS in CICS.
EXEC CICS ABEND ABCODE(9999)
END-EXEC is used to throw user ABEND 9999.
FILE-HANDLING.
CICS supports only VSAM and BDAM files. All the files that you want to use in your
CICS application should be registered in the FCT with their complete attributes.
CICS commands refer to the FCT entry name when operating on the file.
As all the files are already declared and defined in tables, coding the FILE-CONTROL
paragraph of the ENVIRONMENT DIVISION or the FILE SECTION of the DATA DIVISION is
meaningless and not required. Thus CICS frees the application program from any
data-dependent coding.
The unit of I/O during a READ is one control interval. So even if you read a single
record into your program, the complete control interval is read into the main memory
buffer. A large control interval size is preferred for sequential processing and a
small one for random processing.
The file should be open before an I/O command is issued against it. The status of the
file can be queried using the master transaction CEMT; it can be OPENED, CLOSED,
ENABLED or DISABLED, and explicit opening or closing can also be done with CEMT.
But this needs human intervention.
CICS version 1.7 introduced automatic opening of a file if it is not open at the time
of access. It is still better to close the file when you no longer need it.
DFOC (Dynamic File Open/Close) can be done in the program using the SET command
or by linking to DFHEMTP.
DFOC BY LINK COMMAND
The application program links the master terminal program (DFHEMTP) and
passes CEMT parameter data required for the file open/close through COMMAREA.
On receiving the data, the DFHEMTP program opens/closes the specified files, after
which control is returned to the application program.
EXEC CICS LINK PROGRAM(DFHEMTP)
COMMAREA(WS-COMMAREA)
LENGTH(WS-LENGTH)
END-EXEC
The file open command is written into WS-COMMAREA and its length is
populated in WS-LENGTH.
File Open Command: SET DATASET(FCT-NAME) OPENED
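A minimal sketch of preparing the COMMAREA before the LINK (the working-storage names are illustrative; the command string mirrors the example above):

     MOVE 'SET DATASET(FCTNAME) OPENED' TO WS-COMMAREA.
     MOVE LENGTH OF WS-COMMAREA TO WS-LENGTH.
     EXEC CICS LINK PROGRAM('DFHEMTP')
          COMMAREA(WS-COMMAREA)
          LENGTH(WS-LENGTH)
     END-EXEC.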
DFOC BY SET COMMAND
EXEC CICS SET
DATASET(name) | FILE(name)
OPEN|CLOSED
END-EXEC
DFOC BY BATCH JOB
DFOC can be achieved from a batch job, which issues the CEMT command as data on the
JES modify (F) command against the running CICS region.
// F cicsjob, CEMT SET DATASET(FCTNAME) OPENED
RANDOM ACCESS OF FILES
READ:
EXEC CICS READ FILE(FCT-NAME) [UPDATE]              (S1)
     INTO(WS-ITEM) | SET(pointer)                   (S2)
     LENGTH(WS-LENGTH)                              (S3)
     RIDFLD(WS-KEY)                                 (S4)
     KEYLENGTH(WS-KEY-LENGTH) GENERIC               (S5)
     SYSID(SYSTEM-NAME)                             (S6)
     GTEQ | EQUAL                                   (S7)
END-EXEC.
S1.
Reads the file; if an update is intended, code the UPDATE option. This acquires
exclusive access over the complete control interval in which the record exists for
the duration of the READ, which is how CICS ensures data integrity.
S2.
Read the file into the working storage variable WS-ITEM. Alternatively, the address
of the record in the input/output area can be mapped to a linkage section variable
by using the SET option. Performance-wise, SET is better than READ INTO. VS COBOL2
supports the ADDRESS OF keyword, which makes the code easier; in earlier COBOL
versions, a PARM list and SERVICE RELOAD had to be used to achieve the same.
S3.
After the READ, the length of the record read is placed in WS-LENGTH, a half-word
binary working storage item.
S4.
Key of the record to be read is moved into WS-KEY, a working storage item
and a part of record structure (WS-ITEM).
S5.
If a partial key is used, then the length of the partial key should be moved to
WS-KEY-LENGTH and the keyword GENERIC should be coded. This is optional for a full-key read.
S6.
The remote system name to which the request is to be directed is coded here (1-4
character name).
S7.
EQUAL is default. GTEQ can be coded when you know the full key, but you are
not sure the record with that key exists in the file.
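Putting the options together, a concrete sketch of a random read for update (the file, key and record names are illustrative):

     EXEC CICS READ FILE('CUSTFILE')
          INTO(WS-ITEM)
          LENGTH(WS-LENGTH)
          RIDFLD(WS-KEY)
          UPDATE
          RESP(WS-RCODE)
     END-EXEC.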
REWRITE
EXEC CICS REWRITE FILE(FCT-NAME)
FROM(WS-ITEM)
LENGTH(WS-LENGTH)
SYSID(SYSTEM-NAME)
END-EXEC.
To release the exclusive access acquired during a READ with UPDATE, the record should
be rewritten using the above syntax. The parameters are self-explanatory.
After you read a record, if you decide you don't want to update it after all, inform
CICS by issuing the UNLOCK command so that CICS releases the lock you acquired on
the record.
EXEC CICS UNLOCK FILE(FCT-NAME) END-EXEC.
WRITE
The syntax of WRITE is the same as REWRITE, with the addition of the RIDFLD parameter
carrying the key value (like S4 of the READ command).
When you want to add a set of records whose keys are in ascending order,
then qualify your first WRITE with another parameter MASSINSERT. This will get
exclusive control over the file and provide high performance during the mass insert.
If you use MASSINSERT, you should issue UNLOCK to signal the completion of your
additions; CICS releases the file on UNLOCK.
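A minimal sketch of the pattern (file and field names are illustrative); the WRITE would normally sit inside a loop that fills WS-ITEM and WS-KEY in ascending key order:

     EXEC CICS WRITE FILE('CUSTFILE')
          FROM(WS-ITEM)
          LENGTH(WS-LENGTH)
          RIDFLD(WS-KEY)
          MASSINSERT
     END-EXEC.
     ...
     EXEC CICS UNLOCK FILE('CUSTFILE') END-EXEC.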
DELETE
1.The record read using READ with UPDATE can be deleted using
EXEC CICS DELETE DATASET(FCT-NAME) END-EXEC.
2.The record in the file can directly be deleted by providing complete key.
EXEC CICS DELETE DATASET(FCT-NAME) RIDFLD(WS-KEY) END-EXEC.
3. A group of records can be deleted using a partial key. After the deletion, the
number of records deleted is placed in the variable coded in the NUMREC parameter
(WS-DEL-COUNT).
EXEC CICS DELETE DATASET(FCT-NAME)
RIDFLD(WS-KEY)
KEYLENGTH(WS-KEY-LENGTH) GENERIC
NUMREC(WS-DEL-COUNT)
END-EXEC
SEQUENTIAL READ:
Sequential access of VSAM file under CICS is called Browsing. It has FIVE
Commands associated with it.
STARTBR.
It establishes a position to start browsing.
EXEC CICS STARTBR FILE(FCT-NAME) RIDFLD(WS-KEY)
KEYLENGTH(WS-KEY-LENGTH) GENERIC
REQID(Integer-value)
SYSID(SYSTEM-NAME)
GTEQ | EQUAL
END-EXEC.
The meaning of the parameters is the same as in the READ explanation above. When you
want to carry out multiple browses concurrently over the same file, use REQID: the
first STARTBR can be coded with REQID(1) and the second with REQID(2).
One browse operation occupies one VSAM string. If all the strings of a VSAM file are
exhausted, other transactions have to wait for one of the strings to become free, so
multiple concurrent browsing is not recommended. Instead, once one browse is
completed, use RESETBR to set the position to another place and continue browsing.
The syntax of RESETBR is the same as STARTBR.
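For orientation, a minimal browse loop using the remaining browse commands, READNEXT and ENDBR (file and field names are illustrative):

 01 WS-BROWSE-FLAG  PIC X VALUE 'N'.
    88 BROWSE-DONE  VALUE 'Y'.
     ...
     EXEC CICS STARTBR FILE('CUSTFILE') RIDFLD(WS-KEY) GTEQ END-EXEC.
     PERFORM UNTIL BROWSE-DONE
         EXEC CICS READNEXT FILE('CUSTFILE')
              INTO(WS-ITEM)
              LENGTH(WS-LENGTH)
              RIDFLD(WS-KEY)
              RESP(WS-RCODE)
         END-EXEC
*        Stop at end of file or on any exception
         IF WS-RCODE NOT = DFHRESP(NORMAL)
             SET BROWSE-DONE TO TRUE
         END-IF
     END-PERFORM.
     EXEC CICS ENDBR FILE('CUSTFILE') END-EXEC.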
ESDS-WRITE
The syntax is the same as the KSDS WRITE command; qualify the WRITE with the RBA
option. The record is appended to the file, and the RBA of the record is placed into
the RIDFLD field mentioned in the WRITE command (ESDS-RBA).
MOVE 225 TO WS-LENGTH
EXEC CICS WRITE FILE(FCT-NAME) RIDFLD(ESDS-RBA) LENGTH(WS-LENGTH) RBA
END-EXEC.
ESDS-RANDOM ACCESS
If you know the RBA value of the record you want to access, then you can access the
ESDS file randomly. The syntax is the same as the READ of a KSDS file, but qualify it
with the RBA option; RIDFLD should point to a full-word binary item pre-filled with
the RBA of the record to be accessed.
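A minimal sketch (the file name is illustrative; ESDS-RBA is a full-word binary item already holding the record's RBA):

     EXEC CICS READ FILE('ESDSFILE')
          INTO(WS-ITEM)
          LENGTH(WS-LENGTH)
          RIDFLD(ESDS-RBA)
          RBA
     END-EXEC.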
RRDS ACCESS
An RRDS file can be accessed by coding RRN in place of RBA and populating RIDFLD
with the relative record number. The field should be full-word binary.
Alternate Index access
Register the path in the FCT and use the path name as the file name to read the file
randomly via the alternate index. Please refer to the VSAM section of the book to
understand what exactly a PATH is.
As the alternate key need not be unique, the DUPKEY condition is very common during
alternate index usage, so it has to be handled properly.
ASKTIME
EIBDATE and EIBTIME have the values at task initiation time. Upon the
completion of ASKTIME, these two fields are populated with current date and time.
EXEC CICS ASKTIME END-EXEC.
FORMATTIME
FORMATTIME is used to receive the information of date and time in various formats.
EXEC CICS FORMATTIME FORMAT-TYPE(data-area) END-EXEC
Format-type: YYDDD, YYMMDD, YYDDMM, MMDDYY, DDMMYY, DAYOFWEEK,
DAYOFMONTH, MONTHOFYEAR, YEAR, TIME, TIMESEP, and
DATESEP.
Example:
EXEC CICS
FORMATTIME MMDDYY(WS-DATE) DATESEP TIME(WS-TIME) TIMESEP
END-EXEC
WS-DATE contains the current date in MM/DD/YY (Default DATESEP is /)
WS-TIME contains the current time in HH:MM:SS (Default TIMESEP is :)
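Note that on later CICS releases FORMATTIME is normally driven from an ABSTIME value obtained with ASKTIME, roughly as in the sketch below (field names are illustrative; WS-ABSTIME is an 8-byte packed-decimal item):

 01 WS-ABSTIME  PIC S9(15) COMP-3.
     ...
     EXEC CICS ASKTIME ABSTIME(WS-ABSTIME) END-EXEC.
     EXEC CICS FORMATTIME ABSTIME(WS-ABSTIME)
          MMDDYY(WS-DATE) DATESEP('/')
          TIME(WS-TIME)   TIMESEP(':')
     END-EXEC.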
DELAY and SUSPEND
DELAY is used to delay the processing of a task for a specified time interval or
until a specified time. If your task is doing heavy CPU-bound work, it is good
practice to place a DELAY command so that other tasks can proceed.
EXEC CICS DELAY INTERVAL(HHMMSS) | TIME(HHMMSS) END-EXEC.
SUSPEND is used to suspend a task. During the execution of the command,
the task will be suspended and the control will be given to other tasks with higher
priority. As soon as all higher priority tasks have been executed, control will be
returned to the suspended task.
EXEC CICS SUSPEND END-EXEC.
POST and WAIT EVENT
POST is used to request notification when a specified time has expired, and the
WAIT EVENT command is used to wait for the event to occur. Usually these two
commands are used as a pair, as an alternative to the DELAY command.
EXEC CICS POST INTERVAL(HHMMSS) | TIME(HHMMSS)
     SET(ADDRESS OF POST-ECB)
END-EXEC.
EXEC CICS WAIT EVENT ECADDR(ADDRESS OF POST-ECB)
END-EXEC.
CANCEL
It is used to cancel the interval control commands such as DELAY, POST and
START which have been issued. The interval commands to be cancelled are identified
using REQID.
EXEC CICS START REQID(STARTS) END-EXEC.
EXEC CICS CANCEL REQID(STARTS) END-EXEC.
During program processing, if you want to roll back the changes you have made, you
can code SYNCPOINT with ROLLBACK. This backs out all resource modifications made
since the last sync point.
EXEC CICS SYNCPOINT ROLLBACK END-EXEC.
SYNCPOINT in a CICS-DB2 environment is handled as a two-phase commit: the changes
made to resources directly under the control of CICS are committed in the first
phase, and the changes made in the DB2 environment are committed in the second phase.
Queues
There are two types of queues available in CICS.
1.Temporary Storage Queue (TSQ)
2.Transient Data Queue (TDQ)
Their properties differ in how the queues are defined, how records are read, and how
recovery is handled.
TSQs are preferred over TDQs for data passing. TDQs are used for batch interfaces or
where an ATI (automatic task initiation) requirement exists.
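For reference, a minimal sketch of writing to, reading from and deleting a TSQ (queue and field names are illustrative):

 01 WS-QNAME     PIC X(08) VALUE 'MYTSQ001'.
 01 WS-DATA      PIC X(100).
 01 WS-LEN       PIC S9(04) COMP VALUE +100.
 01 WS-ITEM-NUM  PIC S9(04) COMP VALUE +1.
     ...
     EXEC CICS WRITEQ TS QUEUE(WS-QNAME)
          FROM(WS-DATA) LENGTH(WS-LEN)
     END-EXEC.
     EXEC CICS READQ TS QUEUE(WS-QNAME)
          INTO(WS-DATA) LENGTH(WS-LEN) ITEM(WS-ITEM-NUM)
     END-EXEC.
     EXEC CICS DELETEQ TS QUEUE(WS-QNAME) END-EXEC.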
ABEND once you have identified the statement. If you cannot, then check the value of
the field at the time of the ABEND as follows.
Finding the value of the field in error.
1. Find the Base Locator (BL), displacement and length of the field from the MAP of
the compilation listing.
2. Find the register number corresponding to the base locator from the bottom of the
compilation listing.
3. Find the address held in this register in the dump.
4. Add the displacement to the address in the register to get the address of the field.
5. Locate that address in the dump to get the value of the field.
FCT
All the VSAM files, the PATHs between base clusters and alternate indexes, and any
BDAM datasets are to be registered in the FCT in order to access them from the
application program.
DFHFCT TYPE=DATASET | FILE,
       ACCMETH=BDAM | VSAM,
       DATASET=name | FILE=name,
       SERVREQ=(ADD, BROWSE, DELETE, READ, UPDATE),
       [FILESTAT=(ENABLED|DISABLED, OPENED|CLOSED)],
       [BUFND=n+1],
       [BUFNI=n],
       [STRNO=n],
       [DSNAME=name],
       [DISP=OLD | SHR],
       [BASE=name],
PPT
All the CICS application programs and BMS map sets must be registered in
PPT. If the program is not registered here, then the program is unrecognizable to
CICS.
DFHPPT TYPE=ENTRY,
       PROGRAM=name | MAPSET=name,
       [PGMLANG=(ASSEMBLER | COBOL | PLI)],
       other options...
ASSEMBLER is the default program language. All map-sets are considered to be written
in assembler.
PCT
The control information of all CICS transactions must be registered in
program control table (PCT).
DFHPCT
TYPE=ENTRY,
TRANSID=name,
TASKREQ=xxxx,
PROGRAM=name,
[DTIMOUT=mmss],
[RTIMOUT=mmss],
[RESTART= NO | YES],
[TRANSEC=1 | DECIMAL],
[DUMP=YES|NO],
Other options..
TRANSID - Transaction identifier name (1-4 characters).
PROGRAM - Name of the program associated with the transaction.
TASKREQ - Key associated with transaction initiation (PF1-PF24, PA1-PA3).
DTIMOUT - Defines the timeout (waiting time) in case of deadlock.
RTIMOUT - Defines the time limit within which the user has to respond; if the user
does not respond in time, the task is cancelled.
RESTART - Whether automatic transaction restart is applied after the transaction is
recovered from an abnormal termination.
TRANSEC - Transaction security code (1-64).
DUMP - Whether a dump is to be taken in case of abnormal termination of the
transaction.
TCT
All the terminals that are to be under CICS control should be registered in the TCT.
The Terminal Control Program uses this table to identify the terminals and performs
all input/output operations against them.
DFHTCT
TYPE=ENTRY,
ACCMETH=VTAM,
TRMIDNT=name,
TRMTYPE=type,
FEATURE=(UCTRAN,..)
Options..
TRMIDNT is the terminal ID (1-4 characters).
TRMTYPE is the type of terminal, e.g. IBM3270.
FEATURE indicates the terminal services CICS offers; UCTRAN is used to translate all
lowercase characters to uppercase.
DCT
OPEN=INITIAL means the file will be opened at CICS start-up time, while DEFERRED
means the file remains closed until it is specifically opened by CEMT.
DSCNAME defines the data control block; for each DSCNAME a corresponding DFHDCT
entry must be made with TYPE=SDSCI and the same DSCNAME, which in effect gives the
DDNAME of the extra-partition dataset in the JCL of the CICS job itself.
TYPFILE indicates whether the file is INPUT, OUTPUT or READ BACKWARD (RDBACK).
How to submit JCL from a CICS program?
JCL can be submitted using the SPOOLOPEN, SPOOLWRITE and SPOOLCLOSE
commands.
NODE('*') specifies the destination node to send the JCL to. USERID is set to the
name of the internal reader, INTRDR. A unique token is allocated by CICS when the
SPOOLOPEN is executed and placed in the field you specify with TOKEN(report_token);
this 8-byte token is then used on the subsequent spool commands.
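Based on the parameters described above, the open itself looks roughly like the sketch below (the field names follow the conventions of the later examples):

     EXEC CICS SPOOLOPEN OUTPUT
          NODE('*')
          USERID('INTRDR')
          TOKEN(report_token)
          RESP(response_field)
     END-EXEC.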
To write each line of the job to the spool use the SPOOLWRITE command.
EXEC CICS
     SPOOLWRITE FROM(io_area) TOKEN(report_token)
     RESP(response_field)
END-EXEC.
The "io_area" field should be the name of a data item containing the line of
JCL. The "report_token" field should be the same as the 8 byte token returned from
SPOOLOPEN. An end of job statement ('//' or '/*EOF') should be written as the last
line.
Finally, you must close the spool using SPOOLCLOSE. (If you do not explicitly close
the spool, it is closed when the transaction terminates; however, it is good practice
to close it explicitly.)
EXEC CICS
SPOOLCLOSE TOKEN(report_token) RESP(response_field)
END-EXEC.
Again the "report_token" field must be the same as the one allocated at
SPOOLOPEN.
Notes:
The RESP option should be coded on all of these commands, and it is recommended that
you also code RESP2, because additional information is returned in this field for
some exception conditions.
DCT entries and JCL DD statements are not needed when using this method.
Note that in order to use this method DFHSIT SPOOL=YES must be coded in
the CICS System Initialization Table. Check with your friendly local SysProg if
you are unsure.
Under OS/VS COBOL the SPOOLWRITE command had to have FLENGTH
specified. FLENGTH specifies the length of the data being written in a fullword
binary field. This field is optional in newer versions, if it is omitted then the
length is assumed to be the length of the data item (io-area) specified in
FROM.
Be aware of the performance considerations for writing to the spool, IBM says
"Transactions that process SYSOUT data sets larger than 1000 records, either for
INPUT or for OUTPUT, are likely to have a performance impact on the rest of CICS".
(Source: CICS/ESA V4R1 Application Programming Reference SC33-1170-00)